Comic Characters With Stable Diffusion SDXL
- Published: January 1, 2025
- In this comprehensive tutorial, learn how to harness the power of Stable Diffusion AI to produce stunning and visually consistent comic book characters. Whether you're a seasoned artist or just starting, I’ll guide you through the step-by-step process of generating characters that maintain a consistent style from image to image.
You’ll learn how to prepare custom character datasets, a crucial step in creating your own Stable Diffusion AI model for comic book character generation.
Discover valuable tips, techniques, and tools to elevate your comic book artistry.
Want to advance your AI animation skills? Check out my Patreon:
/ sebastiantorresvfx
www.sebastianto...
Install Stable Diffusion: • Stable Diffusion In Mi...
Consistent faces: • Consistent Faces in St...
Links from the Video:
SDXL Models: civitai.com/
Random Name Generator: www.behindthen...
This video is a gem, really. I'm so sick and tired of most tutorials being so long and complicated. Truly, your explanations made me learn. Thank you, for real. We need more! ❤
I have more coming soon. It's been a busy month, unfortunately, but I'm back on track now.
Quick, clear and concise. You are right on point here. The video is a damn gem! ANNNNNDDDDD thanks for being awesome @sebastiantorresvfx
@Mr.Sinister_666, Made my day 😎 good to know I’m doing it right 😄
This tutorial is exactly what I want in tutorials: giving us the information quickly and not being too heavy on the memes. I've happily hit the sub and bell button.
Much appreciated, glad it’s what you were after 😁
Thanks so much for creating these videos, Sebastian. I'm in the early stages of the learning curve in trying to get consistent characters and the kinds of images I need for a graphic novel.

I spent September and October generating images for a different graphic novel, which I published through Amazon KDP, but I did it by generating loads and loads of images and picking only those I could work with. I also spent at least 150 hours fixing problems and deformities, such as hands, eyes, limbs, clothing etc. in nearly every image. I basically brute-forced my way through and didn't get the results I wanted. I published it anyway. The end result was deficient character consistency, not the most dynamic posing, and inadequate interaction between characters.

I cannot go through a process like that again. I need a high degree of character consistency and images that work as generated, requiring little or no redrawing. I have generated a single image of a character with a design I like for the new graphic novel. However, SDXL produces a completely different-looking image every time I click generate, even with the same text prompt. I cannot build a dataset of consistent character images when I cannot even generate a second image that looks like the first. What am I missing? Do you have any idea what I'm doing wrong? Any help or advice would be greatly appreciated. Thanks.
Damn, use of actual names is so smart lol. Previously people had to make models with reference photos to get consistent characters.
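For anyone skimming the comments, the trick from the video is to anchor the character with one fixed, invented full name (hence the random name generator linked above) and keep the descriptors identical in every prompt, varying only pose and setting. A hypothetical example prompt; the name and tags below are made up, not from the video:

```text
comic book style, portrait of Marcus Veldane, male superhero,
short black hair, blue and silver armored suit, determined expression
```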
Love your style and tutorials. Subscribed already.
Thank you 🙂 you’re awesome! Happy to have you onboard.
great tutorial
The tutorial is great. I'm using Midjourney for consistent characters and exploring new styles. But the main issue with AI for me is the jagged line art and proportions. I sketch over the AI art and draw my own line art, adding a unique style.
I've been playing with re-inking after generating. Another method I've found is to upscale the images and inpaint the sections that need sharper line art. I'll then downscale as needed and the quality of the line art will be superior. It's basically how traditional comics are done: downscaling the original art to roughly 65% of its size.
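If you want to script that final downscale step, here's a minimal Python sketch using Pillow; the filenames and PNG format are placeholder assumptions, and the 65% ratio is the traditional-comics figure mentioned above:

```python
from PIL import Image

# Load the upscaled, inpainted page (placeholder filename)
page = Image.open("page_upscaled.png")

# Downscale to roughly 65% of the working size, as in traditional comics;
# LANCZOS resampling keeps the inked line art crisp
ratio = 0.65
target_size = (int(page.width * ratio), int(page.height * ratio))
page.resize(target_size, Image.LANCZOS).save("page_final.png")
```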
still waiting on the 2nd part to this amazing video! great work!
Almost ready 😁
Hi mate, great tutorial. Can you recommend a model/LoRA with a simpler look, like manhua or webtoon? The models I see are mostly for anime illustration.
Thank you
Try Counterfeit-V3.0 from civitai. And for the painted look I'd suggest using the style selector extension and setting it to painting or something of that sort to push the image in that direction.
Great video. If I want to make a consistent character for a pet, how can I do it? Do I still use the Random Name Generator to name the pet?
For pets, depending on your situation, I would suggest either getting a LoRA that's pre-trained on a specific animal, or training your own with photos of just one animal; that way SD won't mix other animals into it.
Unfortunately, when it comes to side characters (and pets) in comics, if they're going to be showing up consistently, then you'll need a way to make sure they come out looking the same, even if only for a couple of panels. LoRAs are your best bet.
Thanks for the tutorial. For me, the main problem is the background. I can't draw comics, for now, because I just can't get the same background (for example, the same classroom or the same street in the city) without using a 3D model. And, in my view, it is vitally necessary to be able to generate the same background from different angles (and at different distances) to draw action scenes in comics. Could you please tell me, if you know, how to solve this problem? How can I get the same background for comics (without a 3D model)?
Unfortunately SD isn't reliable for consistent backgrounds from different angles. My workaround would be to generate the backgrounds and then project them onto some rudimentary 3D geometry. The Archer TV show uses a similar process so they can render out a different angle when needed.
If you're projecting an SD generation onto the 3D model you'll get the same look and have more control. There are also ways to change the lighting and light sources, which can be useful.
so informative - i subbed
Thanks for the sub! Glad you liked it. Good timing, follow up video is coming this week 😁
Thank u so much for sharing
You’re welcome, glad you enjoyed it 😁
Thanks for sharing your knowledge. Good job.
Thank you for watching 🙂
Love the explanations and the wisdom. Would love to see a video where you work through a few panels for a comic strip, also possibly showing how you add the blurbs. I imagine you’d do that in Photoshop, but wondering if there’s a lora or something in stable diffusion that also works for that
As for how to put the pages together we’ll get there for sure.
The word balloons and captions are best done in a photo editor, the best for it being Clip Studio, formerly known as Manga Studio. I love Photoshop, but it's not made for that, whereas Clip Studio is more directed toward comic books. And once a year you can outright buy it for like $50-$60 for a permanent license. Can't say the same for Photoshop 😆
@sebastiantorresvfx Thanks for that.
Could you do a video on how to train on our own artwork? So that the images come out in our specific style? Is that possible?
If you go through the process in the LoRA video you can switch that out for your own art. Just make sure the images are around 1024px or bigger, but don't go too crazy or it will take a while to train.
But yeah, the process is the same no matter what your source images are.
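As a quick sanity check before training, a small script like this (a sketch; the folder path and PNG extension are assumptions about your setup) will flag any dataset image smaller than 1024px on its shortest side:

```python
from pathlib import Path
from PIL import Image

DATASET_DIR = Path("lora_dataset")  # placeholder path to your training images

for img_path in sorted(DATASET_DIR.glob("*.png")):
    with Image.open(img_path) as img:
        if min(img.size) < 1024:
            # Undersized images can soften the LoRA's output detail
            print(f"{img_path.name}: {img.size} is under 1024px, consider replacing it")
```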
Hi, would you mind sharing what video card you are using? Mine is a 1070 Ti 8GB and it takes 3 minutes to generate an image with the same prompt 😪
Hello, I'm using a Gigabyte RTX 3090 Turbo. It's a few years old now but still does the job.
Make sure you have --medvram in the command arguments line of your webui-user.bat, and it might be a good idea to turn off live previews in your A1111 settings. That might give you a slight boost.
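For reference, the relevant line in webui-user.bat would look something like this (a minimal sketch; the rest of the file is left as the installer created it):

```bat
rem webui-user.bat -- launch settings for the A1111 web UI
set COMMANDLINE_ARGS=--medvram
```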
@sebastiantorresvfx It's much better now with --medvram, thanks!
Awesome! Glad to hear it. 🙂
This was so helpful, thank you so much! One quick question: what SDXL style are we using to get that superhero look? It was awesome.
Thank you 😁
The style itself is using the SDXL style selector extension, which you can find in the extensions tab, set to comic. As for the model, it's the Realities Edge Anime XL checkpoint from civitai.
@sebastiantorresvfx Sorry for bothering you, one more question: when it comes to making the LoRA, how many pictures should I generate?
No worries at all; that's a complicated question. Technically you could get away with 15 images, but you run the risk of it not having enough flexibility for what you require later on. I'd say it's probably best to go with something like 30-50 good all-round images to cover yourself.
@sebastiantorresvfx Thank you, once again. Your videos are incredibly helpful and easy to understand.
Which checkpoint were you using? I didn't see it in the video but really liked the output. Your videos have really helped me dive back into Stable Diffusion and catch up. Thanks!
Thank you so much for your message, it means a lot to know it's helping you. I'm using the Realities Edge Anime XL; you can find the direct link to it in the description of my latest video on comic book line art. Have fun 😁
Really great video! So much more straightforward than others lol. Using this process, how might you handle multiple characters? Say, instead of a superhero, I'm working on two brothers and a dog in a fantasy setting. Would you train a LoRA for each character? And then how would you bring something like that together?
I'd prefer to have an individual LoRA for each character and the dog, so I have more consistency with the look and the clothing.
As for combining them in Automatic1111, there are a number of different methods, but it's a little long for a comment to cover. Perhaps a livestream 🙂
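That said, the simplest method to experiment with is stacking the LoRA tags in one prompt using A1111's extra-networks syntax; the LoRA names and weights below are hypothetical, and in practice you'll often get cleaner results inpainting each character separately:

```text
two brothers and a dog in a fantasy forest, comic book style,
<lora:brother_one:0.7> <lora:brother_two:0.7> <lora:family_dog:0.6>
```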
Nice vid, good content
Thank you Dean I appreciate that 😁
Is Automatic1111 handling SDXL properly now? I switched to ComfyUI because it was pretty bad at it.
I believe it is; I've been using SDXL exclusively for the last couple of months. I believe its only shortcoming at the moment is the implementation of ControlNet. It isn't as consistent as it was with 1.5 models, but that might be more to do with the ControlNet models than with Automatic1111. In terms of image quality, though, the potential is definitely greater.
@sebastiantorresvfx Hey, that is good news. Thanks for the fast reply at an ungodly hour :)
I guess that depends on where you are in the world 😂
Amazing! Waiting on the next video, sir Torres. Do you know how to create low-file-size LoRAs (possibly with faster training)?
Wait no more, just went live.
Network rank and network alpha will keep the files smaller if you choose a lower value. As for training times 😬 it can take a couple of hours depending on the number of images in your dataset.
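For anyone training with the kohya_ss sd-scripts, the flags in question look roughly like this; the paths and values are placeholders, and lower dim/alpha values produce a smaller .safetensors file at some cost in capacity:

```bat
accelerate launch sdxl_train_network.py ^
  --pretrained_model_name_or_path="path\to\sdxl_checkpoint.safetensors" ^
  --train_data_dir="lora_dataset" --output_dir="output" ^
  --network_module=networks.lora ^
  --network_dim=16 --network_alpha=8
```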
@sebastiantorresvfx WOOH :D
Love your channel! ❤
Thank you for creating this tutorial. It will be great if you could also show us how to create TWO or more consistent characters in the SAME scene. I am looking forward to it. Thanks again for the great work.
Thanks! Straight to the point!
Glad to see you back Daniel. 😁
@sebastiantorresvfx I released a new tutorial and a node workflow on civitai.
Taking a couple days to play on stable, I’ll check it out 😃
Very informative video! I love the Star Wars-style crawl at 2:04 that you added to the prompts lol
Lol, it only took a month for someone to mention the Star Wars crawl 😂😂 I got a good chuckle making it, so I refused to cut it 😂
How about ComfyUI?
Another great video. How can we help getting you more subscribers?
You're awesome! Share them on any forums, groups and Discords where you think the videos could be helpful. Unfortunately I've never been good at keeping up with forums. Definitely something I need to get on board with.
Perhaps I should do live videos too? Only thing keeping me from doing that so far is that I like the fast pace of the videos. Can’t really do that on a live video.
@sebastiantorresvfx Find out the common problems, like the repeatability issue, and solve them too.
👋
I'm looking to streamline my workflow. I.....as a REAL ARTIST, who actually uses real graphite and bristol board (I name those since I doubt a 'Prompt Jockey' would know what real art supplies are).....need a tool to allow me to make MY own art faster, trained on MY own art.
(The above statement is for all the anti-AI artists out there)
🤣 Love that comment. Unfortunately you can yell that from the highest mountain and the anti-AI mob will still think you're lying. I'm a trained artist, photographer, filmmaker and VFX artist. As soon as I picked up AI, all of that went out the window apparently 😝
Your best option will be to train a LoRA on your own art if that's what you're interested in doing: one on the characters and one on backgrounds. You'll still need to do some compositing to bring it all together, but it will definitely speed up your workflow. You'll just need to do a bunch of touch-ups to fix any errors the AI made.
@sebastiantorresvfx I totally agree! I have several real and "real" artist friends (beginners, even after 20 years of casual practice) who still show their fangs every time I mention AI tools. The irony is, one artist I mentored to be self-sufficient selling at anime and furry cons; she's talented and would have got there, but I think I accelerated her by 5 years, since I have a car and she was 19 at that time (sorry for the side story).
Yes, my Nikon film bodies, my Kiev medium format, my f2.8 and 1.8 lenses are ignored. My old real sable brushes (from before the ban on cruelty to sable pelts), my Winsor & Newton, Liquitex, vellum and real (old) xylene markers are just for show. 🤣🤣🤣
I’m glad I found your vid. Thanks. I’m glad I’m not alone in my belief as a heretic that will be burned at the stake 😂
Have you looked into the new Toon Craft, I think that's the name? AI animation? That's the ultimate goal.
(Irony: my artist friend attended LightBox in California and inquired with studios, shopping around her IP. AI animation would cure that effort. It's not like any indie-produced idea could hit gold (sarcasm: South Park, RWBY, The Simpsons, Voices of a Distant Star).)
inkreadible
I see what you did there 😂
I get this error:

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(1, 4096, 1, 512) (torch.float32)
    key : shape=(1, 4096, 1, 512) (torch.float32)
    value : shape=(1, 4096, 1, 512) (torch.float32)
    attn_bias :
    p : 0.0
`cutlassF` is not supported because: device=cpu (supported: {'cuda'}). Operator wasn't built - see `python -m xformers.info` for more info.
`flshattF` is not supported because: device=cpu (supported: {'cuda'}); dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}); max(query.shape[-1] != value.shape[-1]) > 128. Operator wasn't built - see `python -m xformers.info` for more info.
`tritonflashattF` is not supported because: device=cpu (supported: {'cuda'}); dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}); max(query.shape[-1] != value.shape[-1]) > 128. Operator wasn't built - see `python -m xformers.info` for more info. triton is not available.
`smallkF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 32. Operator wasn't built - see `python -m xformers.info` for more info. Unsupported embed per head: 512.

I guess the reason is that I am using a laptop with no GPU. Is there any way I can fix it using my existing potato? I have googled and tried a bunch of tricks, and I keep the resolution at 512 x 512 with the DDIM sampling method (it seems the fastest), but I still can't generate my first artwork.
Hey Jeffrey, without knowing your specs it'll be difficult to say. But if you have an Nvidia GPU, make sure you have the right CUDA software installed; I believe the latest is 11.8.
Also make sure you have the latest versions of torch and xformers installed. You can install xformers automatically by adding "--xformers" to the command arguments in your webui-user.bat.
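Before chasing the flag, it's worth confirming that torch can actually see a CUDA GPU, since the error above says the xformers operators only ship CUDA kernels. A minimal check, run inside the web UI's Python environment:

```python
import torch

# --xformers only helps when a CUDA device is present; on CPU-only
# machines memory_efficient_attention has no supported backend
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```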
I had already installed the latest pip, xformers and torch versions but still got the same result. I solved it by temporarily removing the --xformers flag.
@sebastiantorresvfx Is the only impact slower generation?
You need a GPU to run a local model of SD. Integrated laptop graphics just won't cut it.
Try looking into Stable Horde. It's kind of like a peer-to-peer compute network: people with higher-powered cards donate their cards' downtime to other users without the hardware to run SD.
It uses a credit system and has a pretty good community willing to help teach people.