Comic Characters With Stable Diffusion SDXL

  • Published: 1 Jan 2025
  • In this comprehensive tutorial, learn how to harness the power of Stable Diffusion AI to produce stunning and visually consistent comic book characters. Whether you're a seasoned artist or just starting, I’ll guide you through the step-by-step process of generating characters that maintain a consistent style from image to image.
    You’ll learn how to prepare custom character datasets, a crucial step in creating your own Stable Diffusion AI model for comic book character generation.
    Discover valuable tips, techniques, and tools to elevate your comic book artistry.
    Want to advance your AI animation skills? Check out my Patreon:
    / sebastiantorresvfx
    www.sebastianto...
    Install Stable Diffusion: • Stable Diffusion In Mi...
    Consistent faces : • Consistent Faces in St...
    Links from the Video:
    SDXL Models: civitai.com/
    Random Name Generator: www.behindthen...

Comments • 77

  • @kanavwastaken · 1 year ago · +9

    This video is a gem, really. I'm so sick and tired of most tutorials being so long and complicated. Truly, your explanations made me learn. Thank you, for real. We need more! ❤

    • @sebastiantorresvfx · 1 year ago

      I have more coming soon, it’s been a busy month unfortunately but I’m back on track now.

    • @CScottSmith · 1 year ago · +1

      Quick, clear and concise. You are right on point here. The video is a damn gem! ANNNNNDDDDD thanks for being awesome, @sebastiantorresvfx

    • @sebastiantorresvfx · 1 year ago

      @Mr.Sinister_666, Made my day 😎 good to know I’m doing it right 😄

  • @teamozOFFICIAL · 1 year ago · +3

    This tutorial is exactly what I want in tutorials: giving us the information quickly and not being too heavy on the memes. I've happily hit the sub and bell button.

  • @kenny_numbers · 1 year ago

    Thanks so much for creating these videos, Sebastian. I'm in the early stages of the learning curve in trying to get consistent characters and the kinds of images I need for a graphic novel. I spent September and October generating images for a different graphic novel, which I published through Amazon KDP, but I did it by generating loads and loads of images and picking only those I could work with. I also spent at least 150 hours fixing problems and deformities (hands, eyes, limbs, clothing, etc.) in nearly every image. I basically brute-forced my way through, didn't get the results I wanted, and published it anyway. The end result was deficient character consistency, not the most dynamic posing, and inadequate interaction between characters. I cannot go through a process like that again; I need a high degree of character consistency and images that work as generated, requiring little or no redrawing. I have generated a single image of a character with a design I like for the new graphic novel. However, SDXL produces a completely different-looking image every time I click generate, even with the same text prompt. I cannot build a dataset of consistent character images when I cannot even generate a second image that looks like the first. What am I missing? Do you have any idea what I'm doing wrong? Any help or advice would be greatly appreciated. Thanks.

  • @shallmow · 1 year ago · +2

    Damn, the use of actual names is so smart lol. Previously people had to train models on reference photos to get consistent characters.
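
    For readers following along: the trick being referenced is prompting with an invented full name so the model keeps converging on the same face. A minimal illustrative prompt, assuming an SDXL comic-style checkpoint (the name and tags below are made up for the example, not taken from the video):

        comic book illustration, portrait of "Marcus Delaney", male superhero,
        short black hair, blue and silver bodysuit, dynamic pose, clean line art

    Keeping the quoted name identical across generations is what nudges SDXL toward a consistent identity; everything else in the prompt can vary per panel.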

  • @gatotboediman9680 · 1 year ago · +1

    Love your style and tutorials. Subscribed already.

  • @hairy7653 · 1 year ago · +2

    great tutorial

  • @meritorioustechnate9455 · 1 year ago · +2

    The tutorial is great. I'm using Midjourney for consistent characters and exploring new styles. But the main issue with AI for me is the jagged line art and proportions. I sketch over AI art and draw my own line art, adding a unique style.

    • @sebastiantorresvfx · 1 year ago · +1

      I’ve been playing with re-inking after generating. Another method I’ve found is to upscale the images and inpaint the sections that need sharper line art. I’ll then downscale as needed and the quality of the line art will be superior. It’s basically how traditional comics are done: downscaling the original art to roughly 65% of the original artwork size.
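
      As a rough illustration of that downscale step, here is a minimal Python sketch using Pillow; the 65% factor comes from the comment above, while the filenames and the LANCZOS filter choice are assumptions for the example:

          from PIL import Image

          # Load the upscaled, inpainted page (filename is a placeholder)
          page = Image.open("page_upscaled.png")

          # Reduce to roughly 65% of the working size, the way traditional
          # comic art is shrunk for print; LANCZOS keeps line art crisp
          scale = 0.65
          new_size = (int(page.width * scale), int(page.height * scale))
          page.resize(new_size, Image.LANCZOS).save("page_print.png")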

  • @Ronin2079 · 1 year ago · +1

    Still waiting on the 2nd part to this amazing video! Great work!

  • @g-aram1405 · 1 year ago · +1

    Hi mate, great tutorial. Can you recommend a model/LoRA with a simple look like manhua or webtoon? The models I see are mostly for anime illustration.
    Thank you

    • @sebastiantorresvfx · 1 year ago

      Try Counterfeit-V3.0 from civitai. And for the painted look, I’d suggest using the style selector extension and setting it to painting or something of that sort, to push the image in that direction.

  • @jeffreychung7307 · 1 year ago · +1

    Great video. If I want to make a consistent character for a pet, how can I do it? Should I still use the Random Name Generator to name the pet?

    • @sebastiantorresvfx · 1 year ago · +1

      For pets, depending on your situation, I would suggest either getting a LoRA that’s pre-trained on a specific animal, or training your own on photos of just one animal so SD won’t mix other animals into it.
      Unfortunately, when it comes to side characters (and pets) in comics, if they’re going to show up consistently, then you’ll need a way to make sure they come out looking the same, even if only for a couple of panels. LoRAs are your best bet.
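
      For reference, once such a LoRA is trained, Automatic1111 loads it through a tag in the prompt itself; the file name and weight below are placeholders:

          photo of a golden retriever, sitting in a park, <lora:my_pet_dog:0.8>

      The 0.8 weight is a typical starting point; raising it pushes the output closer to the training images at the cost of flexibility.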

  • @user-ui2on4ll9v · 1 year ago · +2

    Thanks for the tutorial. For me, the main problem is backgrounds. I can't draw comics, for now, because I just can't get the same background (for example, the same classroom or the same street in the city) without using a 3D model. And, from my point of view, it is vitally necessary to be able to generate the same background from different angles (and at different distances) to draw action scenes in comics. Could you please tell me, if you know, how to solve this problem? How can I get the same background for drawing comics (without a 3D model)?

    • @sebastiantorresvfx · 1 year ago · +2

      Unfortunately SD isn’t reliable for consistent backgrounds from different angles. My workaround would be to generate the backgrounds, then project them onto some rudimentary 3D geometry. The Archer TV show uses a similar process so they can render out a different angle when needed.
      If you’re projecting an SD generation onto the 3D model you’ll get the same look and have more control. There are also ways to change the lighting and light sources, which can be useful.

  • @michaelcarnevale5620 · 1 year ago · +1

    So informative - I subbed

    • @sebastiantorresvfx · 1 year ago

      Thanks for the sub! Glad you liked it. Good timing, the follow-up video is coming this week 😁

  • @arnabroy2193 · 1 year ago · +1

    Thank u so much for sharing

  • @TeluguNarrativeHub · 1 year ago · +2

    Thanks for sharing your knowledge. Good job.

  • @roymathew7956 · 1 year ago · +3

    Love the explanations and the wisdom. Would love to see a video where you work through a few panels for a comic strip, also possibly showing how you add the blurbs. I imagine you’d do that in Photoshop, but I'm wondering if there’s a LoRA or something in Stable Diffusion that also works for that.

    • @sebastiantorresvfx · 1 year ago

      As for how to put the pages together, we’ll get there for sure.
      Word balloons and captions are best done in a photo editor, the best for it being Clip Studio, formerly known as Manga Studio. I love Photoshop, but it’s not made for that, whereas Clip Studio is more directed towards comic books. And once a year you can outright buy it for around $50-$60 for a permanent license. Can’t say the same for Photoshop 😆

    • @roymathew7956 · 1 year ago · +1

      Thanks for that, @sebastiantorresvfx.

  • @WhatDoesEvilMean · 1 year ago · +1

    Could you do a video on how to train on our own artwork? So that the images come out in our specific style? Is that possible?

    • @sebastiantorresvfx · 1 year ago

      If you go through the process in the LoRA video you can switch that out for your own art. Just make sure the images are around 1024px or bigger, but don’t go too crazy or it will take a while to train.
      But yeah, the process is the same no matter what your source images are.
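
      As a minimal sketch of that dataset-prep step in Python, assuming a flat folder of PNG scans of your art (the folder names and the PNG-only filter are placeholders):

          from pathlib import Path
          from PIL import Image

          SRC = Path("my_artwork")      # placeholder: original art scans
          DST = Path("lora_dataset")    # placeholder: resized training set
          DST.mkdir(exist_ok=True)

          for f in sorted(SRC.glob("*.png")):
              img = Image.open(f)
              # Shrink so the longest edge is about 1024px, per the advice
              # above; much larger images mostly just slow down training
              scale = 1024 / max(img.size)
              if scale < 1:
                  new_size = (round(img.width * scale), round(img.height * scale))
                  img = img.resize(new_size, Image.LANCZOS)
              img.save(DST / f.name)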

  • @kentuckeytom · 1 year ago · +1

    Hi, would you mind sharing what video card you are using? Mine is a 1070 Ti 8GB and it takes 3 minutes to generate an image with the same prompt 😪

    • @sebastiantorresvfx · 1 year ago · +1

      Hello, I’m using a Gigabyte RTX 3090 Turbo. It’s a few years old now but still does the job.
      Make sure you have --medvram in the command arguments line of your webui-user.bat, and it might be a good idea to turn off live previews in your A1111 settings. That might give you a slight boost.
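
      For anyone unsure where that goes: a webui-user.bat typically looks something like the excerpt below, and the flag is added to the COMMANDLINE_ARGS line (exact file contents vary by install):

          @echo off
          set PYTHON=
          set GIT=
          set VENV_DIR=
          set COMMANDLINE_ARGS=--medvram
          call webui.bat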

    • @kentuckeytom · 1 year ago · +1

      It's much better now with --medvram, thanks @sebastiantorresvfx!

    • @sebastiantorresvfx · 1 year ago

      Awesome! Glad to hear it. 🙂

  • @Carmidian · 1 year ago · +1

    This was so helpful, thank you so much! One quick question: what SDXL style were you using to get that superhero look? It was awesome.

    • @sebastiantorresvfx · 1 year ago

      Thank you 😁
      The style itself uses the SDXL style selector extension, which you can find in the extensions tab, set to comic. As for the model, it’s the Realities Edge Anime XL checkpoint from civitai.

    • @Carmidian · 1 year ago · +1

      @sebastiantorresvfx Sorry for bothering you, one more question: when it comes to making the LoRA, how many pictures should I generate?

    • @sebastiantorresvfx · 1 year ago

      No worries at all; that’s a complicated question. Technically you could get away with 15 images, but you run the risk of the LoRA not having enough flexibility for what you require later on. I’d say it’s probably best to go with something like 30-50 good all-round images to cover yourself.

    • @Carmidian · 1 year ago

      @sebastiantorresvfx Thank you, once again. Your videos are incredibly helpful and easy to understand.

  • @ConwayBrew · 1 year ago · +1

    Which checkpoint were you using? I didn't see it in the video but really liked the output. Your videos have really helped me dive back into Stable Diffusion and catch up. Thanks!

    • @sebastiantorresvfx · 1 year ago · +2

      Thank you so much for your message, it means a lot to know it’s helping you. I’m using Realities Edge Anime XL; you can find the direct link in the description of my latest video on comic book line art. Have fun 😁

  • @Greensacks · 1 year ago · +1

    Really great video! So much more straightforward than others lol. Using this process, how might you handle multiple characters? Say, instead of a superhero, I'm working on two brothers and a dog in a fantasy setting. Would you train a LoRA for each character? And then how would you bring something like that together?

    • @sebastiantorresvfx · 1 year ago · +1

      I’d prefer to have an individual LoRA for each character and the dog, so I have more consistency with the look and the clothing.
      As for combining them in Automatic1111, there are a number of different methods, but it’s a little long for a comment to cover. Perhaps a livestream 🙂
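
      For reference, one of those methods is simply stacking LoRA tags in a single Automatic1111 prompt; the names and weights below are placeholders:

          two brothers and their dog in a fantasy forest,
          <lora:brother_one:0.7> <lora:brother_two:0.7> <lora:family_dog:0.6>

      Stacked character LoRAs tend to bleed into each other, which is likely why the reply calls it a longer topic; extensions such as Regional Prompter, which assign parts of the prompt to regions of the canvas, are one common way people keep the characters separate.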

  • @DeanCassady · 1 year ago · +2

    Nice vid, good content

  • @Kelticfury · 1 year ago · +1

    Is Automatic1111 handling SDXL properly now? I switched to ComfyUI because it was pretty bad at it.

    • @sebastiantorresvfx · 1 year ago · +1

      I believe it is; I’ve been using SDXL exclusively for the last couple of months. I believe its only shortcoming at the moment is the implementation of ControlNet: it isn’t as consistent as it was with 1.5 models. But that might be down to the ControlNet models more than to Automatic1111. In terms of image quality, though, the potential is definitely greater.

    • @Kelticfury · 1 year ago · +1

      @sebastiantorresvfx Hey, that is good news. Thanks for the fast reply at an ungodly hour :)

    • @sebastiantorresvfx · 1 year ago · +1

      I guess that depends on where you are in the world 😂

  • @iamnow8 · 1 year ago

    Amazing! Waiting on the next video, sir Torres. Do you know how to create small LoRA files (possibly with faster training)?

    • @sebastiantorresvfx · 1 year ago · +1

      Wait no more, it just went live.
      Network rank and network alpha will keep the files smaller if you choose lower values. As for training times 😬 it can take a couple of hours, depending on the number of images in your dataset.
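
      For context, in the kohya-ss sd-scripts that most LoRA trainers wrap, those two settings correspond to the --network_dim and --network_alpha flags, and a lower dim directly shrinks the output file. A hedged command sketch; the script choice, paths and values are placeholders, not the video's exact setup:

          accelerate launch sdxl_train_network.py ^
            --pretrained_model_name_or_path model.safetensors ^
            --train_data_dir dataset ^
            --output_dir output ^
            --network_module networks.lora ^
            --network_dim 16 --network_alpha 8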

    • @iamnow8 · 1 year ago · +1

      WOOH :D @sebastiantorresvfx

  • @luozhan · 1 year ago

    Love your channel! ❤
    Thank you for creating this tutorial. It would be great if you could also show us how to create TWO or more consistent characters in the SAME scene. I'm looking forward to it. Thanks again for the great work.

  • @DanielSchweinert · 1 year ago · +2

    Thanks! Straight to the point!

    • @sebastiantorresvfx · 1 year ago · +2

      Glad to see you back Daniel. 😁

    • @DanielSchweinert · 1 year ago

      @sebastiantorresvfx I released a new tutorial and a node workflow on civitai.

    • @sebastiantorresvfx · 1 year ago · +1

      Taking a couple days to play on stable, I’ll check it out 😃

  • @anaversary- · 1 year ago · +3

    Very informative video! I love the Star Wars style you added to the prompts at 2:04 lol

    • @sebastiantorresvfx · 1 year ago · +1

      lol, it only took a month for someone to mention the Star Wars crawl 😂😂 I got a good chuckle making it, so I refused to cut it 😂

  • @lastlight05 · 6 months ago

    How about ComfyUI?

  • @deprive-999x · 1 year ago · +1

    Another great video. How can we help get you more subscribers?

    • @sebastiantorresvfx · 1 year ago

      You’re awesome! Share them in any forums, groups and Discords where you think the videos could be helpful. Unfortunately I’ve never been good at keeping up with forums; definitely something I need to get on board with.
      Perhaps I should do live videos too? The only thing keeping me from doing that so far is that I like the fast pace of the videos; you can’t really do that in a live video.

    • @deprive-999x · 1 year ago · +1

      @sebastiantorresvfx Find out the common problems, like the repeatability issue, and solve those too.

  • @LouisGedo · 1 year ago · +2

    👋

  • @peterxyz3541 · 2 months ago · +1

    I'm looking to streamline my workflow. I... as a REAL ARTIST, who actually uses real graphite and bristol board (I name those since I doubt a "prompt jockey" would know what real art supplies are)... need a tool that lets me make MY own art faster, trained on MY own art.
    (The above statement is for all the anti-AI artists out there)

    • @sebastiantorresvfx · 2 months ago · +1

      🤣 Love that comment. Unfortunately you can yell that from the highest mountain and the anti-AI mob will still think you're lying. I’m a trained artist, photographer, filmmaker and VFX artist; as soon as I picked up AI, all of that went out the window, apparently 😝
      Your best option will be to train a LoRA on your own art if that’s what you’re interested in doing: one for the characters and one for backgrounds. You’ll still need to do some compositing to bring it all together, but it will definitely speed up your workflow. You’ll just need to do a bunch of touch-ups to fix any errors the AI made.

    • @peterxyz3541 · 2 months ago

      @sebastiantorresvfx I totally agree! I have several real and "real" (beginner, even after 20 years of casual practice) artist friends who still show their fangs every time I mention AI tools. The irony is that I mentored one artist to be self-sufficient selling at anime and furry cons. She's talented and would have gotten there anyway, but I think I accelerated her by 5 years, since I had a car and she was 19 at the time (sorry for the side story).
      Yes, my Nikon film bodies, my Kiev medium format, my f/2.8 and f/1.8 lenses are ignored. My old real sable brushes (from before the ban on cruelty to sable pelts), my Winsor & Newton, Liquitex, vellum and real (old) xylene markers are just for show. 🤣🤣🤣
      I'm glad I found your vid. Thanks. And I'm glad I'm not alone in my belief as a heretic who will be burned at the stake 😂
      Have you looked into the new Toon Craft, I think the name is? AI animation? That's the ultimate goal.
      (Irony: my artist friend attended LightBox in California and inquired with studios, shopping her IP around. AI animation would cure that effort. It's not like any indie-produced idea could ever hit gold... sarcasm: South Park, RWBY, The Simpsons, Voices of a Distant Star.)

  • @ledesseinduneidee · 11 months ago

    inkreadible

  • @jeffreychung7307 · 1 year ago · +1

    I get this:

        NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
            query : shape=(1, 4096, 1, 512) (torch.float32)
            key : shape=(1, 4096, 1, 512) (torch.float32)
            value : shape=(1, 4096, 1, 512) (torch.float32)
            attn_bias :
            p : 0.0
        `cutlassF` is not supported because:
            device=cpu (supported: {'cuda'})
            Operator wasn't built - see `python -m xformers.info` for more info
        `flshattF` is not supported because:
            device=cpu (supported: {'cuda'})
            dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
            max(query.shape[-1] != value.shape[-1]) > 128
            Operator wasn't built - see `python -m xformers.info` for more info
        `tritonflashattF` is not supported because:
            device=cpu (supported: {'cuda'})
            dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
            max(query.shape[-1] != value.shape[-1]) > 128
            Operator wasn't built - see `python -m xformers.info` for more info
            triton is not available
        `smallkF` is not supported because:
            max(query.shape[-1] != value.shape[-1]) > 32
            Operator wasn't built - see `python -m xformers.info` for more info
            unsupported embed per head: 512

    I guess the reason is that I am using a laptop with no GPU. Is there any way I can fix it on my existing potato? I have tried googling this and tried a bunch of tricks, but I am still not able to generate my first image. I keep the resolution at 512 x 512 and the sampling method DDIM (it seems the fastest), but I still can't generate my first artwork.

    • @sebastiantorresvfx · 1 year ago

      Hey Jeffrey, without knowing your specs it’ll be difficult to say. But if you have an Nvidia GPU, make sure you have the right CUDA toolkit installed; I believe the latest is 11.8.
      Also make sure you have the latest versions of torch and xformers installed. You can install xformers automatically by adding "--xformers" to the command arguments in your webui-user.bat.
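
      Two quick diagnostics, run from the webui's Python environment, that narrow this down: the first prints whether torch can see a CUDA GPU at all (the error above shows device=cpu, so it currently cannot), and the second is the xformers report the error message itself points to:

          python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
          python -m xformers.info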

    • @jeffreychung7307 · 1 year ago

      I had already installed the latest versions of pip, xformers and torch, but still got the same result. I solved it by temporarily removing the --xformers flag, @sebastiantorresvfx. Is the only impact slower generation?

    • @musicwelikemang · 1 year ago

      You need a GPU to run a local model of SD; integrated laptop graphics just won't cut it.
      Try looking into Stable Horde. It's kind of like a peer-to-peer compute network: people with higher-powered cards donate them in their downtime to users without the hardware to run SD.
      It uses a credit system and has a pretty good community willing to help teach people.