Create consistent characters with Stable Diffusion!!

  • Published: 12 Jan 2025

Comments • 615

  • @justinwhite2725
    @justinwhite2725 Год назад +18

    I love that this video doesn't gloss over the fact that a lot of touch up is necessary.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      I always try to encourage the use of external tools and skills if possible hahha tyty!

    • @salemcripple
      @salemcripple 2 месяца назад

      And therein lies my problem: my whole reason for using AI is that I am NOT an artist. I've never even once used Photoshop, or anything other than just MS Paint. Once he gets to that point..... COMPLETELY lost!

  • @1tponie
    @1tponie Год назад +4

    bro, there are channels with millions of subscribers and I can't learn as much from them as from this one. This channel is GOLD. Liked and subbed.

  • @Aleshotgun
    @Aleshotgun Год назад +4

    I just dug into Stable Diffusion and your info is an absolute life saver!!

  • @1august12
    @1august12 Год назад +17

    Thanks for making these videos! I started playing with stable diffusion a couple of days ago and binged all your videos. SD is honestly too fun, I sat up to like 4 am yesterday inpainting instead of in-bedding😅
    I'm really impressed that your videos are so concise without being hard to understand. Not to mention funny! Everything looked really daunting at first but I just want to learn more, and you make that a lot easier, and a lot more entertaining. So thanks!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      thank you for the kind comment!! Glad you are enjoying it :3

  • @solovoypasando
    @solovoypasando 3 месяца назад +2

    incredible, really hard to find a video that summarizes the whole process from start to finish, one more subscriber

  • @patrickstonks7841
    @patrickstonks7841 Год назад +37

    Wtf. Literally was looking for this exact thing to get a consistent character yesterday. You're a legend.

  • @Sophias-Universe
    @Sophias-Universe Год назад +3

    Thank you for taking the time to share your knowledge!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      thank you for watching and the positive comment!

  • @imnotcabs1805
    @imnotcabs1805 Год назад +11

    Hey, new viewer here. You might get a lot of this but I also want to share my piece. Thank you for all this insightful content on your channel. I've been dabbling with Stable Diffusion and AI image generation and you are one of the few people who give out in-depth, no bs, and actually helpful videos here on YouTube. I really appreciate you maan! I'm currently learning ControlNet and you make it a lot easier with your tutorials.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +2

      hey!! thank you so much!! Appreciate it fr :3 Glad you are finding it useful and hope to keep providing with informative content!

  • @TheRoyalSkies
    @TheRoyalSkies 11 месяцев назад +2

    Good stuff bro! Keep it up!

    • @Not4Talent_AI
      @Not4Talent_AI  11 месяцев назад

      yooo sup royal! Thank you so much!
      (fun fact I'm having to learn blender for the next vid XD)

    • @lefourbe5596
      @lefourbe5596 11 месяцев назад

      🥳 i'm sure you could make a 3 min version of this !
      there is much cleaning to be done and part to divide

  • @JimiVexTV
    @JimiVexTV Год назад +7

    Thank you kindly for this in-depth, yet concise breakdown. Done a bit of lora training myself, but was really free-wheeling it, and this has given me a lot of ideas to improve my results. Vastly underrated channel, liked and subbed my G

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      thank you so much!! We are trying to make an in-depth video on lora training with Lefourbe, so people can train without having to "guess" what parameters are good. Let's see how that goes xD
      but tyty! hope it helped

  • @inkmage4084
    @inkmage4084 Год назад +72

    As an artist I initially did not like the AI stuff.. But as I am working on remaking a game I did way back in high school, this is a massive time saver. I'd take my designs and run it through the AI and get different variations that have allowed me to quickly finish a character's final redesign. This is quite amazing, it also saves money as I am using this very technique to have 3D models done of the main character, that I will later have printed for a statue. I will also be using it to have the figures done of the characters to another project.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +2

      super great to hear!!! really curious on the 3D model aspect tbh (as a 3D modeler xD)

    • @UeharaKeitaro上原恵太郎
      @UeharaKeitaro上原恵太郎 Год назад +9

      I think the people who would benefit the most from these AI tools are real artists like yourself.

    • @inkmage4084
      @inkmage4084 Год назад +2

      @@Not4Talent_AI That is awesome! Thanks for this video, definitively glad I subscribed too!

    • @lefourbe5596
      @lefourbe5596 Год назад +5

      it's heartwarming to see that some people find the right use behind the black spots of this revolution.
      I've started learning blender for a bit for chara modelling. I was painfully missing original 2D references.
      Then I saw Royal Skies' video and was sold instantly... however I have not touched blender since. Time and such, you know :/

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +2

      @@lefourbe5596 time is a b***

  • @substandard649
    @substandard649 Год назад +4

    Super interesting. Thanks for your hard work, I'm exhausted just watching 😂

  • @audiogus2651
    @audiogus2651 Год назад +5

    I trained a checkpoint on a 3D model when Dreambooth first came out last year and it turned out fairly well in that I could change backgrounds and poses. I tried again the other day on a Lora and it was terrible. I was left scratching my head until I saw your video and you explained that all of the auto captioning (which did not exist back then) was likely throwing it off. Thanks so much for the tip, can't wait to try it again! Exciting stuff!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      ty!!! We are working on a fairly in-depth LORA training guide, so if you keep running into a wall I hope that video helps when it comes out! (And then there is the Discord for help ofc XD)

    • @lefourbe5596
      @lefourbe5596 Год назад +2

      You're just like me then. My avatar here is made from a 3D video game model I made. Luckily I've not given up trying, and examples will follow soon

    • @audiogus2651
      @audiogus2651 Год назад +1

      @@lefourbe5596 I would share the one I made last year but alas it is for work. Was pretty easy in dreambooth, I wager if I just filter the excess of the auto captioning I should be OK.

  • @LogoCat
    @LogoCat Год назад +1

    This video tutorial and magic numbers are legends.

  • @JelliedGrapes
    @JelliedGrapes Год назад +7

    For those who are lazy (like me) here's the text at around 11:08
    close up of a man, {{1$$__cameraView__}}, {{1$$__orientation__}}, {{1$$__expression__}}
    full body shot of a man, dynamic pose, {{1$$__cameraView__}}, {{1$$__orientation__}}
    upper body shot of a man, {{1$$__orientation__}}
    Change Man to Woman if you want a woman
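
    For anyone wondering what those placeholders do, here is a minimal Python sketch of the expansion. The {1$$__name__} wrapper in the Dynamic Prompts extension just picks one entry from the named wildcard list; the lists below are made-up examples, not the actual wildcard files from the video:

    ```python
    # Rough sketch of what __wildcard__ placeholders resolve to; values are invented examples.
    import random
    import re

    wildcards = {
        "cameraView": ["front view", "side view", "back view", "three-quarter view"],
        "orientation": ["looking at viewer", "looking to the side", "looking up"],
        "expression": ["neutral expression", "smiling", "angry", "surprised"],
    }

    def expand(template: str) -> str:
        """Replace each __name__ token with a random entry from its wildcard list."""
        return re.sub(r"__(\w+)__", lambda m: random.choice(wildcards[m.group(1)]), template)

    print(expand("close up of a man, __cameraView__, __orientation__, __expression__"))
    print(expand("full body shot of a man, dynamic pose, __cameraView__, __orientation__"))
    print(expand("upper body shot of a man, __orientation__"))
    ```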

  • @CaltagironeLegend
    @CaltagironeLegend 10 месяцев назад +2

    no way, you're a genius. For some reason I had the idea of what to do but didn't know how, and I came across your video like it fell from the sky. I love you, you're amazing

    • @Not4Talent_AI
      @Not4Talent_AI  10 месяцев назад

      haahahahahahaha glad to hear it!! Thank YOU!

  • @amaterasu5001
    @amaterasu5001 Год назад +1

    thank you man that u remembered to make this. big love from me

  • @Retroxyl
    @Retroxyl Год назад +1

    Super cool video and explanation. I made a character of some sort just yesterday, so training my own lora would be really helpful, I guess. I'll try it next week and see how it works out.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      thanks!!! If you need help we'll be happy to give you a hand on the discord :3 Hope the video helped hahahaha

  • @jibcot8541
    @jibcot8541 Год назад +1

    Very good in-depth video on Lora Training!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Thanks!! an even more indepth one incoming soon enough xD

  • @cofiddle
    @cofiddle Год назад +50

    Really like how you're encouraging the use of photoshop and other outside sources. Really emphasizes how powerful AI can be for an artist's workflow. Also, wanna make a huge shoutout for generative fill for that clean up step. Being able to just ask it for a ribbon or something and get multiple results until I see one I like, it's incredible what these tools are becoming capable of lol.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Ohh true, so used to old photoshop I forget generative fill XDDDDD really nice thing to keep in mind for sure.
      And yeah, these tools advance so fast it's mind-blowing xD
      Thank you!!

    • @jaxkk1119
      @jaxkk1119 Год назад +3

      pretty sure we don't need any artist for this workflow

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +3

      @@jaxkk1119 no, but if you want perfect results and fully custom characters, the best way is to use artistic skills, either your own or someone else's

    • @jaxkk1119
      @jaxkk1119 Год назад +2

      @@Not4Talent_AI won't need it in the future; also I don't think people would be willing to learn for their whole life just to do PS for AI images, it just doesn't make sense

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +3

      @@jaxkk1119 hahahaha no ofc. But I'm a firm believer that artistic skills help a lot on the AI art space. As you can compensate for a lot of AIs shortcomings. At least current shortcomings

  • @deejaytabz1
    @deejaytabz1 Год назад +1

    Thank you so much for your resources. you are a legend bro!, have also joined your discord channel.

  • @shadowdemonaer
    @shadowdemonaer Год назад +24

    If you struggle with getting it to make consistent faces, I highly recommend making the face in Vroid Studio and Photoshopping them in. It's also necessary when making a lora to get close ups of the faces, and also for details on the clothes that you will want to be able to inpaint later in case the program struggles.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +2

      Thanks!!

    • @lefourbe5596
      @lefourbe5596 Год назад +4

      Good Idea.
      I know Vroid and it has gotten pretty good at making anime OCs.
      You can use any videogame character maker to get close to the style you are looking for.

  • @planktonfun1
    @planktonfun1 Год назад +1

    Thanks, this helps a lot with Stable Diffusion's limitations

  • @nikoleifalkon
    @nikoleifalkon Год назад +2

    Wow, pretty detailed explanation and i am a LoRA expert, subscribed!

  • @nyanbrox5418
    @nyanbrox5418 Год назад +6

    What I love about videos like this is *someone* is going to make a tool that simplifies all of these steps, maybe AI to generate new poses too?

  • @GryphonDes
    @GryphonDes Год назад +1

    Great video! Fun ideas and it was great to follow along!

  • @audiogus2651
    @audiogus2651 Год назад +4

    Lol, 'freedom signs', this guy is a comedian😂

  • @guilhermegamer
    @guilhermegamer Год назад +14

    HERE I AM! Ready for another GREAT video! =D

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      hahhahahha tyty! Hope it lives up to the expectations XD

    • @MelloWaifuBtz
      @MelloWaifuBtz Год назад +1

      um classico do youtube @guilhermegamer

  • @blacksage81
    @blacksage81 Год назад +1

    Great tutorial, and I had just finished wrestling with the Character Turner embedding, Lora, and OpenPose to finally get repeatable character sheets. So this video came just in time. Also, when upscaling with ControlNet I've found that you actually don't need to load the image you want to upscale into controlnet itself; all you need is to select Tile and set the control mode to "ControlNet is more important".

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Thanks!!
      I saw that on a discussion but I always import it just for placebo xD (even though it is probably a bad idea cuz some times I import the wrong image and mess up the whole thing hahahahahhaa) should stop doing that prob.

  • @MooreKatMoore
    @MooreKatMoore Год назад +3

    Thank you so much. I do anime edits and copyright is a huge issue right now, and switching over to AI has been easier, but people love the stable diffusion stuff more than just pictures 😅 I've been looking for a way to get more consistent characters too ❤ you get a sub my brother!

  • @slackerpope
    @slackerpope 10 месяцев назад +1

    Fantastic video! Subscribed! ❤

  • @mariolombardo9284
    @mariolombardo9284 Год назад +2

    your videos are just too good!!!!

  • @TheElBudo
    @TheElBudo Год назад +7

    Great accumulation of info, workflow and the topic of consistency is so important.++
    Consider breaking your next videos into 10 minute segments ( which means more videos for you!) so they're more digestible for us. Separate them into bite-sized skills all under one related thread or collection of videos.
    Yours is the only tutorial I've had to slow the playback down for to fully hear what you're saying because your visuals are also moving very quickly. You can fill up the extra time by both being sure you're explaining the next step and not just showing what to do but WHY you're doing it; plus you can give some alternative examples regarding what you're demonstrating.
    Great work but it feels like I'm "rewinding" more than I'm playing!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Thanks for the feedback! I don't know about making them 10 min each, but we plan on making a first video covering the basics and how to understand them, and then another video with more advanced info.
      Tyty!!

  • @USBEN.
    @USBEN. Год назад +2

    Epic video, huge value.

  • @morganjones6401
    @morganjones6401 Месяц назад +1

    I went through this entire video and came to the realization that I'm soooo glad that I learned 3d modeling and animation. Not hating on the hustle, but it's actually easier to learn how to 3d sculpt and rig. That way you can take these characters and do whatever you want with them. Even use AI rendering techniques like ComfyUI with your animation and making these styles move the way you want. Jus sayn.

    • @Not4Talent_AI
      @Not4Talent_AI  Месяц назад +1

      nah you are 100% right bro hahahaha I totally agree.
      I'm not sure if I'd say it's easier. but if you want something very specific and with a lot of detail, then the effort to results ratio is just better when you know how to draw or 3D model + rig.
      As a 3D modeler tho, I'm trash at modeling characters xD And I'm not a fan of rigging (mainly weight painting)

  • @Nickknows00
    @Nickknows00 Год назад +3

    You should try the same process but start with a 3d model character you can pose and use as the Lora training data. I feel that would be the best process for a small studio: pay an artist to make a custom 3d character, then use that as the Lora training base

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      I think that could be interesting, yeah. Mainly seeing how well the LORA can capture the character. (Lefourbe tried something like that, with a very hard character, and he is getting pretty nice results), so it should be possible I think.
      Things to note in the "future testing" notebook for sure! tyty!

    • @lefourbe5596
      @lefourbe5596 Год назад +1

      Yeah i'm doing that mostly !
      I'm trying to make some examples for the next video.
      The 3D characters I have are not so well made and I was hoping to improve their visuals with SD without sacrificing their design.

  • @Cutest1TheGame
    @Cutest1TheGame Месяц назад +1

    @10:47 { LOL. They're called "curly brackets" }
    @12:14 The "broom looking parts" are called tassels.

    • @Not4Talent_AI
      @Not4Talent_AI  Месяц назад

      Hahahahaha learning new stuff. Thanks!!!

  • @LanaABA
    @LanaABA 8 месяцев назад

    You seem to know so much, thanks for sharing! Frankly, for a beginner the majority of the videos are hard to understand though. You present so many different features and techniques in one video that it gets overwhelming. Would appreciate it if you also make some videos in the future for noobs like me, with a slower pacing and less concepts but more in-depth explanations 😅❤

    • @Not4Talent_AI
      @Not4Talent_AI  8 месяцев назад

      Tyty!!
      Hahaha I've been told that, and it's true that the channel is aimed more towards people with some experience.
      I have a vid for most concepts touched on in this one, but they are still just as fast😂
      I might go back and update some vids on the basics eventually, in a calmer and more beginner-friendly way hahah
      Ty for the feedback!!

  • @paulrayner6744
    @paulrayner6744 Год назад +5

    I will kindly argue about some of your points:
    1. When you caption the character you should describe the outfit and any accessories as well. Trust me, you will have an easier time if you want to prompt your character in any other outfit that is not the default one, or to undress it.
    2. Increasing the max resolution of the training dataset to 768x768 does actually make a difference in the overall quality of the images. I would drop the batch size to 2 (this will be comfortable for most people without getting a CUDA memory error) and set the image resolution to 768x768. Lora training already takes little time, so don't sacrifice image quality for training speed.
    3. If you're a beginner in Lora training don't bother with regularization images, you're overcomplicating things for yourself (I know in your video you said it's optional, I just wanted to mention this)
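
    For reference, here are the knobs being debated in this thread, written out as an illustrative settings sketch. These are descriptive names, not exact kohya-ss / sd-scripts flags, and the numbers are the ones discussed here; adjust them to your own GPU and dataset:

    ```python
    # Illustrative LoRA training settings reflecting the discussion in this thread.
    # Key names are descriptive, not exact trainer flags; check your trainer's docs.
    lora_training = {
        "resolution": 768,            # 512 is fastest for test runs; 768 (or 1024) noticeably improves quality
        "train_batch_size": 2,        # small enough to avoid CUDA out-of-memory errors on most cards
        "network_dim": 32,            # LoRA rank; opinions differ a lot (128 with alpha 1 is also popular)
        "network_alpha": 16,          # commonly half of dim, or 1, depending on who you ask
        "learning_rate": 1e-4,        # a common starting point for character LoRAs
        "use_regularization_images": False,  # optional; easy to skip while you're still learning
    }
    print(lora_training)
    ```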

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      Yes! I agree with everything. For better results, 768 or even 1024 is the best option, but more time consuming. If you want the perfect training then that's perfect.
      In this case we were testing, so I think 512 by 512 is the fastest testing option.
      Also true what you say about tagging. Even though I wasn't looking for an outfit change, as the outfit is pretty much the character. The face and hairstyle are very standard😂😂
      And for epochs, also true, the GPU is pretty much everything there hahaha

    • @lefourbe5596
      @lefourbe5596 Год назад +3

      That's exactly what I told N4T when WE made the vid 😁. 1. You are right and it's a choice we made. Describing everything will make the Lora harder to prompt and makes comparison between different designs difficult because of bleeding concepts.
      There is more in the video than I had planned. Many of these details would have gone in part 2 for me.
      Trust me, he is well aware. My best Lora is one that needs a long freak prompt to get every part of its complex design. (My profile pic)
      Facial mark, horn, asymmetrical, heterochromia, high collar, black sclera, ponytail, gauntlet, belt, coffin pouch...
      On the second point I disagree, because GTX and 20** RTX cards exist and are slow with less VRAM.
      I get it as I have a 3090, but even then I prefer to be able to tweak LR, models and network dim first. Especially since N4T was short on time and runs a GTX 1080.
      So yeah, 512 training first with your 1024 dataset.

  • @namesas
    @namesas Год назад +1

    Great video as always!

  • @lefourbe5596
    @lefourbe5596 6 месяцев назад +1

    once i finish with my client i think this video deserve a good remaster

    • @Not4Talent_AI
      @Not4Talent_AI  6 месяцев назад

      Let me know if thats the case then! Hahahah

  • @relaxation_ambience
    @relaxation_ambience Год назад +3

    20:50 I saw one guy who provided various results experimenting with Network Rank and Network Alpha. The best was 128 to 1. I also experimented with my characters and also found that the best is 128 to 1. But my characters were photorealistic; for anime maybe your parameters are better.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      From what I've been seeing this days, there is a lot of different opinions about that, tbh just said what worked for me. But could be what you say too

  • @shimmyfpv7472
    @shimmyfpv7472 10 месяцев назад +1

    Thanks for the tutorial!

  • @ImInTheMaking
    @ImInTheMaking Год назад +2

    This workflow is so fun when you're fluid with your Photoshop skills. In my case, I open up another tutorial how to edit in Photoshop :D

  • @japaoyagami3273
    @japaoyagami3273 Год назад +1

    So cool!!!
    Thank you so much for the video!!!

  • @Larimuss
    @Larimuss 2 месяца назад +1

    Flux is good for making the character sheet if you've got the VRAM, then upscale and cut out the faces and use face detailer and face changer for different expressions.
    But after the character sheet, Flux is not so great. You then plug it into SD generation and IP adapters, in ComfyUI.

    • @Not4Talent_AI
      @Not4Talent_AI  2 месяца назад

      totally! If you have seen this video, it is basically a more recent upgrade to this workflow:
      ruclips.net/video/MbQv8zoNEfY/видео.html

  • @theprodigalshrimp
    @theprodigalshrimp Год назад +1

    So good thank you for the new knowledge.

  • @Luxcium
    @Luxcium 10 месяцев назад +1

    I gave you a thumbs down on the first video I watched and I was about to walk away, and I don't know what happened, but I was just really interested in the topic, so I listened to it and then gave you the thumbs up and then subscribed. You are now one of my favourite YouTubers on the topic. I appreciate your genuine interest and your dedication🎉🎉🎉🎉

    • @Not4Talent_AI
      @Not4Talent_AI  10 месяцев назад +1

      super glad to hear that, thank you!!!

    • @Luxcium
      @Luxcium 10 месяцев назад +1

      @@Not4Talent_AI I don’t know how I felt, but it is so rare that I give a thumbs down (normally I do mean it), and given that you are on my top list it makes me feel like… I remember that it was genuine then, and now I don’t know why it was like that… but knowing that I would have never heard you say *Popimpokin* many times in the other video makes me feel very happy that I changed my mind

    • @Not4Talent_AI
      @Not4Talent_AI  10 месяцев назад

      hahahahahaha popimpokin changed everything @@Luxcium

  • @babyfox205
    @babyfox205 4 месяца назад +2

    Is there a way to generate separated legs, torso, head, hands, for skeletal animation? Or maybe a custom lora can be trained for that? Which approach could work for that?

    • @Not4Talent_AI
      @Not4Talent_AI  4 месяца назад

      I've only seen loras that try that, but I've never actually tried it myself

    • @babyfox205
      @babyfox205 4 месяца назад

      @@Not4Talent_AI could you suggest where I can find examples of these lora attempts? 🙏

    • @Not4Talent_AI
      @Not4Talent_AI  4 месяца назад

      @@babyfox205 look on civit.ai, main site where people publish all types of lora

  • @kyperactive
    @kyperactive Год назад +1

    I feel like you'd get more mileage actually picking up a pencil... but this works too.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      100%. If you can create the character by drawing it then go for it. That's always the best way ahhahah

  • @sorryyourenotawinner2506
    @sorryyourenotawinner2506 2 дня назад +1

    Create consistent characters with Stable diffusion!! if you're a professional with everything of course

  • @morizanova
    @morizanova Год назад +1

    Thanks for this video. Although I doubt I'll be able to prepare and fix the dataset like yours, now I understand a bit of the basic concept, the idea and the workflow needed. And about the discord invitation, yeah... when Lex offers to let you hang out in his crib, you should come and be ready to feel amazed

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      ty!! I hope it gets easier with time and testing. In the case it does I'll do a new video. atm there is quite a bit of manual labor involved xD

  • @CherryLolotv
    @CherryLolotv Год назад +1

    Thanks for your help!

  • @b0lasater
    @b0lasater Год назад +1

    Wow, very detailed and beautifully produced video. My guess is that there is no fully online service that would allow a creator to do the things you describe here, but please correct me if I am wrong.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      thank you so much!!
      As far as FREE online services for this, I wouldn't say it is possible atm, since Google Colabs are RIP.
      But there are a few paid options out there.
      Like ThinkDiffusion for Automatic1111 (you could probably find a free solution for this as well, even though I can't tell you one atm cuz they usually either close up or end up changing to a paid model over time).
      And for the training Dreamlook.AI is a very good option. (I have been sponsored by them in the past. But I still think they are an awesome option)
      If you have any question or doubt, you can ask on discord, there is a lot of people that might be using online tools there as well.

  • @ChloeAfterDark
    @ChloeAfterDark Год назад +1

    This is great, thank you!

  • @guest777-nf8dc
    @guest777-nf8dc Год назад +2

    Thank you for the video.
    Is it really okay to use it for commercial use?
    What if the character is a robot or a machine? Would you make a video about it?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      It is okay to use for commercial use. Just check if the model you are using allows it or not
      I'm trying to find a way to make the same thing for non-humanoids. I think maybe you could use midjourney, or batch generate without controlnet and pray xD
      (ty for watching, btw :3 )

  • @AbyssalPrimarch
    @AbyssalPrimarch Год назад +1

    This looks amazing! I hope I can figure out how to do it xD For now I gotta figure out why there are no models to be found in my ControlNet section ha

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      You need to download them from the official hugging face page! (I think it was in hugging face. Cant really check rn but I have a video on it where all the info should be)

    • @AbyssalPrimarch
      @AbyssalPrimarch Год назад +1

      awesome tyty! I'll dig it up and see how it's done

  • @P1x3lGuy
    @P1x3lGuy Год назад +3

    "Look What They Need To Mimic A Fraction Of Our Power"

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      hahhahahhahaha so true xD
      AI people planning 3 weeks of projects to get a character.
      Artists: *draws*
      It is what it is tho, still fun

    • @wallacewells6969
      @wallacewells6969 Год назад +4

      ​@@Not4Talent_AI it must be so hard to type words im sry 🥺🥺🥺

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      @@wallacewells6969 don't know if you watched the video but I'm assuming you didn't XD
      I'm agreeing with you my man

    • @wallacewells6969
      @wallacewells6969 Год назад

      @@Not4Talent_AI no i didnt watch ur dumb vid learn to draw instead of stealing art pls

  • @Backtitrationfan
    @Backtitrationfan Год назад +1

    Omg Abraham Lincoln was my first thought when I clicked this video 😂

  • @Not4Talent_AI
    @Not4Talent_AI  Год назад +26

    The LORA training video is out finally: ruclips.net/video/xXNr9mrdV7s/видео.html

    • @belajarblockxhain
      @belajarblockxhain Год назад +1

      can you share your pc spec?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      @@belajarblockxhain 16 gb ram, 1080TI gpu, intelcore i7-8700 3.20ghz

    • @belajarblockxhain
      @belajarblockxhain Год назад +1

      @@Not4Talent_AI thanks a lot, seem, i will use my old 980ti for trial

    • @SquekretGenius420
      @SquekretGenius420 Год назад +1

      Just wondering does Stable Diffusion still have issues with AMD? This has been keeping me away from trying it out more. I heard they were going to add support for AMD. Has that happened?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      @@SquekretGenius420 havent really looked into it much.
      All ive seen is an old comment of this guy:
      ""github.com/lshqqytiger/stable-diffusion-webui-directml works on Windows + AMD GPU
      Tested on Win10, mine is RX 480, and is 4x faster (2.5-3s/it) than CPU (10-12s/it), not much, but at least faster...""
      But no idea. I would think they probably found a way tho

  • @CherryWood-kq1gr
    @CherryWood-kq1gr Год назад +4

    Great idea! But a character sheet in one drawing style will make the Lora learn that style. That's why it is suitable to use the first result to create new images with more variety and retrain again as suggested.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      true, that's a pretty good idea. Instead of having the lora already trained in a style and retrain from there. Create different styles directly, right? (not sure if that is what you mean. but sounds possible and nice)

  • @Toritto713
    @Toritto713 Год назад +2

    hello friend, a friend and I are starting to develop lora character models but we don't fully understand what regularization images are. Our question is whether regularization images are random images for the model to convert into your character, or images the model has created that turned out wrong, in order to correct the model. We also borrowed a better computer for this weekend (since our PCs are "potatoes"; our best graphics card, with the most VRAM, is an RX 590, which doesn't even support CUDA xd), so we want to take advantage of it to train better models. Currently we trained one model with 80 images, 3000 steps and 1 epoch, and another model with 460 images, 4600 steps and 1 epoch. What kind of training do we get, and would it give better results? In neither of the 2 did we use regularization images because we did not understand that point :c

    • @Toritto713
      @Toritto713 Год назад +1

      our original language is not english so maybe that's why we missed the point

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      If you have more than 60-70 images don't even bother with regularization!
      I explain it a little more here tho: ruclips.net/video/xXNr9mrdV7s/видео.html
      I have no idea what training you will get, since it highly depends on the concept trained and its complexity. It also depends on the quality, learning rate, resolution, captioning... there is a huge list of things that go into a good training other than steps!
      You can get into the discord if anything, there the community will most likely help.
      Amma be almost gone for a few days so can't help much myself tho

  • @muhammadzazulirizki1000
    @muhammadzazulirizki1000 Год назад +2

    Hi... I am totally new in AI stuffs. Can you tell me what does it mean to write "BREAK" in the prompt? Also, is it necessary to write it in capitals?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Hi!! It is a way to try to get the AI to produce better colors. I explain it a little in this video: ruclips.net/video/wso_O2vk2dw/видео.html
      and yes, it is necessary to use capital letters :3

    • @muhammadzazulirizki1000
      @muhammadzazulirizki1000 Год назад +1

      @@Not4Talent_AI thanks for the instant reply! Does it work in ANY other platforms like midjourney?
      Also thanks for the provided link, definitely will check it!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      @@muhammadzazulirizki1000 No problem!! I dont think it does, its specific to Automatic1111 I think

  • @greyfox78569
    @greyfox78569 Год назад +2

    I am finding out it's not the prompts, it is the negative prompts that make good pictures.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      they can make and break the image, yep

    • @lefourbe5596
      @lefourbe5596 Год назад +1

      yes but no. If you want to avoid a generic look you should let the model express itself. Negatives will drop creativity and randomness greatly (especially negative embeddings).
      Model quality, style loras and detail tweaker tools can make a good picture by themselves.
      Negatives are very useful but they hurt randomness.
      For my part I avoid them on recent models, or I use them later in the generation using *[ lowres, blur, artifacts:0.3 ]*; with that the original composition is mostly unaffected in the first frame and so is the resulting generation
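
      In case the *[ lowres, blur, artifacts:0.3 ]* trick above isn't familiar: it looks like A1111-style prompt editing, where the bracketed text only becomes active after a fraction of the sampling steps. A minimal sketch of the timing, assuming that reading:

      ```python
      # [text:0.3] in an A1111 prompt (or negative prompt) schedules "text" to turn on after
      # ~30% of the sampling steps, so the early composition-deciding steps run without it.
      negative_prompt = "[lowres, blur, artifacts:0.3]"

      steps = 30
      activation_step = int(0.3 * steps)
      print(f"negatives active from step {activation_step} to {steps - 1}")  # ~step 9 onwards
      ```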

  • @kabirgomez7967
    @kabirgomez7967 Год назад +1

    I am "relatively" new to this, and everything was going very well, but halfway through the video I got lost, there were many fields that I don't know, but basically it is loading a good amount of images of the character and then creating a LORA, I think I can try

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      Hahhahaha sorry! Yep, thats basically the idea. And using a photoediting software to create some variations in the images

    • @kabirgomez7967
      @kabirgomez7967 Год назад +1

      @@Not4Talent_AI don't worry, English is not my primary language, that could explain it, but I'm working with all that I understand, thanks bro

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      @@kabirgomez7967 thank you for watching! hope it helps to some extent :3

  • @finalomega8894
    @finalomega8894 Год назад +1

    brush looking stuff-- cackle cackle cackle!🎉🍾

  • @NezD
    @NezD Год назад +1

    He got that Ghosthunters scanner!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Wtf😂😂 where?

    • @NezD
      @NezD Год назад +1

      At 1:32. That’s exactly what the spirits be looking like!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      @@NezD hhahhahahhahahaha ok I see now xD

  • @pladselsker8340
    @pladselsker8340 Год назад +2

    This video is a goldmine

  • @omegasrk
    @omegasrk Год назад +1

    How do I set my own character as the person who is going to be in the ControlNet?
    So basically I set poses with my design, then tell diffusion to put in my character (that I have uploaded).
    Thank you

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Responded in the other comment, but basically you would use controlnet as usual but the prompt would be something like
      "yourchar, holding an apple on the beach,

    • @omegasrk
      @omegasrk Год назад +1

      @@Not4Talent_AI If you can, make an entire video where you explain how we can do all these things. I want to work on a comic so I'm gonna need this tutorial

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      I will do that eventually, but it might take a while. (As I want to first make videos on how to fix/avoid the vast number of issues one might have while making a comic with AI).
      I'll note this down and try to make a short on it as soon as I can. I'll @ you when I post it if you want! @@omegasrk

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      @omegasrk short done! ruclips.net/user/shortsKm7EY49YbOA?feature=share

  • @MattFixesStuff
    @MattFixesStuff Год назад +2

    I use all the same settings and i get super bad results. Its not following the poses from open pose and if I put control weight to 2 (max) then its following the lines but its creating really bad fractured results.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      weird, have you tried changing checkpoints? Maybe going with a higher Resolution from the start instead of highres fix?

    • @lefourbe5596
      @lefourbe5596 Год назад +1

      Too much control will give you broken results.
      Too big a map will lead to subject shifting.
      Too rich a prompt will lower creativity.
      Too low a resolution will give a blurry mess.
      Draft your first work. Use it for img2img with Ultimate SD Upscale with a tile of 512*512.
      Optionally use the Tile controlnet to better guide the image at higher denoising.

    • @MattFixesStuff
      @MattFixesStuff Год назад +1

      @@lefourbe5596 @Not4Talent_AI Thank you. I got some better results now by amping up the resolution and making my own poses. It's not nearly as good as in your video but I think I can go from there :)
      Pretty amazing to see what is possible. And a bit scary.

  • @davidpoirierfilms
    @davidpoirierfilms Год назад +1

    16:25 I might be blind, but it seems the link is not in the description.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      My bad, added it now!
      ruclips.net/video/xXNr9mrdV7s/видео.html

  • @Gamecore-cdmx
    @Gamecore-cdmx Год назад +2

    Hello, it's the first time I use control net, and I'm trying to follow your tutorial, but I can't get controlnet to give me any results, it's as if it doesn't exist, it doesn't affect the final image.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      probably you are using a preprocessor when it should be at "none". If that's not the case, please contact me via email or discord with a screen capture of what you have as settings!

    • @Gamecore-cdmx
      @Gamecore-cdmx Год назад +1

      @@Not4Talent_AI I reinstalled stable diffusion and then I downloaded the open pose file from github, it's working perfectly now. I don't understand what went wrong before, but I'll finish your tutorial now. Thank you for your reply, your videos are amazing.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      glad to hear that! and thank you so much! @@Gamecore-cdmx

  • @chrisSandersASX
    @chrisSandersASX Год назад

    im about to give it a shot

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Hope it works well, gl!!

    • @chrisSandersASX
      @chrisSandersASX Год назад +1

      @@Not4Talent_AI getting better. Trying to get a model sheet so I can model a character

  • @mamorutoriyama
    @mamorutoriyama Год назад +1

    You have to use *Higher Resolutions* to get better generations from the get go. I've personally found if I generate at anything lower than 1024, the Ai can't produce enough detail to make a complete and coherent character/image, sometimes you might get something good, but the lower res you generate at, the worse the actual Design/Art quality.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      yeah, 100% true. character sheets with 512 are unintelligible xD

  • @urgyenrigdzin3775
    @urgyenrigdzin3775 Год назад +1

    Thank you so much! 👍👍👍

  • @burakgoksel
    @burakgoksel 8 месяцев назад +1

    Hey man, great job. Are you still using A1111 and lora training, or have you switched to ComfyUI?
    Another question I have is, when you were figuring all this stuff out, did you have any prior knowledge? Like software knowledge or graphic design. Especially when I look at ComfyUI workflows, it seems impossible to even set one up.

    • @Not4Talent_AI
      @Not4Talent_AI  8 месяцев назад +1

      Hi! Tyty
      I'm still using A1111, even though I don't train many loras. But that's bc I have no need for that atm.
      All the knowledge I have is from studying animation in uni. ComfyUI I haven't gotten to yet, but I have worked with nodal stuff before like Nuke, Maya, UE, Blender... so it isn't as intimidating to me.
      I don't know about installing all that stuff tho, haven't done it yet 😂

  • @DanielThiele
    @DanielThiele 11 месяцев назад +1

    do you have suggestions for creating a character sheet based on my own character? I have one illustration in my own style and now want to make a character sheet based on that reference.

    • @Not4Talent_AI
      @Not4Talent_AI  11 месяцев назад

      I think it is now possible, maybe with help of sites like: huggingface.co/spaces/sudo-ai/zero123plus-demo-space
      Also IP adapters can help.
      I know that @lefourbe5596 made a dataset from just 1 image, but no idea how atm hahahhaha

  • @TheFear434
    @TheFear434 Месяц назад +1

    what control net model do i need to use for pony? The one in the video wont work for it sadly

    • @Not4Talent_AI
      @Not4Talent_AI  Месяц назад

      Pony uses the same as SDXL I think. It shouldnt give you any issues.
      If you use "noobAI" though, you will need a different controlnet model.
      NormalSDXL models: civitai.com/models/136070/controlnetxl-cnxl
      NoobAI models:
      civitai.com/models/929685/noobai-xl-controlnet
      IF you need to look for models, civitAI will 90% of the time have them

  • @henrylarson1
    @henrylarson1 2 месяца назад +1

    Love this guide as I’ve been using OpenPose for a week or so now. However, no matter what guidance I provide the generated image won’t do multiple poses matching the guide poses. Could it be my model? Any tips?

    • @Not4Talent_AI
      @Not4Talent_AI  2 месяца назад

      hmmmmmm that's kind of weird, maybe it is the model/checkpoint you are using. Make sure that if you are using SDXL you use cn for sdxl. and if it's 1.5 then you use cn for 1.5 as well.
      There are also more recent guides that follow this process if you are interested. I recently saw this one which is a more advanced and newer version of this:
      ruclips.net/video/MbQv8zoNEfY/видео.html

  • @SorcerWizard-f8f
    @SorcerWizard-f8f Год назад +2

    I am curious, would this work for a character that you already have the reference image for? Basically I generated a character and am pretty happy with it, but I'm trying to work out how to generate that same character in different poses while keeping hair, face and clothing consistent. If I try to change the pose it either changes how the character looks or the pose doesn't change, even with controlnet and openpose

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Yep, that's the usual problem. We haven't found much of a solution tbh. There are a few options. Won't be perfect but it's your best shot.
      1- train a lora with that one image / separating it, flipping it, editing it etc... (to have as many variations of that same image as possible and make the training a little more flexible).
      2- generate a lot of images describing your character and cherrypick the ones that look most like it. Then train a lora with those
      3- a mix of the 2 options
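
      If it helps, here's a minimal Pillow sketch of option 1, building a handful of training variations from a single reference image (the file names, output folder and crop box are placeholders, adjust them to your own image):

      ```python
      # Minimal sketch of option 1: derive a few dataset variations from one reference image.
      from pathlib import Path
      from PIL import Image, ImageOps

      src = Image.open("reference.png").convert("RGB")   # placeholder file name
      out_dir = Path("dataset/mychar")
      out_dir.mkdir(parents=True, exist_ok=True)

      upper_body = src.crop((0, 0, src.width, src.height // 2))  # rough upper-body crop
      variants = {
          "original": src,
          "flipped": ImageOps.mirror(src),               # horizontal flip
          "upper": upper_body,
          "upper_flipped": ImageOps.mirror(upper_body),
      }

      for name, img in variants.items():
          img.save(out_dir / f"mychar_{name}.png")
          # you would normally also write a caption .txt next to each image before training
      ```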

    • @SorcerWizard-f8f
      @SorcerWizard-f8f Год назад +1

      @@Not4Talent_AI I see thanks for the advice - I will give it a look over the week

    • @lefourbe5596
      @lefourbe5596 Год назад +2

      @@SorcerWizard-f8f actually I'm doing exactly that thing... I got decent results but it has to be forced with openpose.
      The less you have, the more frozen your generation gets. Move an arm and it goes to hell FAST!
      Flipping manually saved the day in the dataset, as the character can somehow face two directions and have both arms down. I've yet to generate a correct lower body to feed it. My V1 lora has 9 of the same image, my V3 lora has 22 carefully selected and cleaned images.
      However you will fight your SD model. I have an anime girl that is BLACK and SD anime models are usually racist... can't really get more than a tanned skin color.
      My solution is to merge your favorite model with a good general/digital model. In my case AnyAnimeMix with Dreamshaper brings back the dark skin tone a bit, along with some finer details that AnyAnimeMix lacks.

  • @DarkPhantomchannel
    @DarkPhantomchannel 10 месяцев назад +1

    Great video!!!!! Only one question: about how much time does it take to do all this process with details and refinement?

    • @Not4Talent_AI
      @Not4Talent_AI  10 месяцев назад

      Thanks!!
      It will depend on your pc speed and luck when generating the character. Also the complexity of it.
      So I can't really give a good estimate for this.
      For me it took like 2 hours to prepare an ok dataset once I knew what I was doing. And then training took a bit more.
      With my current pc it would be a total of maybe 2 hours if the character isn't super hard.
      If you don't care super ultra much about the character having a lot of precision, then you could do this in 30 min + wtv it takes for the training

  • @Anniefonde
    @Anniefonde Год назад +1

    Hi! I didn't quite get how CharacterTurner works, as I'm not totally familiar with the program. Can you explain that in more detail? Thank you!

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      hi! Ended up not using it, but it's a LORA. You basically download it and, adding it to the prompt, it will make the image a turnaround. Sorry for the late response!!!

  • @gglivetv
    @gglivetv Месяц назад +1

    My model doesn't seem to "attach" to poses. If I use the model you use I get this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (616x2048 and 768x320)

    • @Not4Talent_AI
      @Not4Talent_AI  Месяц назад +2

      this is probably because the controlnet model you are using was made for a different type of checkpoint. If you are using an SDXL checkpoint, use an SDXL ControlNet OpenPose model.
      For a NoobAI checkpoint use a NoobAI controlnet, and same for a Pony checkpoint with a Pony model.
      In the video I'm using an SD 1.5 checkpoint, so I use an SD 1.5 controlnet
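
      The same rule, shown outside the webui as a small diffusers sketch (the Hugging Face model ids are examples, swap in the checkpoints you actually use); mixing the two families is exactly what produces that shape-mismatch error:

      ```python
      # Minimal diffusers sketch: the ControlNet must come from the same family as the base
      # checkpoint (SD 1.5 with SD 1.5, SDXL/Pony with SDXL). Model ids below are examples.
      import torch
      from diffusers import (
          ControlNetModel,
          StableDiffusionControlNetPipeline,      # for SD 1.5 checkpoints
          StableDiffusionXLControlNetPipeline,    # for SDXL-based checkpoints (incl. Pony)
      )

      # SD 1.5 checkpoint + SD 1.5 OpenPose ControlNet
      cn_15 = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
      pipe_15 = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", controlnet=cn_15, torch_dtype=torch.float16
      )

      # SDXL checkpoint + SDXL OpenPose ControlNet
      cn_xl = ControlNetModel.from_pretrained("thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16)
      pipe_xl = StableDiffusionXLControlNetPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", controlnet=cn_xl, torch_dtype=torch.float16
      )
      ```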

    • @gglivetv
      @gglivetv Месяц назад +1

      @Not4Talent_AI thanks!

    • @Not4Talent_AI
      @Not4Talent_AI  Месяц назад +2

      @@gglivetv if that doesnt work make sure that the proportions of the controlnet reference and the image you are generating somewhat match. Or change the way controlnet uses the proportions (like, just resize, crop, etc)

  • @nitingoyal1495
    @nitingoyal1495 10 месяцев назад +1

    Hey, I am working on a project to create a comic book. First the user would define the character and then narrate the whole story. Can you tell me if, for my case, it would be a good idea to train a lora using the character description and then use it while generating images for the narration part? AND how much time does it take to train a character LORA, given that I am working on an AWS EC2 instance with 16 GB GPU access? Also I want to automate all the steps in code (without doing them manually). Can you tell me if that is possible? THANKS

    • @Not4Talent_AI
      @Not4Talent_AI  10 месяцев назад

      I think it is a possible idea, some websites have already started doing similar stuff so it also must be possible to automate.
      hardest part would be the correct upscaling and cleanup. (making sure that the generated character makes sense before starting the training)
      then, for a 16GB gpu, a lora of something around 20 images, should take 15-20 min? maybe? I'm not sure tbh, has a lot of "ifs" involved.
      It would take a while to do and figure out how to solve some of the possible issues you might encounter along the way, but I do think it is possible to do.
      Would do some manual testing before investing a lot of time into it tho

    • @nitingoyal1495
      @nitingoyal1495 10 месяцев назад +1

      Thanks for your reply. Really appreciate your content!!@@Not4Talent_AI

    • @Not4Talent_AI
      @Not4Talent_AI  10 месяцев назад

      np! thank you for watching it! @@nitingoyal1495

  • @acetum_
    @acetum_ Год назад +1

    When I select the OpenPose control type and select the preprocessor as "none" my model also appears as "None." I feel like this is causing my outputs to end up not looking like a character sheet despite using the provided OpenPose references. Is there anyway I can fix this?

    • @acetum_
      @acetum_ Год назад +1

      UPDATE: It's been a good couple weeks since I've tried this "tutorial." Back when I installed controlnet I didn't realize I needed the models themselves. That was my main issue right there. I'm going to use this comment as a log for my progress (if I decide to continue)

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      I take it you don't currently have the "none" issue, right?
      Just in case: you need to download the models. Once you have them properly placed in your stable diffusion > models > controlnet folder, you'll be able to select any model you want. You can do this by clicking on the drop down menu. If you don't find it you can just click on the "open pose" button. That will automatically add the openpose preprocessor and model. You can just take out the preprocessor and it should work fine. @@acetum_

  • @RodrigoLimaDias-i7c
    @RodrigoLimaDias-i7c 11 месяцев назад +1

    Has anyone gotten something like a spritesheet of a warrior slashing with SD? The biggest problem I faced (with other AIs) was that for something like a slash, the poses, frames and sequences of the action were completely unknown to them. So my question is whether someone can generate consistent spritesheets of actions like slash, smash, dash and others with SD.

    • @Not4Talent_AI
      @Not4Talent_AI  11 месяцев назад +1

      Atm I have no idea, but you pose an interesting issue that might be very worth to look into. I'll note it down and see what I can do!

  • @DarkStoorM_
    @DarkStoorM_ Год назад +1

    The "like first, then watch" gang reporting in 😂

  • @rhinosdesigns291
    @rhinosdesigns291 2 месяца назад +1

    how do we do it if we already have the face and the pose sheets but want a consistent model, is the method the same? Do you have any tutorial on applying a given face from different angles to a consistent char? ty

    • @Not4Talent_AI
      @Not4Talent_AI  2 месяца назад

      hey!! Yes you would need to do the second part of the tutorial. Which would focus on training the model on your images.
      If you have side views of the faces it would be better than not having them. The more variety on poses the better.
      I think this works mainly for realism, but tools like the ones portrayed in this video (that let you pose the face) will probably come out for anime or cartoon soon enough too; ruclips.net/video/MbQv8zoNEfY/видео.html

  • @srikantdhondi
    @srikantdhondi Год назад +1

    I watched this many times, but could not follow the video. Could you please make a video on "Character Consistency SOLVED in Stable Diffusion", since I could not find one on youtube?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      Sorry about that! My idea of "character consistency solved" is pretty much this video😂 I'm trying to find other ways with 3D and stuff, but for now this is all I was able to get.
      If you want, you could tell me what it is the video doesn't explain clearly, so I can try to improve on it when, eventually, I make an update on the method.
      Again, srry. And thanks!

  • @Showbiz_Stuff
    @Showbiz_Stuff 5 месяцев назад +1

    Thanks for the video! What does BREAK do?

    • @Not4Talent_AI
      @Not4Talent_AI  5 месяцев назад

      tokens at the start of the prompt have more weight. BREAK acts like a second "start of the prompt" to reset the token weights after it.
      Can help with prompt comprehension
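
      A rough sketch of that idea (assuming A1111-style behaviour, where the prompt is encoded in 75-token chunks and BREAK force-closes the current chunk; the example prompt is made up):

      ```python
      # Each BREAK-separated part is padded out and encoded as its own chunk, so the words
      # after BREAK sit at the start of a fresh chunk instead of at the tail end of the first.
      prompt = "close up of a man, red jacket, blue scarf BREAK white background, soft lighting"

      chunks = [part.strip() for part in prompt.split("BREAK")]
      for i, chunk in enumerate(chunks):
          print(f"chunk {i}: {chunk!r}")
      ```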

    • @Showbiz_Stuff
      @Showbiz_Stuff 5 месяцев назад +1

      @@Not4Talent_AI thanks!

  • @lefourbe5596
    @lefourbe5596 Год назад +1

    Hello there! It's Le_Fourbe!
    Damn, why does YT throw me an error when I comment? I'm so late!
    Anyway, I'm here so I can answer some of the questions that we will probably cover in part 2 :)

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      Wtf no idea tbh 😂 this one worked😂

  • @PrinceEnki777
    @PrinceEnki777 Год назад +2

    wish midjourney could do this... damn, invested in the wrong one lol

    • @lefourbe5596
      @lefourbe5596 Год назад +2

      Don't feel sad : you Can generate great reference sheet on midjourney. And way more easily.
      Try out "character expression sheet, reference art, 9 head --ar 1:1"
      Then train it on stable diffusion 😂

  • @TheOriginalOrkdoop
    @TheOriginalOrkdoop Год назад +1

    Do you have any tips and tricks for training objects? Like chairs from a specific era? Or cakes in a specific style? I am trying to make a cake lora that will give me the flexibility of color and design, but all within a specific cake decorating technique. I have already made a few successful ones, but they are not perfect and I am going down a rabbit hole of perfection. can anyone recommend a video specifically about making a Lora for objects instead of characters? I'm going crazy.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      I don't yet, unfortunately I havent gotten much further with actual training practice cuz my GPU is not that fast. It is on my list of things I need to investigate for sure tho.
      (since it is imperative to have in case you want to make a comic or an animation, and that's where my experimenting is being focused atm)

  • @DarkNixilius
    @DarkNixilius Год назад +1

    I have a question about regularization images.
    Do they have to be in 512*512, or 768*768 or 1024*1024 format? Or can we make it in 768*1024 format for example? Thanks.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      all can work; try to have a balance that is most similar to your dataset. If your dataset is 1:1, then most regularization images should be 1:1

  • @ClaraZfengshuicafe
    @ClaraZfengshuicafe Год назад +1

    Thanks so much for the video. I'm wondering is there a way to make each scene move and transition like a short film?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      With AI I'm not sure tbh. Don't really get what you mean either without a visual reference, srry.
      Thanks for watching btw!!

  • @Carmidian
    @Carmidian Год назад +1

    When making the character sheet in the beginning would it be ok to make them completely naked then add clothing when you go ahead to use them?

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      Wouldn't recommend that, but it is possible. The problem is that the character will be naked most of the time if you train it like that

  • @saintSP
    @saintSP Год назад +1

    Where did you make the poses, with faces and hands? I've been searching and I can't find it 😢

  • @begna112
    @begna112 Год назад +1

    I cannot get my images to generate with an all-white background no matter what I do. They're always adding some kind of abstract cloth flying around or a painted fresco background. Any advice? I even tried img2img as you suggested here and can't get it. I'm fairly new to writing prompts so maybe I'm just not doing the negative prompt properly or something

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад

      hmmmmm did you change models? what is your prompt and negative prompt?

    • @lefourbe5596
      @lefourbe5596 Год назад

      To get around that, instead of txt2img you should use img2img with a WHITE BACKGROUND base image. Dial the denoising up to the max and prompt for (white background:1.3).
      The white base image will influence the generation (like a hint left behind)
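
      A tiny sketch of that base image, for anyone who wants to make it programmatically (the 512x768 size is just an example; match it to your generation resolution):

      ```python
      # Create a plain white base image for img2img, as described above.
      from PIL import Image

      Image.new("RGB", (512, 768), "white").save("white_base.png")
      # load white_base.png in img2img, set denoising strength near the max,
      # and keep (white background:1.3) in the prompt
      ```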

  • @lamdelmundo8492
    @lamdelmundo8492 Год назад +2

    This will be soooo useful for making webcomics 😮

  • @pladselsker8340
    @pladselsker8340 Год назад +1

    Do you think starting with 1 image would work with the cherrypicking loop?
    For example, you could use controlnet to force other poses, and then do some cleanup to have better training examples.

    • @Not4Talent_AI
      @Not4Talent_AI  Год назад +1

      I think it could yeah. You can always create variations of 1 image by hand. Even cropping and separating the character, flipping the image and rotating it a bit can help.