Create consistent characters with Stable Diffusion!!

  • Published: 13 May 2024
  • Create your own consistent characters with Stable Diffusion! Even training a LORA to use it however you want.
    Join our Discord server- / discord
    to learn about & get help with this and more!
    References used in the video:
    drive.google.com/file/d/1-XOM...
    ------------ Links used in the VIDEO ---------
    Lora training and install guide:
    • LORA training EXPLAINE...
    DynamicPrompts Extension: github.com/adieyal/sd-dynamic...
    CharacterTurner Emb: civitai.com/models/3036/chart...
    Detail Tweaker LORA: civitai.com/models/58390/deta...
    MoreDetails LORA: civitai.com/models/82098/add-...
    TagManager(BDTM_V1.6.5_NetCore60_AllInclude): github.com/starik222/BooruDat...
    ------------ Social Media ---------
    -Instagram: / not4talent_ai
    -Twitter: / not4talent
    Make sure to subscribe if you want to learn about AI and grow with the community as we surf the AI wave :3
    #aiart #digitalart #automatic1111 #stablediffusion #ai #free #tutorial #betterart #goodimages #sd #artificialintelligence #inpainting #outpainting #img2img
    #consistentCharacters #characters #characterdesign #personaje #midjourney
    0:00 intro
    0:11 Problem
    0:30 Basic current Solution
    0:54 Our first step
    1:04 Making OpenPose Refs
    1:45 Our Character
    1:56 Base SpreadSheet
    2:30 Hires Fix the result
    3:00 Objective
    3:13 Easy tricks (variations)
    3:42 Secure a White BG
    4:10 Creating new poses
    4:48 Next step
    4:58 comic example
    5:12 Clean up
    6:55 Clean up realistic
    7:25 Upscaling
    9:05 Regularization images
    9:45 Dynamic Prompts
    10:42 Prompts script
    10:52 Using wildcards
    11:22 Second Step
    11:38 Dataset Example
    11:56 Roop Fixing
    12:06 Fixing and cleaning
    12:30 Separating the poses
    13:48 Creating compositions
    14:42 Make Light variations
    15:00 Blur and extra steps
    15:14 Examples
    15:40 Make it however
    16:08 Final Step
    16:20 KohyaSS needed
    16:48 Renaming
    17:05 Tagging
    18:30 Realistic style captions
    18:46 Training Folders
    19:15 Model Choosing
    19:37 training parameters
    21:30 Sample image steps
    22:20 IMPORTANT STUFF
    22:45 DISCORD SERVER
    23:25 Results
    23:50 Improve results infinitely
    24:06 Change the LORA's style
    24:23 Limits
    24:40 Realistic Stress Test
    25:20 Prototype idea
    25:40 Possible solutions
    26:25 Let's Work together!
    26:38 Ty for watching

Comments • 561

  • @1august12
    @1august12 10 months ago +12

    Thanks for making these videos! I started playing with stable diffusion a couple of days ago and binged all your videos. SD is honestly too fun, I sat up to like 4 am yesterday inpainting instead of in-bedding😅
    I'm really impressed that your videos are so concise without being hard to understand. Not to mention funny! Everything looked really daunting at first but I just want to learn more, and you make that a lot easier, and a lot more entertaining. So thanks!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      thank you for the kind comment!! Glad you are enjoying it :3

  • @JimiVexTV
    @JimiVexTV 10 months ago +5

    Thank you kindly for this in-depth, yet concise breakdown. Done a bit of lora training myself, but was really free-wheeling it, and this has given me a lot of ideas to improve my results. Vastly underrated channel, liked and subbed my G

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      thank you so much!! We are trying to make an in-depth video for lora training with Lefourbe, so people can train without having to "guess" what parameters are good. Let's see how that goes xD
      but tyty! hope it helped

  • @Aleshotgun
    @Aleshotgun 6 months ago +4

    I just dug into stable diffusion and your info is an absolute life saver!!

  • @imnotcabs1805
    @imnotcabs1805 10 months ago +10

    Hey, new viewer here. You might get a lot of this, but I also want to share my piece. Thank you for all this insightful content on your channel. I've been dabbling with Stable Diffusion and AI image generation, and you are one of the few people who give out in-depth, no-bs, and actually helpful videos here on YouTube. I really appreciate you man! I'm currently learning ControlNet and you make it a lot easier with your tutorials.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +2

      hey!! thank you so much!! Appreciate it fr :3 Glad you are finding it useful and hope to keep providing informative content!

  • @patrickstonks7841
    @patrickstonks7841 10 months ago +28

    Wtf. Literally was looking for this exact thing to get a consistent character yesterday. You're a legend.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +2

      hahahha hope it helps!!

    • @woofkaf7724
      @woofkaf7724 3 months ago

      Your phone is watching you

  • @Sophias-Universe
    @Sophias-Universe 4 months ago +3

    Thank you for taking the time to share your knowledge!

    • @Not4Talent_AI
      @Not4Talent_AI  4 months ago +1

      thank you for watching and the positive comment!

  • @Retroxyl
    @Retroxyl 10 months ago +1

    Super cool video and explanation. I made a character of sorts just yesterday, so training my own lora would be really helpful, I guess. I'll try it next week and see how it works out.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      thanks!!! If you need help we'll be happy to give you a hand on the discord :3 Hope the video helped hahahaha

  • @audiogus2651
    @audiogus2651 10 months ago +4

    I trained a checkpoint on a 3D model when Dreambooth first came out last year and it turned out fairly well in that I could change backgrounds and poses. I tried again the other day on a Lora and it was terrible. I was left scratching my head until I saw your video and you explained that all of the auto captioning (which did not exist back then) was likely throwing it off. Thanks so much for the tip, can't wait to try it again! Exciting stuff!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      ty!!! We are working on a fairly in-depth LORA training guide, so if you keep running into a wall I hope that video helps when it comes out! (And then there is the discord for help ofc XD)

    • @lefourbe5596
      @lefourbe5596 10 months ago +2

      you're just like me then. my avatar here is made from a 3D video game model i made. lucky for you, i've not given up trying, and examples will follow soon

    • @audiogus2651
      @audiogus2651 10 months ago +1

      @@lefourbe5596 I would share the one I made last year but alas it is for work. Was pretty easy in dreambooth, I wager if I just filter the excess of the auto captioning I should be OK.

  • @jibcot8541
    @jibcot8541 10 months ago +1

    Very good in-depth video on Lora Training!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Thanks!! an even more in-depth one incoming soon enough xD

  • @justinwhite2725
    @justinwhite2725 9 months ago +4

    I love that this video doesn't gloss over the fact that a lot of touch up is necessary.

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      I always try to encourage the use of external tools and skills if possible hahha tyty!

  • @substandard649
    @substandard649 10 months ago +2

    Super interesting. Thanks for your hard work, I'm exhausted just watching 😂

  • @MooreKatMoore
    @MooreKatMoore 9 months ago +3

    Thank You so much. I do anime edits and copyright is a huge issue right now, and switching over to AI has been easier, but people love the stable diffusion stuff more than just pictures 😅 I've been looking for a way to get more consistent characters too ❤ you get a sub my brother!

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      Thank you so much, hope it helps!!!

  • @deejaytabz1
    @deejaytabz1 10 months ago +1

    Thank you so much for your resources. you are a legend bro! I have also joined your discord channel.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      thank you for watching! hope it helps!

  • @inkmage4084
    @inkmage4084 9 months ago +62

    As an artist I initially did not like the AI stuff.. But as I am working on remaking a game I made way back in high school, this is a massive time saver. I'd take my designs and run them through the AI and get different variations that have allowed me to quickly finish a character's final redesign. This is quite amazing; it also saves money, as I am using this very technique to have 3D models done of the main character, which I will later have printed as a statue. I will also be using it to have figures done of the characters in another project.

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago +2

      super great to hear!!! really curious about the 3D model aspect tbh (as a 3D modeler xD)

    • @JuriBinturong
      @JuriBinturong 9 months ago +9

      I think the people who would benefit the most from these AI tools are real artists like yourself.

    • @inkmage4084
      @inkmage4084 9 months ago +2

      @@Not4Talent_AI That is awesome! Thanks for this video, definitely glad I subscribed too!

    • @lefourbe5596
      @lefourbe5596 8 months ago +5

      it's heartwarming to see that some people find the right use amid the dark spots of this revolution.
      i've started learning Blender for a bit for character modelling. I was painfully missing original 2D references.
      then i saw Royal Skies' video and was sold instantly... however i have not touched Blender since. time and such, you know :/

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +2

      @@lefourbe5596 time is a b***

  • @GryphonDes
    @GryphonDes 4 months ago +1

    Great video! Fun ideas and it was great to follow along!

  • @blacksage81
    @blacksage81 10 months ago +1

    Great tutorial, and I had just finished wrestling with the Character Turner embedding, Lora, and OpenPose to finally get repeatable character sheets. So this video came just in time. Also, when upscaling with ControlNet I've found that you actually don't need to load the image you want to upscale into ControlNet itself; all you need is to select Tile and set "ControlNet is more important".

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Thanks!!
      I saw that in a discussion but I always import it just for placebo xD (even though it is probably a bad idea cuz sometimes I import the wrong image and mess up the whole thing hahahahahhaa) should probably stop doing that.

  • @planktonfun1
    @planktonfun1 8 months ago +1

    Thanks, this helps a lot with stable diffusion's limitations

  • @theprodigalshrimp
    @theprodigalshrimp 10 months ago +1

    So good, thank you for the new knowledge.

  • @TheElBudo
    @TheElBudo 10 months ago +7

    Great accumulation of info and workflow, and the topic of consistency is so important.++
    Consider breaking your next videos into 10-minute segments (which means more videos for you!) so they're more digestible for us. Separate them into bite-sized skills all under one related thread or collection of videos.
    Yours is the only tutorial I've had to slow the playback down on to fully hear what you're saying, because your visuals are also moving very quickly. You can fill the extra time by making sure you explain the next step, not just showing what to do but WHY you're doing it; plus you can give some alternative examples of what you're demonstrating.
    Great work, but it feels like I'm "rewinding" more than I'm playing!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Thanks for the feedback! I don't know about making them 10 min each, but we plan on making a first video covering the basics and how to understand them. And then another video with more advanced info.
      Tyty!!

  • @LogoCat
    @LogoCat 8 months ago +1

    This video tutorial and magic numbers are legends.

  • @guilhermegamer
    @guilhermegamer 10 months ago +14

    HERE I AM! Ready for another GREAT video! =D

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      hahhahahha tyty! Hope it lives up to the expectations XD

    • @lupeck
      @lupeck 10 months ago +1

      a YouTube classic @guilhermegamer

  • @ChloeLollyPops
    @ChloeLollyPops 10 months ago +1

    This is great, thank you!

  • @shimmyfpv7472
    @shimmyfpv7472 2 months ago +1

    Thanks for the tutorial!

  • @TheRoyalSkies
    @TheRoyalSkies 3 months ago +2

    Good stuff bro! Keep it up!

    • @Not4Talent_AI
      @Not4Talent_AI  3 months ago

      yooo sup royal! Thank you so much!
      (fun fact I'm having to learn blender for the next vid XD)

    • @lefourbe5596
      @lefourbe5596 3 months ago

      🥳 i'm sure you could make a 3-min version of this!
      there is much cleaning to be done and parts to divide

  • @namesas
    @namesas 10 months ago +1

    Great video as always!

  • @slackerpope
    @slackerpope 2 months ago +1

    Fantastic video! Subscribed! ❤

  • @mariolombardo9284
    @mariolombardo9284 10 months ago +2

    your videos are just too good!!!!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      thank you so much!! hope they help :3

  • @japaoyagami3273
    @japaoyagami3273 5 months ago +1

    How cool!!!
    Thank you so much for the video!!!

  • @USBEN.
    @USBEN. 10 months ago +2

    Epic video, huge value.

  • @amaterasu5001
    @amaterasu5001 10 months ago +1

    thank you man for remembering to make this. big love from me

  • @CherryLolotv
    @CherryLolotv 4 months ago +1

    Thanks for your help!

  • @1tponie
    @1tponie 10 months ago +1

    bro, there are channels with millions of subscribers and i can't learn as much from those channels. this channel is GOLD. liked and subbed.

  • @shadowdemonaer
    @shadowdemonaer 10 months ago +22

    If you struggle to get consistent faces, I highly recommend making the face in Vroid Studio and Photoshopping it in. When making a lora, it's also necessary to get close-ups of the faces, and of details on the clothes that you will want to be able to inpaint later in case the program struggles.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +2

      Thanks!!

    • @lefourbe5596
      @lefourbe5596 10 months ago +4

      Good idea.
      I know Vroid and it has gotten pretty good at making anime OCs.
      You can use any videogame character maker to get close to the style you are looking for.

  • @urgyenrigdzin3775
    @urgyenrigdzin3775 10 months ago +1

    Thank you so much! 👍👍👍

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      hope it helps! ty for watching!

  • @ClaraZfengshuicafe
    @ClaraZfengshuicafe 8 months ago +1

    Thanks so much for the video. I'm wondering, is there a way to make each scene move and transition like a short film?

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago

      With AI I'm not sure tbh. Don't really get what you mean either without a visual reference, sorry.
      Thanks for watching btw!!

  • @ProjectShinkai
    @ProjectShinkai 9 months ago

    I'm about to give it a shot

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      Hope it works well, gl!!

    • @ProjectShinkai
      @ProjectShinkai 9 months ago +1

      @@Not4Talent_AI getting better. trying to get a model sheet so i can model a character

  • @user-oy9dz3xj2n
    @user-oy9dz3xj2n 2 months ago +1

    no way, you're a genius. for some reason I had the idea of what to do but didn't know how, and your video came to me like a gift from heaven. I love you, you're a legend

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago

      hahahahaha glad to hear it!! Thank you!

  • @nyanbrox5418
    @nyanbrox5418 10 months ago +6

    What I love about videos like this is *someone* is going to make a tool that simplifies all of these steps, maybe AI to generate new poses too?

  • @CherryWood-kq1gr
    @CherryWood-kq1gr 10 months ago +4

    Great idea! But a character sheet in one drawing style will make the Lora learn that style. That's why it makes sense to use the first result to create new images with more variety and retrain again, as suggested.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      true, that's a pretty good idea. Instead of having the lora already trained in a style and retraining from there, create different styles directly, right? (not sure if that is what you mean, but it sounds possible and nice)

  • @AbyssalPrimarch
    @AbyssalPrimarch 10 months ago +1

    This looks amazing! I hope I can figure out how to do it xD for now I gotta figure out why there are no models to be found in my ControlNet section ha

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      You need to download them from the official Hugging Face page! (I think it was on Hugging Face. Can't really check rn but I have a video on it where all the info should be)

    • @AbyssalPrimarch
      @AbyssalPrimarch 10 months ago +1

      awesome tyty! I'll dig it up and see how it's done

  • @Bulldog-Chelista
    @Bulldog-Chelista 7 months ago +1

    so much information

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      Yeah, I had to hold back too xD

  • @LanaABA
    @LanaABA 8 days ago

    You seem to know so much, thanks for sharing! Frankly, for a beginner the majority of the videos are hard to understand though. You present so many different features and techniques in one video that it gets overwhelming. Would appreciate it if you also make some videos in the future for noobs like me, with a slower pacing and less concepts but more in-depth explanations 😅❤

    • @Not4Talent_AI
      @Not4Talent_AI  8 days ago

      Tyty!!
      Hahaha I've been told that; true that the channel is more aimed towards people with more experience.
      I have a vid for most concepts touched on in this one, but they are still just as fast😂
      I might go back and update some vids on the basics eventually, in a calmer and more beginner-friendly way hahah
      Ty for the feedback!!

  • @cofiddle
    @cofiddle 10 months ago +49

    Really like how you're encouraging the use of photoshop and other outside tools. Really emphasizes how powerful ai can be for an artist's workflow. Also, wanna make a huge shoutout to generative fill for that clean up step. Being able to just ask it for a ribbon or something and get multiple results until I see one I like, it's incredible what these tools are becoming capable of lol.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Ohh true, so used to old photoshop I forget generative fill XDDDDD really nice thing to keep in mind for sure.
      And yeah, these tools advance so fast it's mind-blowing xD
      Thank you!!

    • @-rcrc-r7624
      @-rcrc-r7624 10 months ago +3

      pretty sure we don't need any artist for this workflow

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +3

      @@-rcrc-r7624 no, but if you want perfect results and fully custom characters, the best way is to use artistic skills. Either your own or someone else's

    • @-rcrc-r7624
      @-rcrc-r7624 10 months ago +2

      @@Not4Talent_AI won't need it in the future, and I don't think people would be willing to learn their whole life just to do ps on AI images, it just doesn't make sense

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +3

      @@-rcrc-r7624 hahahaha no ofc. But I'm a firm believer that artistic skills help a lot in the AI art space, as you can compensate for a lot of AI's shortcomings. At least current shortcomings

  • @morizanova
    @morizanova 10 months ago +1

    Thanks for this video. Although I doubt I'd be able to prepare and fix a dataset like yours, now I somewhat understand the basic concept, idea, and workflow needed. And about the discord invitation, yeah... when Lex offers to let you hang out in his crib, you should come ready to feel amazed

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      ty!! I hope it gets easier with time and testing. If it does, I'll do a new video. atm there is quite a bit of manual labor involved xD

  • @audiogus2651
    @audiogus2651 10 months ago +4

    Lol, 'freedom signs', this guy is a comedian😂

  • @anitaart6501
    @anitaart6501 4 months ago +1

    Thank you for making these videos!

  • @evil1knight
    @evil1knight 10 months ago +3

    You should try the same process but start with a 3D model character you can pose and use as the Lora training data. I feel that would be the best process for a small studio: pay an artist to make a custom 3D character, then use that as the Lora training base

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      I think that could be interesting, yeah. Mainly seeing how well the LORA can capture the character. (Lefourbe tried something like that, with a very hard character, and he is getting pretty nice results.) So it should be possible, I think.
      Things to note in the "future testing" notebook for sure! tyty!

    • @lefourbe5596
      @lefourbe5596 10 months ago +1

      Yeah, i'm doing that mostly!
      I'm trying to make some examples for the next video.
      The 3D characters i have are not so well made and i was hoping to improve their visuals with SD without sacrificing their design.

  • @paulrayner6744
    @paulrayner6744 10 months ago +5

    I will kindly argue with some of your points:
    1. When you caption the character you should describe the outfit and any accessories as well. Trust me, you will have an easier time if you want to prompt your character in any outfit other than the default one, or to undress it.
    2. Increasing the max resolution of the training dataset to 768x768 does actually make a difference in the overall quality of the images. I would drop the batch size to 2 (this will be comfortable for most people without getting the CUDA memory error) and set the image resolution to 768x768. Lora training already takes little time, so don't sacrifice image quality for training speed.
    3. If you're a beginner in Lora training, don't bother with regularization images; you're overcomplicating things (I know in your video you said it's optional, just wanted to mention this)

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      Yes! I agree with everything. For better results, 768 or even 1024 is the best option, but more time consuming. If you want the perfect training then that's perfect.
      In this case we were testing, so I think 512 by 512 is the fastest option.
      Also true what you say about tagging. Even though I wasn't looking for an outfit change, as that is pretty much the character. The face and hairstyle are very standard😂😂
      And for epochs, also true, gpu is pretty much everything there hahaha

    • @lefourbe5596
      @lefourbe5596 10 months ago +3

      That's exactly what i told N4T when WE made the vid 😁. 1. You are right, and it's a choice we made. Describing everything will make the Lora harder to prompt and make comparisons between different designs difficult because of bleeding concepts.
      There is more in the video than i had planned. Many of these details would have gone in part 2 to me.
      Trust me, he is well aware. My best Lora is one that needs a long freak prompt to get every part of its complex design. (My profile pic)
      Facial mark, horn, asymmetrical, heterochromia, high collar, black sclera, ponytail, gauntlet, belt, coffin pouch...
      On the second point, i disagree, because GTX and 20** RTX cards exist and are slow with less VRAM.
      I get it, as i have a 3090, but even then i prefer to be able to tweak LR, models and network dim first. Especially as N4T was short on time and runs a GTX 1080.
      So yeah, 512 training first with your 1024 dataset.
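The resolution/batch-size/VRAM trade-offs debated in this thread map onto command-line flags in kohya-ss/sd-scripts. The sketch below is a hedged illustration, not the exact command used in the video: all paths, the model file, and the hyperparameter values are placeholders you would tune to your own GPU.

```shell
# Hypothetical LoRA training run with kohya-ss/sd-scripts (train_network.py).
# All paths and values are placeholders. 768,768 resolution improves quality
# but needs more VRAM; on 8 GB GTX/20xx-class cards keep 512,512 and a small batch.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="models/base_model.safetensors" \
  --train_data_dir="dataset/img" \
  --reg_data_dir="dataset/reg" \
  --output_dir="output/my_character_lora" \
  --resolution="512,512" \
  --train_batch_size=2 \
  --network_module=networks.lora \
  --network_dim=32 \
  --learning_rate=1e-4 \
  --max_train_epochs=10
```

As the comment above suggests, a common compromise is to keep a high-resolution dataset on disk and train at 512 first while you iterate on learning rate and network dim, then rerun at higher resolution once the other parameters are settled.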

  • @SovereignVis
    @SovereignVis 10 months ago +84

    Interesting. I'm a skilled artist that spends way too much time making details most people will never even notice. I have been thinking about trying to get into training an AI to draw things in my art style. 🤔

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +5

      If you decide to do so, I'd be super interested in how you feel about the results. I think it is a pretty interesting idea that is talked about in the AI space, but I've never actually seen anyone do it and comment on it.
      Hope to see you in the discord sharing your process! (if you don't mind ofc xD. If you do try it, we would love to help you find the best training result).
      Thanks btw!

    • @lefourbe5596
      @lefourbe5596 10 months ago +11

      you are basically the audience i'm trying to reach and save.
      i'm often seen as the greedy enemy in the field. but i know that the work SD does comes from artists. i'm merely a manager of a custom assistant.
      artists should use image AI, as they already have their own art style to train on, making their work with fidelity and efficiency.
      from this point on, self-made animation is not far away. a full story could be illustrated correctly by the hand of the master behind it. true artists have the most power over diffusion models and will put our best attempts to shame once they edit their generated images.

    • @SovereignVis
      @SovereignVis 10 months ago +2

      @@lefourbe5596 It sounds like it could be fun and I do have a lot of ideas for stuff that would take forever to draw. But not sure my PC can handle it. 😅

    • @Alau.Akhmetzhan
      @Alau.Akhmetzhan 9 months ago +6

      I am an artist too, and have been learning AI for a long time. I am also an ethnographer and have to recreate costumes and armor. For 20-plus years I have collected a lot of information to train AI to help me with the reconstruction of costumes just by giving it the name of an item and the historical period.

    • @Alau.Akhmetzhan
      @Alau.Akhmetzhan 9 months ago

      I would like to be on board your server. This was one of the most comprehensive tutorials, but it is hard to repeat and get the same result.

  • @b0lasater
    @b0lasater 7 months ago +1

    Wow, very detailed and beautifully produced video. My guess is that there is no fully online service that would allow a creator to do the things you describe here, but please correct me if I am wrong.

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      thank you so much!!
      As far as FREE online services for this, I wouldn't say it is possible atm, since google colabs are RIP.
      But there are a few paid options out there.
      Like thinkdiffusion for Automatic1111 (you probably could find a free solution to this as well, even though I can't tell you one atm cuz they usually either close up or end up changing to a paid model over time).
      And for the training, Dreamlook.AI is a very good option. (I have been sponsored by them in the past. But I still think they are an awesome option)
      If you have any questions or doubts, you can ask on discord; there are a lot of people that might be using online tools there as well.

  • @TheOriginalOrkdoop
    @TheOriginalOrkdoop 7 months ago +1

    Do you have any tips and tricks for training objects? Like chairs from a specific era? Or cakes in a specific style? I am trying to make a cake lora that will give me the flexibility of color and design, but all within a specific cake decorating technique. I have already made a few successful ones, but they are not perfect and I am going down a rabbit hole of perfection. can anyone recommend a video specifically about making a Lora for objects instead of characters? I'm going crazy.

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago +1

      I don't yet, unfortunately. I haven't gotten much further with actual training practice cuz my GPU is not that fast. It is on my list of things I need to investigate for sure tho.
      (since it is imperative to have in case you want to make a comic or an animation, and that's where my experimenting is focused atm)

  • @pladselsker8340
    @pladselsker8340 10 months ago +1

    Do you think starting with 1 image would work with the cherry-picking loop?
    For example, you could use controlnet to force other poses, and then do some cleanup to have better training examples.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      I think it could yeah. You can always create variations of 1 image by hand. Even cropping and separating the character, flipping the image and rotating it a bit can help.

  • @JussimirPasold
    @JussimirPasold 10 months ago +1

    Just subscribed to your channel, very good content; I will probably invest a lot of time watching most of your videos in the next few days... but I can't find a tutorial on how to set up Stable Diffusion in the first place... in your video it seems to be running locally, but I am on a macbook that doesn't have a very good graphics card... I wonder if it is easier and faster to set it up in the cloud (it seems to be able to run on Google Colab, but I don't know much about that setup process...). Do you have a tutorial that teaches how to configure it the first time?

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Thanks!!
      Not really though; I can't make one since I don't have a mac. But I would suggest using a google colab or, if your gpu is up for it, trying the mac install. (The instructions are on the same page as the windows and linux install.) If you have any problems tho, don't be afraid to ask on discord to see if someone can lend a hand. (Either our community or a bigger one like Olivio Sarikas's)
      Sorry to not be able to help :(

  • @JelliedGrapes
    @JelliedGrapes 9 months ago +4

    For those who are lazy (like me) here's the text at around 11:08
    close up of a man, {{1$$__cameraView__}}, {{1$$__orientation__}}, {{1$$__expression__}}
    full body shot of a man, dynamic pose, {{1$$__cameraView__}}, {{1$$__orientation__}}
    upper body shot of a man, {{1$$__orientation__}}
    Change "man" to "woman" if you want a woman
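The templates above rely on the Dynamic Prompts extension, which replaces each `__name__` token with a random line from a matching `wildcards/name.txt` file. A minimal sketch of that substitution step, with the wildcard lists hardcoded as hypothetical examples instead of read from files:

```python
import random
import re

# Hypothetical wildcard lists; the extension reads these from
# wildcards/*.txt files, one option per line.
WILDCARDS = {
    "cameraView": ["from above", "from below", "from the side"],
    "orientation": ["facing viewer", "facing away", "profile view"],
    "expression": ["smiling", "angry", "surprised"],
}

def expand(template: str, rng: random.Random = random) -> str:
    """Replace each __name__ token with a random option from its list."""
    def pick(match: re.Match) -> str:
        return rng.choice(WILDCARDS[match.group(1)])
    return re.sub(r"__(\w+)__", pick, template)

# Each call yields a different combination, which is what makes
# wildcard prompts useful for generating a varied dataset.
print(expand("close up of a man, __cameraView__, __orientation__, __expression__"))
```

This ignores the extension's `{N$$...}` variant syntax (which picks N options at once); the point is only how one template can fan out into many distinct prompts.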

  • @landslide8703
    @landslide8703 10 months ago +1

    great video

  • @baihu5914
    @baihu5914 10 months ago +1

    loving your videos man, can i ask: is there a way to generate a specific character, like "copy an old anime character's look and make more out of it"?

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      Ty!!!
      If the character is old, you can get images of that existing character and train the AI directly with them. No need to go through the full initial process

  • @DarkNixilius
    @DarkNixilius 8 months ago +1

    I have a question about regularization images.
    Do they have to be in 512*512, or 768*768 or 1024*1024 format? Or can we make it in 768*1024 format for example? Thanks.

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +1

      all can work; try to have a balance that is most similar to your dataset. if your dataset is 1:1, then most regularization images should be 1:1 too
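The advice above is to mirror the dataset's mix of aspect ratios in the regularization set. A tiny sketch of how you might compare the two folders' ratio distributions; the image sizes here are hardcoded as hypothetical examples (in practice you would read each file's width and height with an image library):

```python
from collections import Counter
from fractions import Fraction

def aspect_mix(sizes):
    """Count how often each reduced aspect ratio (e.g. 1:1, 3:4) appears."""
    counts = Counter()
    for w, h in sizes:
        r = Fraction(w, h)  # reduces automatically, e.g. 768/1024 -> 3/4
        counts[f"{r.numerator}:{r.denominator}"] += 1
    return counts

# Hypothetical (width, height) pairs for the two folders.
dataset = [(512, 512), (512, 512), (768, 1024)]
regularization = [(512, 512), (768, 1024), (768, 1024)]

print("dataset:       ", aspect_mix(dataset))
print("regularization:", aspect_mix(regularization))
```

If the two Counters are far apart (say, a mostly-1:1 dataset paired with mostly-portrait regularization images), rebalancing the regularization folder is the cheap fix.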

  • @finalomega8894
    @finalomega8894 8 months ago +1

    brush looking stuff-- cackle cackle cackle!🎉🍾

  • @freminet_main
    @freminet_main 8 months ago +1

    Hii, could you please make a tutorial specifically on how to create character designs using AI? That would help me a lot. I just need character design sheet ideas, because sometimes creating designs from our own imagination is difficult

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago

      I'll try!! Added to the list :3 ty for the suggestion!

  • @omegasrk
    @omegasrk 7 months ago +1

    How do I set my own character as the person who is going to be on the ControlNet?
    So basically I set poses with my design, then tell Stable Diffusion to put my character (that I have uploaded) in them.
    Thank you

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      Responded in the other comment, but basically you would use ControlNet as usual, and the prompt would be something like
      "yourchar, holding an apple on the beach"

    • @omegasrk
      @omegasrk 7 months ago +1

      @@Not4Talent_AI If you can, make an entire video where you explain how we can do all these things. I want to work on a comic, so I'm gonna need this tutorial.

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      I will do that eventually, but it might take a while. (As I want to first make videos on how to fix/avoid the vast number of issues one might have while making a comic with AI).
      I'll note this down and try to make a short on it as soon as I can. I'll @ you when I post it if you want! @@omegasrk

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      @omegasrk short done! ruclips.net/user/shortsKm7EY49YbOA?feature=share

  • @TheBobo203
    @TheBobo203 10 months ago +1

    Crazy Man

  • @Anniefonde
    @Anniefonde 8 months ago +1

    Hi! I didn't quite get how CharacterTurner works, as I'm not totally familiar with the program. Can you explain it in more detail? Thank you!

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago

      hi! Ended up not using it, but it's a LORA. You basically download it and, adding it to the prompt, it will make the image a turnaround. Sorry for the late response!!!

  • @Luxcium
    @Luxcium 2 months ago +1

    I gave you a thumbs down on the first video I watched and was about to walk away, but I don't know what happened; I was just really interested in the topic, so I listened to it, then gave you the thumbs up, then subscribed. You are now one of my favourite RUclipsrs on the topic. I appreciate your genuine interest and your dedication🎉🎉🎉🎉

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago +1

      super glad to hear that, thank you!!!

    • @Luxcium
      @Luxcium 2 months ago +1

      @@Not4Talent_AI I don't know how I felt, but it is so rare that I give a thumbs down; normally I do mean it, and given that you are in my top list... I remember that it was genuine then, and now I don't know why it was like that. But knowing that I would never have heard you saying *Popimpokin* many times in the other video makes me feel very happy that I changed my mind.

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago

      hahahahahaha popimpokin changed everything @@Luxcium

  • @guest777-nf8dc
    @guest777-nf8dc 9 months ago +2

    Thank you for the video.
    Is it really okay to use it for commercial use?
    What if the character is a robot or machine? Would you make a video about it?

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      It is okay to use for commercial use. Just check if the model you are using allows it or not.
      I'm trying to find a way to do the same thing for non-humanoids. I think maybe you could use Midjourney, or batch generate without ControlNet and pray xD
      (ty for watching, btw :3 )

  • @relaxation_ambience
    @relaxation_ambience 10 months ago +3

    20:50 I saw one guy provide various results experimenting with Network Rank and Network Alpha. The best was 128 to 1. I also experimented with my characters and also found that the best is 128 to 1. But my characters were photorealistic; for anime maybe your parameters are better.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      From what I've been seeing these days, there are a lot of different opinions about that; tbh I just said what worked for me. But it could be what you say too.

  • @DarkPhantomchannel
    @DarkPhantomchannel 2 months ago +1

    Great video!!!!! Only one question: about how much time does it take to do this whole process, with details and refinement?

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago

      Thanks!!
      It will depend on your PC speed and luck when generating the character. Also the complexity of it.
      So I can't really give a good estimate for this.
      For me it took like 2 hours to prepare an OK dataset once I knew what I was doing. And then training took a bit more.
      With my current PC it would be a total of maybe 2 hours if the character isn't super hard.
      If you don't really care super ultra much about the character having a lot of precision, then you could do this in 30 min + whatever it takes for the training.

  • @Toritto713
    @Toritto713 9 months ago +2

    Hello friend! A friend and I are starting to develop LORA character models, but we don't fully understand what regularization images are. Our question is whether regularization images are random images for the model to convert into your character, or images that the model has created and that have gone wrong, used to correct the model. We also borrowed a better computer for this weekend (since our PCs are "potatoes"; our best graphics card, with the most VRAM, is an RX 590, which doesn't even support CUDA xd), so we want to take advantage of the loan to train better models. Currently we trained one model with 80 images, 3000 steps and 1 epoch, and another model with 460 images, 4600 steps and 1 epoch. What kind of training do we get? Would it give better results? In neither of the 2 did we use regularization images, because we did not understand that point :c

    • @Toritto713
      @Toritto713 9 months ago +1

      our original language is not English, so maybe that's why we missed the point

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      If you have more than 60-70 images, don't even bother with regularization!
      I explain it a little more here though: ruclips.net/video/xXNr9mrdV7s/видео.html
      I have no idea what training you will get, since it highly depends on the concept trained and its complexity. It also depends on the quality, learning rate, resolution, captioning... there is a huge list of things that go into a good training other than steps!
      You can get into the Discord if anything; there the community will most likely help.
      I'mma be almost gone for a few days so I can't help much myself though.
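On the steps/epochs question above: kohya-style trainers typically derive total optimizer steps as images × repeats × epochs ÷ batch size. A quick sanity-check sketch (this formula is the common convention, but exact field names vary by tool, so treat it as an assumption rather than any trainer's real code):

```python
def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    """Approximate optimizer steps for a kohya-style LoRA training run."""
    return (num_images * repeats * epochs) // batch_size

# The two runs mentioned above, assuming the quoted step counts came from repeats:
print(total_steps(80, 37, 1))   # 2960, close to the quoted 3000 steps
print(total_steps(460, 10, 1))  # 4600, matching the quoted second run
```

Comparing the two runs this way shows they differ mainly in how often each image is seen, not just in raw step count.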

  • @NezD
    @NezD 10 months ago +1

    He got that Ghosthunters scanner!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Wtf😂😂 where?

    • @NezD
      @NezD 10 months ago +1

      At 1:32. That’s exactly what the spirits be looking like!

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      @@NezD hhahhahahhahahaha ok I see now xD

  • @user-nc2hs4rp7l
    @user-nc2hs4rp7l 10 months ago +1

    Oh wow, awesome!!

  • @uzumakinagato1847
    @uzumakinagato1847 9 months ago +1

    Hey brother, you are literally saving my life with this tut, as I am making my own character.
    But I don't have a PC or laptop with me, so my question was: can I do all this on Android?

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      Ty!!
      Hmm, complicated tbh.
      I mean, you could use Google Colabs to do this from your phone, I guess. But the phone itself is probably not powerful enough to run this tech yet.

    • @uzumakinagato1847
      @uzumakinagato1847 9 months ago

      @@Not4Talent_AI I want to use this for making comics/manga, so do you have any alternative AI software?

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      there is stuff like dream by wombo and sht. But not really that good

  • @mamorutoriyama484
    @mamorutoriyama484 10 months ago +1

    You have to use *Higher Resolutions* to get better generations from the get-go. I've personally found that if I generate at anything lower than 1024, the AI can't produce enough detail to make a complete and coherent character/image. Sometimes you might get something good, but the lower res you generate at, the worse the actual design/art quality.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      yeah, 100% true. character sheets with 512 are unintelligible xD

  • @ImInTheMaking
    @ImInTheMaking 10 months ago +2

    This workflow is so fun when you're fluent with your Photoshop skills. In my case, I open up another tutorial on how to edit in Photoshop :D

  • @burakgoksel
    @burakgoksel 9 days ago +1

    Hey man, great job. Are you still using A1111 and LORA training, or have you switched to ComfyUI?
    Another question I have is: when you were figuring all this stuff out, did you have any prior knowledge? Like software knowledge or graphic design? Especially when I look at ComfyUI workflows, it seems impossible to even install it.

    • @Not4Talent_AI
      @Not4Talent_AI  9 days ago +1

      Hi! Tyty
      I'm still using A1111, even though I don't train many LORAs. But that's bc I have no need for that atm.
      All the knowledge I have is from studying animation in uni. ComfyUI I haven't gotten to yet, but I have worked with nodal stuff before, like Nuke, Maya, UE, Blender... so it isn't as intimidating to me.
      I don't know about installing all that stuff though, haven't done it yet 😂

  • @TheEnderFlash
    @TheEnderFlash 2 days ago +1

    lost it at the freedom signs 💀💀💀

  • @peromiestiloesunico
    @peromiestiloesunico 9 months ago

    Is there a way to do LORA or textual inversion training with a website, without downloading anything, like a Google Colab and such?

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      Yes! There are google colabs or pages like dreamlook.ai

  • @pladselsker8340
    @pladselsker8340 10 months ago +2

    This video is a goldmine

  • @casualcorner199secchio
    @casualcorner199secchio 8 months ago +1

    Will it work with realistic characters as well?

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago

      a little harder to clean up, but yes!

  • @shaunbell4372
    @shaunbell4372 9 months ago +1

    Are you running this program from your computer or a website? I cannot figure out what you are using to recreate the character sheet. I have a hard time dedicating a lot of time to this, as my brother and I already have webnovel obligations and I make coloring books with AI.

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago +2

      I'm running it from my PC; the UI is called Automatic1111. To create the character sheet I use ControlNet, which is an extension to that UI.

  • @Backfrontspices
    @Backfrontspices 8 months ago +1

    Hi, does this work for making realistic characters?

  • @DoozyyTV
    @DoozyyTV 8 months ago +1

    Can you do some videos like this but in ComfyUI?

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +1

      I'd love to, but I haven't played with ComfyUI yet; it's been on my list for sooo long

    • @DoozyyTV
      @DoozyyTV 8 months ago +2

      @@Not4Talent_AI it's faster for the not so high-end GPU users and I just like the interface honestly (unpopular opinion lol), hope to see it one day, your videos are really good

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +1

      Yep! It gives more control too, so I really think it has great potential. Just haven't had the time XD
      Thanks btw!! (I'll most likely use it eventually though) @@DoozyyTV

  • @nitingoyal1495
    @nitingoyal1495 2 months ago +1

    Hey, I am working on a project to create a comic book. First the user would define the character and then narrate the whole story. Can you tell me if, for my case, it would be a good idea to train a LORA using the character description and then use it while generating images for the narration part? AND how much time does it take to train a character LORA, given I am working on an AWS EC2 instance with 16 GB GPU access? Also, I want to automate all the steps in code itself (without doing them manually). Can you tell me if that is possible? THANKS

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago

      I think it is a possible idea; some websites have already started doing similar stuff, so it must also be possible to automate.
      The hardest part would be the correct upscaling and cleanup (making sure that the generated character makes sense before starting the training).
      Then, for a 16GB GPU, a LORA of around 20 images should take 15-20 min? Maybe? I'm not sure tbh, it has a lot of "ifs" involved.
      It would take a while to do and to figure out how to solve some of the possible issues you might encounter along the way, but I do think it is possible.
      I would do some manual testing before investing a lot of time into it though.

    • @nitingoyal1495
      @nitingoyal1495 2 months ago +1

      Thanks for your reply. Really appreciate your content!!@@Not4Talent_AI

    • @Not4Talent_AI
      @Not4Talent_AI  2 months ago

      np! thank you for watching it! @@nitingoyal1495

  • @user-bq5yt6vm4f
    @user-bq5yt6vm4f 9 months ago +2

    I am curious: would this work for a character that you already have the reference image for? Basically, I generated a character and am pretty happy with it, but I'm trying to work out how to generate that same character in different poses while keeping hair, face and clothing consistent. If I try to change the pose, it either changes how the character looks or the pose doesn't change, even with ControlNet and OpenPose.

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      Yep, that's the usual problem. We haven't found much of a solution tbh. There are a few options. Won't be perfect, but it's your best shot:
      1- Train a LORA with that one image, separating it, flipping it, editing it, etc. (to have as many variations of that same image as possible and make the training a little more flexible).
      2- Generate a lot of images describing your character and cherry-pick the ones that look most like it. Then train a LORA with those.
      3- A mix of the 2 options.

    • @user-bq5yt6vm4f
      @user-bq5yt6vm4f 9 months ago +1

      @@Not4Talent_AI I see thanks for the advice - I will give it a look over the week

    • @lefourbe5596
      @lefourbe5596 9 months ago +2

      @@user-bq5yt6vm4f actually I'm doing that exact thing... I got decent results, but it has to be forced with OpenPose.
      The less you have, the more frozen your generations get. Move an arm and it goes to hell FAST!
      Flipping manually saved the day in the dataset, as the character can somehow face two directions and have both arms down. I've yet to generate a correct lower body to feed it. My V1 LORA had 9 of the same image; my V3 LORA has 22 carefully selected and cleaned images.
      However, you will fight your SD model. I have an anime girl that is BLACK, and SD anime models are usually racist... can't really get more than a tanned skin color.
      My solution is to merge your favorite model with a good general/digital model. In my case, AnyAnimeMix with Dreamshaper brings back the dark skin tone a bit, along with some finer details that AnyAnimeMix lacks.

  • @perrymanso6841
    @perrymanso6841 8 months ago +1

    Asking: to do this, or even animation, would the required VRAM be 16??

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +1

      I'd say the minimum is about 8 with a newer Nvidia card

    • @perrymanso6841
      @perrymanso6841 8 months ago +1

      @@Not4Talent_AI WOW, 8?? My 1070 can't get ControlNet to work; maybe it's the motherboard? It's an i5...
      Thanks for the answer btw.

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +1

      @@perrymanso6841 Using the Low VRAM option in ControlNet + the --lowvram or --medvram command in the launch args can help if you have a slower card.
      Again, 8 is the minimum, so the more the better.
      I have a 1080 Ti and it works fine, but a 3060 is way better cuz it is a newer card that supports cuDNN and other stuff like that.

    • @perrymanso6841
      @perrymanso6841 8 months ago +1

      @@Not4Talent_AI Thank you for the help. Dammit, I've been so concentrated on learning to prompt that I've not been learning the WHOLE world of extra things there is to know 😅

    • @Not4Talent_AI
      @Not4Talent_AI  8 months ago +1

      @@perrymanso6841 there is soooooo much stuff other than prompting. I think most people start out thinking AI art is just prompts, until you really get into it and see that that's like 6% xD

  • @LolloLibe
    @LolloLibe 10 months ago +1

    Very useful video! How do I install Controlnet? I can't seem to make it work

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      It should be in the "Available" tab under Extensions, inside A1111.
      If it gives you an error, post the error in the comments or on Discord and we'll try to help!

  • @DanielThiele
    @DanielThiele 3 months ago +1

    do you have suggestions for creating a character sheet based on my own character? I have one illustration in my own style and now want to make a character sheet based on that reference.

    • @Not4Talent_AI
      @Not4Talent_AI  3 months ago

      I think it is now possible, maybe with the help of sites like: huggingface.co/spaces/sudo-ai/zero123plus-demo-space
      Also IP adapters can help.
      I know that @lefourbe5596 made a dataset from just 1 image, but no idea how atm hahahhaha

  • @user-mf6bd8zb3p
    @user-mf6bd8zb3p 7 months ago +1

    Will I be able to create pictures for my own manga series without knowing how to draw?

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      It will be a start. I wouldn't say you will get professional-level quality, but you can get decent results as a starting line.
      I'd suggest learning how to draw either way, but maybe focus on learning how to work with AI to create what you are looking for instead of learning to do everything on your own.

    • @user-mf6bd8zb3p
      @user-mf6bd8zb3p 7 months ago +1

      @@Not4Talent_AI Thanks for the response ❤️ And may I know how much this method will cost?

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      Depends on what cost we are talking about, and your current tools.
      In terms of money, if you have a decent PC (about 8GB VRAM on an Nvidia card with some other decent specs), this is 100% free.
      If, on the other hand, you don't have a good PC, the cost range increases. You could either invest in a new PC with nice specs (in which case I'd go all out and buy a pretty nice GPU), or you can try to do this using online services that run these programs for a fee.
      In time, well, that depends on the end result you are looking for. To train the skere character I used about 5-6 hours total, not counting previous experimenting. That's generation time, cleanup, upscaling, compositing, and training. But it can be done way faster and more efficiently.
      The results in this case are pretty nice, but it will get harder as the character design gets more complex. So there is not a one-size-fits-all rule @@user-mf6bd8zb3p

    • @user-mf6bd8zb3p
      @user-mf6bd8zb3p 7 months ago +1

      @@Not4Talent_AI Thanks ❤️ For your efforts ...

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      ty for watching! @@user-mf6bd8zb3p

  • @user-ow9zt1uv5w
    @user-ow9zt1uv5w 3 months ago +1

    Has anyone got something like a spritesheet of a warrior slashing, made with SD? The greatest problem I faced (with other AIs) was that something like a slash, the poses, frames and sequences in the action, was completely unknown. So my question is whether someone can generate consistent spritesheets of actions like slash, smash, dash and others with SD.

    • @Not4Talent_AI
      @Not4Talent_AI  3 months ago +1

      Atm I have no idea, but you pose an interesting issue that might be very worth looking into. I'll note it down and see what I can do!

  • @srikantdhondi
    @srikantdhondi 10 months ago +1

    I watched it many times but could not follow this video. Could you please make a video on "Character Consistency SOLVED in Stable Diffusion", since I could not find it on youtube?

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      Sorry about that! My idea of "character consistency solved" is pretty much this video 😂 I'm trying to find other ways with 3D and stuff, but for now this is all I was able to get.
      If you want, you could tell me what it is the video doesn't explain clearly, so I can try to improve on it when, eventually, I make an update on the method.
      Again, sorry. And thanks!

  • @kyperactive
    @kyperactive 9 months ago +1

    I feel like you'd get more mileage actually picking up a pencil... but this works too.

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      100%. If you can create the character by drawing it then go for it. That's always the best way hahahah

  • @thinkinginmotion
    @thinkinginmotion 5 months ago +1

    in the video it appears you are using Counterfeit V3.0 as the SD model. Is that what you still recommend using, or is there a better model that also works with OpenPose? I've been getting fairly poor results using SD 1.5

    • @Not4Talent_AI
      @Not4Talent_AI  5 months ago

      I'd never use the base 1.5 model. Counterfeit is nice, but you have a lot of other options on Civitai

    • @thinkinginmotion
      @thinkinginmotion 5 months ago

      @@Not4Talent_AI would you mind recommending one that works well with the process you describe here? I've discovered that not every model (XL for one) adheres to OpenPose in ControlNet.

    • @thinkinginmotion
      @thinkinginmotion 5 months ago +1

      Ok, I finally discovered some models that work with your OP character sheets (some models ignore them). I found ComicBabes to work perfectly for my needs. I can't thank you enough for your video... very helpful!

    • @Not4Talent_AI
      @Not4Talent_AI  5 months ago

      glad to hear that! Thanks!! @@thinkinginmotion

  • @davidpoirier8725
    @davidpoirier8725 9 months ago +1

    16:25 I might be blind, but it seems the link is not in the description.

    • @Not4Talent_AI
      @Not4Talent_AI  9 months ago

      My bad, added it now!
      ruclips.net/video/xXNr9mrdV7s/видео.html

  • @ahminlaffet3555
    @ahminlaffet3555 10 months ago +1

    Why do you actually split the image if tiling is activated? Wouldn't it work as well if you simply used one image and tiled it at 512 in kohya?

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Not 100% sure what you mean, but guessing: instead of doing the tedious job of making a whole image for every different pose, why not use just the base image and split it in training?
      Haven't really tried that, but when training with a very small dataset it is advised to create as many different images as possible while maintaining cohesiveness (not separating just one hand, for example).
      Creating different environments, light settings, and angles will help the training understand how your character is supposed to interact with the setting it is in.
      But maybe you can get similar results by just doing what you say. Tbh it would be amazing, cutting off 70% of the work xD

  • @yurizappa268
    @yurizappa268 10 months ago +1

    I'm a total newbie in this domain, but why not transfer images generated by SD or Midjourney into some 3D rendering program? This was just my first thought on the subject.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      It is an option I will experiment with eventually. But mainly cuz atm that takes even more time, and doesn't give perfect results either. (Maybe with some testing I could change my opinion on that.) But I think it could be possible if you have a base model that can fit the textures.

  • @pedzii
    @pedzii 10 months ago +1

    i haven't fully watched the video yet and the question might get answered, but is there a way to create a character sheet from a character you already have? In this method we are creating a character at the same time as we are creating the sheet; what if we already have a character?

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      If you already have a lot of images of the character, then you can go straight to LORA training.
      If you only have one image of the character and can't get new ones, I don't solve it, because I wasn't able to do it. I tried with outpainting, inpainting etc... but couldn't really get good enough results to share. Sorry.
      It is something I'd like to present as a challenge to the community.
      Maybe you could divide the image, create a dataset from it by re-lighting, cropping, flipping and changing everything you can about it. Then train a LORA with that super small dataset. Pray really hard that 1/10 of the images generated from that LORA make your character correctly, and generate 100 images. Use the best 10 images to make a bigger dataset and re-train.
      This is speculative cuz I haven't really tried. But luck is the biggest factor in that, I think.

    • @pedzii
      @pedzii 10 months ago +1

      @@Not4Talent_AI thanks for the reply, yea alright somebody will probably come up with a solution soon, everything evolves rapidly

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      Hope so xD

  • @kabirgomez7967
    @kabirgomez7967 10 months ago +1

    I am "relatively" new to this, and everything was going very well, but halfway through the video I got lost; there were many fields that I don't know. But basically it is loading a good amount of images of the character and then creating a LORA. I think I can try.

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      Hahhahaha sorry! Yep, that's basically the idea. And using photo-editing software to create some variations in the images.

    • @kabirgomez7967
      @kabirgomez7967 10 months ago +1

      @@Not4Talent_AI don't worry, English is not my primary language, that could explain it, but I'm working with all that I understand. Thanks bro

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago +1

      @@kabirgomez7967 thank you for watching! hope it helps to some extent :3

  • @alexlefkowitz
    @alexlefkowitz 7 months ago +1

    Can you do this with real people too? That is, copy a person's entire body and face into SD?

    • @Not4Talent_AI
      @Not4Talent_AI  7 months ago

      Yes, you can! With permission from the person ofc, but lora training works with pretty much anything

  • @Carmidian
    @Carmidian 5 months ago +1

    When making the character sheet in the beginning, would it be OK to make them completely naked and then add clothing when you go on to use them?

    • @Not4Talent_AI
      @Not4Talent_AI  5 months ago +1

      Wouldn't recommend that, but it is possible. The problem is that the character will be naked most of the time if you train it like that

  • @acetum_
    @acetum_ 4 months ago +1

    When I select the OpenPose control type and set the preprocessor to "none", my model also appears as "None". I feel like this is causing my outputs to end up not looking like a character sheet despite using the provided OpenPose references. Is there any way I can fix this?

    • @acetum_
      @acetum_ 4 months ago +1

      UPDATE: It's been a good couple weeks since I've tried this "tutorial." Back when I installed ControlNet, I didn't realize I needed the models themselves. That was my main issue right there. I'm going to use this comment as a log for my progress (if I decide to continue)

    • @Not4Talent_AI
      @Not4Talent_AI  4 months ago

      I take it you don't currently have the "None" issue, right?
      Just in case: you need to download the models. Once you have them properly placed in your Stable Diffusion models/ControlNet folder, you'll be able to select any model you want. You can do this by clicking on the dropdown menu. If you don't find it, you can just click on the "OpenPose" button; that will automatically add the OpenPose preprocessor and model. You can then take out the preprocessor and it should work fine. @@acetum_

  • @begna112
    @begna112 10 months ago +1

    I cannot get my images to generate with an all-white background no matter what I do. They're always adding some kind of abstract cloth flying around or a painted fresco background. Any advice? I even tried img2img as you suggested here and can't get it. I'm fairly new to writing prompts, so maybe I'm just not doing a negative prompt properly or something

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      hmmmmm did you change models? What is your prompt and negative prompt?

    • @lefourbe5596
      @lefourbe5596 10 months ago

      To get around that, instead of txt2img you should use img2img with a WHITE BACKGROUND. Dial the denoising up to the max and prompt for (white background:1.3).
      The white base image will influence the generation (like a hint left behind).
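That white-background trick can also be scripted against Automatic1111's web API (server launched with `--api`). This is only a sketch: the endpoint and field names are the standard `/sdapi/v1/img2img` ones, but the base64 white image is a placeholder you would generate yourself:

```python
import json

# Placeholder: a base64-encoded PNG of a plain white canvas (e.g. 512x768).
# Generate it with any image tool; it is not included here.
WHITE_PNG_B64 = "<base64 of a white image>"

def build_img2img_payload(prompt: str) -> dict:
    """Build the JSON body for a white-background img2img generation."""
    return {
        "init_images": [WHITE_PNG_B64],  # the white base 'hints' the result
        "prompt": prompt + ", (white background:1.3)",
        "negative_prompt": "background, scenery, cloth, fresco",
        "denoising_strength": 1.0,       # max denoise, as suggested above
        "steps": 25,
    }

payload = build_img2img_payload("full body shot of a man, dynamic pose")
print(json.dumps(payload, indent=2))
# POST this to http://127.0.0.1:7860/sdapi/v1/img2img
```

Even at full denoising strength, the init image still nudges the sampler toward a clean white backdrop, which is why this tends to work better than txt2img prompting alone.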

  • @austinpaulraj6466
    @austinpaulraj6466 10 months ago +1

    I am a big beginner to all of this. I am having an issue where, even when I load the ControlNet OpenPose character sheet image provided, it only generates 2 poses for the character instead of every pose. I am using the Realisim3.0 model, not an anime-specific model, so that may be the issue, but no matter what I do it refuses to generate more than 1 or 2 poses for the character

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      huh, that's weird. If you are in the Discord it might be easier to help. But if not, what settings are you using? (image size, and preprocessor + model in ControlNet)

    • @austinpaulraj6466
      @austinpaulraj6466 10 months ago +1

      @@Not4Talent_AI I was able to figure it out, turns out I hadn't installed the correct ControlNet model (OpenPose) lol. Still learning 😂😂

    • @Not4Talent_AI
      @Not4Talent_AI  10 months ago

      @@austinpaulraj6466 ohh hahahaha happens xD