The Truth About Consistent Characters In Stable Diffusion

  • Published: 13 May 2024
  • The Truth About Consistent Characters In Stable Diffusion... It's not 100% possible without training LoRAs and Dreambooth models, or without going through a convoluted process. However, with ControlNet Reference we can get very close (a rough code sketch of the idea follows the links below). The Roop extension will also help us use real photos to expand on this method. In today's video I'll show you how to get to that point without any training, and then in a part 2 to come we will look at improving hands and faces and some post-production techniques to get close to that consistent-character goal!
    How to install ControlNet • How To Install Control...
    How to install Roop • This Face Swapper is M...
    Random name generators
    www.behindthename.com/random/
    randomwordgenerator.com/name.php
    ⏲Time Stamps
    0:00 The truth about consistent characters in stable diffusion
    0:13 Start with a good model and consistent faces
    1:13 Create images and develop your look
    1:58 Use ControlNet Reference
    3:35 Same character different background
    4:25 Using real photos and Roop extension
    6:17 Experiment and create!
    *Disclaimer: Affiliate Links Below*
    📸 Gear I use
    Sony A7C CAN amzn.to/3spWX8C
    Sony 35mm 1.8 CAN amzn.to/36Apekr
    OBS (FREE) obsproject.com/
    Editor Davinci Resolve www.blackmagicdesign.com/ca/p...
    Audio: Rode Podmic amzn.to/35sSnxv
    🎵 Epidemic Sound
    🔦 Find us on:
    Discord: / discord
    Instagram: / monzonmedia
    Facebook: / monzonmedia
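
The "reference" trick from the video is an A1111 ControlNet preprocessor, but the same idea is scriptable. Below is a minimal sketch using the diffusers community "stable_diffusion_reference" pipeline; the model ID, reference image, and prompt are placeholder assumptions, not the exact settings from the video.

```python
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

# Any SD 1.5-family checkpoint works; this repo ID is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_reference",  # community pipeline
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# The character image you want every new generation to resemble.
ref = load_image("my_character.png")

image = pipe(
    ref_image=ref,
    prompt="photo of a woman walking on a beach",  # new background/scene
    num_inference_steps=20,
    reference_attn=True,   # share self-attention with the reference
    reference_adain=True,  # match the reference's feature statistics
).images[0]
image.save("same_character_new_scene.png")
```

In the A1111 UI, this corresponds to enabling ControlNet with the reference_only preprocessor and dropping your character image into it.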

Comments • 120

  • @MonzonMedia
    @MonzonMedia  8 months ago +27

    Just to clarify, my goal is not to have to train LoRAs or Dreambooth models to achieve consistency; I'm well aware of that option. The problem is that it's not accessible to everyone and is difficult for most people to do. Do you have any tips for consistent characters?

    • @blacksage81
      @blacksage81 8 months ago +7

      In addition to naming and nationality, I'd say the seed (or noise seed) is super important to keep track of. I'm not 100% sure if ADetailer will generate a seed number, but if it does, that is one number to lock in (or choose for yourself) if you want a consistent gen. I've been designing characters with SDXL in ComfyUI with the FaceDetailer from the Impact Pack nodes. With the txt2img workflow I'm using, as long as the FaceDetailer node's noise seed is the same, I get anywhere from 75-90% consistency with my gens.
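
blacksage81's workflow is ComfyUI-specific, but the seed-locking idea itself is easy to show in code. A minimal diffusers sketch, where the checkpoint and the invented character prompt are placeholder assumptions:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Invented character name plus fixed settings, as in the video's method.
prompt = "photo of Anira Velkari, 40 year old Icelandic woman, street portrait"

# Locking the generator seed plays the role of ComfyUI's "Noise Seed":
# same seed + same prompt + same settings => a near-identical character.
generator = torch.Generator("cuda").manual_seed(123456789)
image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
image.save("character_seed_locked.png")
```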

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      @@blacksage81 Great advice for sure! I've yet to try this out on ComfyUI but I have been experimenting with SDXL and ADetailer as well on A1111. Appreciate you pointing that out!

    • @ne99234
      @ne99234 8 months ago +1

      Interesting technique with the long name. From my experience this really shines in img2img. Prompting a name, nationality, body type, etc. lets you convert almost any image into "your" character and background with a low denoise strength, ~0.4-0.65. It doesn't work with every image/pose, but it's a great way to get a lot of images ...which could also be used to train a LoRA further down the line.
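
ne99234's img2img recipe translates directly to code. A sketch under the same assumptions (placeholder checkpoint, invented character name); `strength` is diffusers' name for the denoising strength:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Any photo whose pose/composition you want to borrow.
init_image = load_image("pose_reference.jpg").resize((512, 768))

# Name, nationality, body type, etc. carry the character identity.
prompt = "Anira Velkari, Icelandic woman, athletic build, denim jacket"

# strength ~0.4-0.65 keeps the source pose while swapping in "your" character.
image = pipe(prompt=prompt, image=init_image, strength=0.5,
             num_inference_steps=30).images[0]
image.save("converted_to_character.png")
```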

    • @GamingInfested
      @GamingInfested 2 months ago

      Making an embedding of one model and keeping the parameters somewhat consistent, with the ICBINP checkpoint.

  • @thedanielblack
    @thedanielblack 8 months ago +1

    Great video! This helps a lot. Also, I hadn't tried Roop before. That's returning some pretty good results for me. Thanks again!

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      Glad it was helpful! Although I noticed Roop was giving errors lately; it still works, but apparently it may not be supported moving forward. There is another one called FaceSwapLab that I'm trying out and it seems to work well. I may do a video on it soon. 👍🏼

    • @thedanielblack
      @thedanielblack 8 months ago +1

      @@MonzonMedia Yeah, I had difficulty getting Roop to run, at first. If it helps anyone having the same issue, I wound up manually searching HuggingFace for the "inswapper_128.onnx" that is referenced on the GitHub page as the link is dead. Several other repositories had the same file. Then, I had to restart the UI twice because it seemed to load too quickly to catch the new extension the first time. I look forward to your video on Face Swap. Thanks again!

    • @MonzonMedia
      @MonzonMedia  8 months ago

      Glad you got it working, but you may notice errors in the command window when you load it. It's really not a big deal because it should still work, but...eventually it probably won't be supported. I've played with FaceSwapLab for a few days now; there is a lot to learn about it, but honestly the default settings do pretty well. I might start with that. Stay tuned!

  • @DasLooney
    @DasLooney 8 months ago +5

    Just finished the video. Will be experimenting before long with ControlNet. Bookmarking this one for when I do. Glad you touched on what few people do, which is that 100% perfection with Stable Diffusion is really not possible. The whole thing is about getting as close as possible to the original, which you touched on right at the beginning! Well done!

    • @MonzonMedia
      @MonzonMedia  8 months ago +5

      Hey dude, really appreciate the support! I've watched many a video to see if I'm missing something, but every time I'm left somewhat disappointed. I would rather people just be honest and say "we can get close to consistent results but not 100%", but as a content creator I get that they want the click and the view 😬 We are close to getting that consistency though, and considering where we were a year ago, we've come a long way. I've got some tips and hacks to get the consistency, but of course I'm looking for the easiest and fastest way possible. AI is supposed to help with getting things done easier and quicker, not paying $$$ for GPUs just so we can train models and LoRAs to achieve consistency 😂

    • @DasLooney
      @DasLooney 8 months ago +4

      @@MonzonMedia You're welcome. Yeah, a lot of people out there don't realistically state what these programs can do. It's as frustrating as trying to replicate something someone did, only to find out they lied about the steps or edited like mad.

    • @MonzonMedia
      @MonzonMedia  8 months ago +4

      I hear ya man, well...you can count on me telling it like it is. 👊🙌

  • @Onur.Koeroglu
    @Onur.Koeroglu 8 months ago +2

    Hey Man... Great Tutorial.. I learned some new techniques.. 😎✌🏻
    Thanks 💪🏻

    • @MonzonMedia
      @MonzonMedia  8 months ago

      That's great to hear! There is more to come as I want to focus a bit more on this subject. Glad you got something out of it, and I appreciate the feedback. 👍

  • @emileklos
    @emileklos 8 months ago +4

    Very nice way of explaining, simple yet detailed. I just started with AI generation, and regarding face consistency, I use After Detailer (ADetailer) with a lot of success, but usually only on the face. I'll add ControlNet to the workflow for hopefully more consistency in the clothing. The last challenge would be a consistent environment. If I describe a location, it will still give me a variety of backgrounds that don't really match in consistency.

    • @MonzonMedia
      @MonzonMedia  8 months ago +2

      You're welcome, and glad you got some value out of the video. Yeah, that's another tricky thing; there are some workarounds, but they're sort of limited too. I'll cover it in a video soon, but a few things to consider: utilizing the same seed can help, and using a bigger aspect ratio with prompts like "character turnaround", "character sheet", or "multiple positions" can get some decent results. Repeating it several times can be the challenge, though. I'm slowly getting there. Will share more soon! 👍

    • @tengdahui
      @tengdahui 7 months ago

      I have a better way to achieve the consistency of the environment and characters

    • @babluwayne3802
      @babluwayne3802 4 months ago

      how??
      @@tengdahui

  • @BabylonBaller
    @BabylonBaller 6 months ago +1

    Appreciate the walkthrough my friend

    • @MonzonMedia
      @MonzonMedia  6 months ago

      You're welcome! I'm overdue for a follow up video on this...stay tuned! 👍

  • @Shabazza84
    @Shabazza84 6 months ago +1

    Love it. It's often saving me from having to train a LoRA.

    • @MonzonMedia
      @MonzonMedia  6 months ago

      Yes exactly, it's not perfect but it helps a lot. Also, the ControlNet IP-Adapter does something similar. Will be doing a video soon.

  • @SithlordSigma
    @SithlordSigma 2 months ago +1

    Haha, love how you point out not to notice the hands on your first gen, and yet they're nearly perfect, something I pretty much never get my first time around.

    • @MonzonMedia
      @MonzonMedia  2 months ago

      😊 At this point, hands in AI images are like a meme...hahaha! That being said, it's much better these days, and at least there are ways around it. 👍

  • @ai_and_gaming
    @ai_and_gaming 8 months ago +1

    Great tutorial

    • @MonzonMedia
      @MonzonMedia  8 months ago

      Thank you! More to come on this topic soon 👍🏼

  • @meetaugie
    @meetaugie 7 months ago +1

    Great video! You'll have to give Augie a try sometime :)

  • @jdesanti76
    @jdesanti76 8 months ago +5

    In the equation for consistent characters, I use variables like age and body type; that helps a lot.

    • @MonzonMedia
      @MonzonMedia  8 months ago +3

      Yes, that's a good point as well. Not sure if you noticed, but in my prompt I used "40yr old" because Realistic Vision tends to make women too young sometimes hahaha! So I used that to balance it out 😁

    • @freeEnd_
      @freeEnd_ 4 months ago

      True lol, I type "20 years old woman" and it makes like a 14-year-old girl for some reason @@MonzonMedia

  • @GeorgeLitvine
    @GeorgeLitvine 7 months ago +1

    Hi MM! Could you please teach us a similar technique for when we have two characters, in order to keep consistency for both?

    • @MonzonMedia
      @MonzonMedia  7 months ago

      Absolutely, thanks for the suggestion. It would be a similar process, but it's a bit more tricky.

    • @GeorgeLitvine
      @GeorgeLitvine 7 months ago

      Hi MM! Thank you for your interest in that suggestion. Would you please do it when you get time? @@MonzonMedia

  • @jason-sk9oi
    @jason-sk9oi 8 months ago +3

    Pro tips 👌 😎

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      You're welcome, much appreciated! 👊👍

  •  8 months ago +1

    Good video, Thx.

    • @MonzonMedia
      @MonzonMedia  8 months ago

      You're welcome! Will be following up on this video soon! 👍

  • @WetPuppyDog
    @WetPuppyDog 6 months ago +1

    First off, great video. I love your pace and the explanation of your process. I have found great consistency in my models and images. However, I am finding a great deal of degradation in the quality of the images that I produce. Creating the initial reference image is clean and sharp, but the images derived from ControlNet come out less than great. Is there something I'm missing? I have double-checked my settings and even paused your video to compare. I'm using ControlNet v1.1.411 and SD 1.6 for my workflow.

    • @MonzonMedia
      @MonzonMedia  6 months ago

      Hey there, appreciate the feedback and comments! You know, the more I used the reference-only ControlNet, the more I started seeing this too; however, I wasn't able to find the root cause. I'd try to duplicate it, then it would go away. I'm going to do more tests. I have my suspicions as to why it happens, but I want to be 100% sure that I can recreate it. I find switching models and then back tends to get rid of it. It's very peculiar. With that being said, I'm outlining a video using the IP-Adapter, which works very similarly to this method, that you may want to watch when it's out. 👍

  • @DrDaab
    @DrDaab 8 months ago +1

    Wow, another great tutorial. Who would think that using non-existent names would be really helpful?
    One of the many errors I and many others got with the Roop install is that a component was deprecated, with a link to read some technical info. Not useful to those of us who need the 1-click installs that you explained so well. In addition to Roop, there are other projects that do the same thing (FaceSwapLab, sd-webui-roop, Gourieff/sd-webui-reactor, etc.).

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      Hello my friend, always great to hear from you. Yes, using names will help shape the look, and oftentimes I may use a celebrity last name to give it a similar characteristic. I looked around, and it seems that Roop may not be supported going forward, and even if it is, I find it's not a very reliable extension. I am, however, using FaceSwapLab and trying to get more familiar with it, so I think I will create content on that one instead.

  • @DrSid42
    @DrSid42 8 months ago +3

    Mixing many random names will give you the model's default average face. Every model has one. It is affected by race and age, but it is there. If you want a different face, I suggest mixing celebrities .. 2 are usually enough, give them a weight of 0.5 .. or do an X/Y/Z plot spread to find what you are looking for. Not only is the face consistent, you can also control facial features this way.

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      I've touched on using celebrities in previous videos, and I didn't want to go in-depth on faces in this video. However, if you notice, the names I use have parts of "celebrity" names, which is another sort of hack I discovered, especially if they are fairly well known: "Dobrev" from Nina Dobrev, for example. By changing the first name, some models will still pick up certain traits of that person. Besides, I've been meaning to do an updated video on that as well. 👍 Good point nonetheless.

    • @DrSid42
      @DrSid42 8 months ago +1

      @@MonzonMedia yeah ..but you have to experiment a lot with those names .. some have strong associations .. but most don't.

  • @Gromst3rr
    @Gromst3rr 8 months ago +1

    thanks!

  • @GhettoDragon_
    @GhettoDragon_ 3 months ago

    Can this also be done with Fooocus? If so, what are the best base model, refiner, and LoRA to use?

  • @nefwaenre
    @nefwaenre 8 months ago +1

    Thanks so much for this video! I have a question: is there a way to completely change the shirt someone is wearing, or to add a shirt to a guy who's not wearing one, without changing the pose? I tried that using ControlNet OpenPose (adding a white shirt to a guy who doesn't have one), but it just keeps creating more half-naked men's pictures. And when I go to change a shirt colour from its original colour to whatever I want, if I set the weight high then it botches the entire pose. Any workarounds, please?

    • @MonzonMedia
      @MonzonMedia  8 months ago

      Absolutely, you can just use inpainting and mask the area you want to change. Play around with the denoising strength for more variation.

    • @ne99234
      @ne99234 8 months ago +1

      For this kind of task I like to create a ControlNet canny image, and use an image editor to paint out the stuff that I want to change with black. (At this step you can also paint in new details with a fine white line.) Then use the new canny image and prompt a t-shirt. Because the canny image does not have the information that there is a naked torso, everything that's black can be changed with the prompt, for example the background, clothing colors, or hair color.
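
ne99234's canny-editing trick can also be reproduced outside the UI. A sketch with diffusers and the stock SD 1.5 canny ControlNet; "edited_canny.png" stands for the canny map after you've blacked out the torso edges (and optionally drawn new ones in white) in an image editor:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hand-edited canny map: black areas carry no edge information,
# so the prompt is free to fill them in (e.g. with a t-shirt).
canny_map = load_image("edited_canny.png")

image = pipe(
    "man wearing a plain white t-shirt",
    image=canny_map,
    controlnet_conditioning_scale=0.9,  # lower if results look overcooked
    num_inference_steps=30,
).images[0]
image.save("shirt_added.png")
```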

    • @Slav4o911
      @Slav4o911 7 months ago

      Try the "Inpaint Anything" extension; it has a built-in mask tool to select clothes or other objects in the scene and change them.

  • @zoezerbrasilio2419
    @zoezerbrasilio2419 3 months ago +1

    Can you do a similar video on achieving great consistency, including clothes, but using Fooocus instead? What should I do in that case?

    • @MonzonMedia
      @MonzonMedia  3 months ago +1

      It's pretty much the same; will be editing that video next.

  • @user-gq2bq3zf1f
    @user-gq2bq3zf1f 6 months ago +1

    Thank you always. I succeeded in changing my face through Roop; is there a way to change my outfit and hairstyle naturally?

    • @awais6044
      @awais6044 1 month ago

      Did you find any solution?

  • @javadrip
    @javadrip 5 months ago +1

    How does giving the character names help? Does SD continually learn from the text input?

    • @MonzonMedia
      @MonzonMedia  5 months ago

      It helps with keeping the face consistent. Each model tends to have a default "look", so naming the character and giving them an ethnicity helps to shape the face differently while keeping it consistent. Some models have a stronger default look than others.
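
One way to see this "default look" effect for yourself: generate the same seed with and without the invented name and compare the faces. A sketch where the Realistic Vision repo ID and the name "Maren Dobrev" are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Realistic Vision is the model used in the video; this HF repo ID is assumed.
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")

prompts = {
    "default": "portrait photo of a woman, natural light",
    "named": "portrait photo of Maren Dobrev, Bulgarian woman, natural light",
}

for tag, prompt in prompts.items():
    # Re-seeding identically for both runs isolates the effect of the name.
    generator = torch.Generator("cuda").manual_seed(42)
    pipe(prompt, generator=generator).images[0].save(f"face_{tag}.png")
```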

  • @user-xk6xc3kz1k
    @user-xk6xc3kz1k 8 months ago +1

    I did not understand what will happen when I close SD and reopen it again. How can I get the same character again? What was the role of giving a name to the character?

    • @MonzonMedia
      @MonzonMedia  8 months ago

      You’ll have to use a similar prompt and exact same settings to get the same character, only changing the environment. I have a follow up video on this coming soon. Naming the character will keep the face consistent.

    • @user-xk6xc3kz1k
      @user-xk6xc3kz1k 8 months ago +1

      Thank you! We are waiting for the new video. @@MonzonMedia

  • @raditedite
    @raditedite 8 months ago +1

    Did you notice overexposed results after using ControlNet Reference? I've tried some pictures and it makes overexposed results. How can I fix that? Is it because of the VAE or something?

    • @MonzonMedia
      @MonzonMedia  8 months ago

      Now that you mention it, I did randomly have an issue with underexposure, but I just thought it was a one-off type of thing. I ended up switching models as I was just experimenting anyway, and it didn't happen after. I haven't been able to recreate it to see what the issue is. Does this happen on a consistent basis? Can you recreate it?

    • @Slav4o911
      @Slav4o911 7 months ago

      You should lower the strength of the ControlNet. 1 is the default, but it's usually too high.

    • @raditedite
      @raditedite 7 months ago

      @@MonzonMedia Yup, still an issue for me when using realisticvision5.1vae.

  • @edtomlinson1833
    @edtomlinson1833 7 months ago +1

    What if you created your own Dreambooth model using a set of pictures of the same person. How do you generate consistent characters using that model? I am having trouble with this.

    • @MonzonMedia
      @MonzonMedia  7 months ago

      Dreambooth models will only really help with the face and body; clothing and attire will still be random. Training LoRAs is a way to get close to consistent clothing; still not 100%, but pretty close.

  • @JAMBI..
    @JAMBI.. 2 months ago

    Can you just transfer one's soul into the virtual self?

  • @falsettones
    @falsettones 8 months ago +1

    Hold on, this is like the ones you sent on the messenger group, right?

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      Sort of 😬😊 but yeah, very easy and quick to do now. 👍🏼

    • @falsettones
      @falsettones 8 months ago +1

      @@MonzonMedia This is so fascinating. XDDDD

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      @@falsettones it really is!

  • @lilillllii246
    @lilillllii246 5 months ago +1

    Is it possible to apply clothes and have them look exactly the same when they are slightly different?

    • @MonzonMedia
      @MonzonMedia  5 months ago

      Pretty much the same process, but it's still difficult to get them exactly the same. You'd have to generate a lot of images to get some that look similar. I'll be doing a follow-up on this very soon.

  • @syu485
    @syu485 6 months ago +2

    Hi! The hands in your pictures were normal. How could you do that? Is it owing to the pre-trained model? I used other models and always get weird fingers.

    • @MonzonMedia
      @MonzonMedia  6 months ago

      Hey there, yeah, it's always good to start with a good model that does hands well. Realistic Vision does a great job of that; I mean, it's not perfect, but it's one of the better models that can do hands pretty decently. For some of the images I may have done some minor inpainting, but not too many of them.

    • @syu485
      @syu485 6 months ago +1

      ​@@MonzonMedia Got it! Thanks for your response.

    • @MonzonMedia
      @MonzonMedia  6 months ago

      You're very welcome! I've been working on a video on the topic of hands; I'm just trying to cover all the different approaches that we have at our disposal. Hopefully I can get the first video done by next week, as it will have to be at least 2 videos. Stay tuned!

  • @joe7258
    @joe7258 3 months ago

    I get an error when running the last command?

  • @Clayden
    @Clayden 4 months ago +1

    How do you fix the hands???

    • @MonzonMedia
      @MonzonMedia  4 months ago

      I'll be covering this soon. It starts with a good model; LoRAs help, and there is also ControlNet.

  • @zac_vaughn
    @zac_vaughn 4 months ago

    A hypothetical name just directs the seed.
    It does not direct the seed any more than any other descriptive word would.
    And therefore it is fairly meaningless to include a name, IMO. Maybe I'm missing something, or there's something I'm not fully understanding.
    What you could do instead is save some very KEY descriptive words in a document and make sure to always use those 3-10 descriptive words along with your seed. The character should look the same every time unless you change up the LoRAs you're using. LoRAs cause your seed to be interpreted differently.

    • @zac_vaughn
      @zac_vaughn 4 months ago

      The reason you may think that using a name works is because it will work... What I'm saying is that using a name is not as effective as using words that actually describe your character, and making sure to always use those words and the same seed.
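
zac_vaughn's "key descriptors plus seed in a document" suggestion is easy to make concrete: persist the descriptor list and seed once, then reload them in any later session. A hypothetical sketch (file name, descriptors, and seed are all made up):

```python
import json
import torch
from diffusers import StableDiffusionPipeline

# Save the "character sheet" once: 3-10 key descriptors plus the seed.
character = {
    "descriptors": ["Anira Velkari", "40 year old", "Icelandic",
                    "athletic build", "short auburn hair", "green eyes"],
    "seed": 987654321,
}
with open("character.json", "w") as f:
    json.dump(character, f, indent=2)

# In a later session: reload the sheet and prepend it to any scene prompt.
with open("character.json") as f:
    sheet = json.load(f)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
generator = torch.Generator("cuda").manual_seed(sheet["seed"])
prompt = ", ".join(sheet["descriptors"]) + ", walking through a market"
pipe(prompt, generator=generator).images[0].save("scene.png")
```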

  • @AliHaidari1343
    @AliHaidari1343 8 months ago +2

    Hello my dear friend, good night and very nice video ❤😂❤😂❤😂❤😂

    • @MonzonMedia
      @MonzonMedia  8 months ago

      Thank you sir 👍🏼

  • @sessizinsan1111
    @sessizinsan1111 3 months ago

    What I don't understand is why SD doesn't implement the prompts. It's like my SD has a different character. I've watched a lot of videos, but SD keeps being stubborn with me. I love my SD, but we fight all the time. We're like a couple that always fights. I am so bored.

  • @sachinbahukiya1517
    @sachinbahukiya1517 2 months ago

    Which website?

    • @MonzonMedia
      @MonzonMedia  2 months ago

      This is a local platform called Automatic1111

  • @3diva01
    @3diva01 8 months ago +1

    Getting consistent characters, clothing, hair, and backgrounds/environments is extremely difficult unless you start with a base image. That's why tools like Daz Studio are massively helpful with character, clothing, hair, and environment consistency.

    • @MonzonMedia
      @MonzonMedia  8 months ago +2

      Yes, for sure; starting with some sort of base, even a simple drawing, is always better for control and consistency. I'll be touching on that soon as I develop this series of videos. I think, though, that if you can develop some simple workflows like in this video, it will only make developing consistency a lot easier once you can utilize other mediums.

    • @3diva01
      @3diva01 8 months ago +1

      @@MonzonMedia I completely agree! The tips and techniques you've outlined in this video are very helpful for more character consistency! Thank you for the great video! :D

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      You're very welcome! I'm glad you brought up Daz3D; I didn't use it much in the past, but I'm a bit familiar with it from my Cinema 4D days. I recently started to pick it up again to use the models and other assets with ControlNet to experiment with character development. Do you use it much, and are you an experienced user?

    • @3diva01
      @3diva01 8 months ago +1

      @@MonzonMedia Full disclosure - I'm a Daz3D PA. But I've used the Daz Studio program for years, even well before I started selling 3D assets there. I am happy with how much it helps with character consistency. The ability to use Daz Studio renders to control the exact outfit, hair, character, and environment has been really helpful in my work with Stable Diffusion. Daz Studio renders + ControlNet allow for some pretty impressive control over your characters. I was surprised how useful it is for getting exactly the characters, environment, and clothing I want. Having a base image that you can control at that level is hugely helpful, IMO.

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      Oh wow, that's amazing! I can totally see its use cases. I'd love to see your work, if you don't mind sharing? To be honest, this is how I want to use AI: along with pictures and drawings. I find I have more control using assets versus starting from scratch with just prompting. I've been trying to make time to learn Daz, as it's been so very long since I've tried it, and when I was using it, it was just basic concepts. Nevertheless, I think there are some very useful assets, even the free ones, just to get familiar. You will see very soon how I will be shifting the focus of this channel more to the creative side. I mean, what's the point if we can't create what we envision in our heads, right?

  • @maraderchikXD
    @maraderchikXD 8 months ago +2

    An easy way to figure out an image is AI-generated is that all the women's jeans have fake pockets and they can't put their hands in them. 😄

    • @MonzonMedia
      @MonzonMedia  8 months ago

      😂 lmao right? So true! We need a negative embedding called deep pockets! Hahaha!

  • @DVDKC
    @DVDKC 4 months ago

    It doesn't matter since you can faceswap easily...

    • @MonzonMedia
      @MonzonMedia  4 months ago

      Sure, but you have to create a consistent face first, right? Most models have a default look, so you would have to tweak the look to what you want. Then, yes, face swap all you want, but that doesn't address consistent clothing or attire.

  • @borutesufaibutv1115
    @borutesufaibutv1115 8 months ago +2

    That's Sara G hahaha

    • @MonzonMedia
      @MonzonMedia  8 months ago +2

      😂 haha! Now that you mention it, I see the resemblance! 😊

    • @borutesufaibutv1115
      @borutesufaibutv1115 8 months ago +1

      @@MonzonMedia nice tut idol, trying out your method ❤️

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      @@borutesufaibutv1115 Nice! Let me know how it goes. Just a heads up, I started getting errors with Roop, and I found out that it may not be supported anymore. There is another extension called FaceSwapLab that does the same thing and is more advanced. I'm trying it out and may do a video on it soon. 👍

    • @borutesufaibutv1115
      @borutesufaibutv1115 8 months ago +1

      @@MonzonMedia Cool! I'm actively using Roop. Thanks for the heads up about the other tool; I was really looking for a new way to do face swaps. Tbh, I think Roop is good but can be improved, especially the way it leverages the CodeFormer/GFPGAN algo. Will deffo let you know sir 👌

  • @Macatho
    @Macatho 8 months ago

    Why not just create a LoRA of your character?

    • @MonzonMedia
      @MonzonMedia  8 months ago

      Oh yes, absolutely. I did pin a comment saying my goal is to find methods without having to train LoRAs or Dreambooth models. I'm always looking at "easier" solutions and options for people who don't want to spend the time on something like training. I do, however, intend on covering that as part of this series.

    • @3diva01
      @3diva01 8 months ago +2

      Not everyone has a computer that can handle LoRA training. Videos like this are hugely helpful for those of us on older machines or who don't have the ability to create LoRAs. :)

    • @IceMetalPunk
      @IceMetalPunk 8 months ago +2

      I have an 8GB GPU. I technically "can" train a LoRA, but it would take literally 24 to 48 hours of training for a single LoRA with relatively few training points. If we can get most of the way there without that hassle, I'm happy.

    • @Macatho
      @Macatho 8 months ago +1

      @@IceMetalPunk Understandable, and for some it can be a hefty $ amount. A used RTX 3090 can be as cheap as $800, btw.

    • @Macatho
      @Macatho 8 months ago

      @@3diva01 Understandable, but it costs about 5 bucks to rent a GPU that can train a LoRA in less than 2 hours... so money really isn't the issue, is it? Also, you can get a used RTX 3090 for $800; sure, that is a lot for some people, I guess.

  • @putinninovacuna8976
    @putinninovacuna8976 3 months ago

    I mean, for Asian people you just need a single picture cause they all look the same lmao

  • @sitr2516
    @sitr2516 8 months ago +1

    The truth? I demand lies, sir! Lie to me!!!!!

    • @MonzonMedia
      @MonzonMedia  8 months ago +1

      LMAO! 😂 Ok well, the truth is...these are real photos! Deformed hands and all! Hahaha! 😬 Appreciate the good laugh, man.