DREAMBOOTH: Easiest Way to Train an AI Model for Stable Diffusion

  • Published: 2 Dec 2024

Comments • 128

  • @jonhylow1239 · a year ago +1

    5:55 Where did you use the trigger word, and what is the word exactly? It's hard to understand. Thanks

    • @RussellKlimas · a year ago

      I just replaced the generic trigger word "zwx toy" and "a toy" with my own trigger word. Then it processed automatically after I hit play.

  • @1salacious · a year ago

    Thx for the vid Russell. At 05:50 I understand how to use a trigger word in prompting (I'm using Auto1111 locally), but when training my LoRAs I don't understand where to set the trigger word. I'm confused by what you're saying here, that you went back and "used the trigger word rkkgr". Where did you do that? Where / how did you set it? Is the trigger word the Instance Prompt? I can see how you later used that trigger, but not where you actually set it.

    • @RussellKlimas · a year ago +1

      So originally in the Colab it will say "photo of zwx toy"; just replace "zwx toy" with your trigger word.

    • @1salacious · a year ago

      @RussellKlimas Thanks... so in the Colab version you're referring to the "Instance prompt"? (@ 03:54) So Instance prompt = trigger word?

    • @RussellKlimas · a year ago

      @1salacious Only the part you replace. You need to specify whether it's a drawing or a photo of whatever, and then what you want the trigger word to be.
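
      To put that in code terms, a minimal sketch (the variable names below are illustrative, not the Colab's exact field names; "rkkgr" is just the example token from the comments above):

      # Instance prompt: the class description plus your unique trigger token
      instance_prompt = "photo of rkkgr toy"   # "rkkgr" replaces the default "zwx"
      # Class prompt: the same description without the trigger token
      class_prompt = "photo of a toy"
      # At generation time, reuse the token to call up your subject:
      prompt = "a photo of rkkgr toy on a beach"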

  • @marcdonahue5986 · a year ago

    Awesome tutorial Russell!!!

  • @norbzys430 · a year ago +1

    This is the only process that gave me results, thx so much, ur da goat!

  • @SwathiK-cv3wq · 4 months ago

    Is it possible to run this without having a GPU, or on a virtual machine with just a CPU?
    I have images which mostly look similar. Is it better to have variety in the dataset, or does it also work with similar-looking data?

    • @RussellKlimas · 4 months ago

      I mean, it's potentially possible on CPU, but it would take A LONG TIME. I would just use the Google Colab link I have in the description; you can train for free that way, at least for a little while, last time I checked. The most important thing is different backgrounds. They don't have to be A LOT different, but variety is ideal: the AI needs to be able to tell what is consistent and what isn't.

  • @MarkArandjus · a year ago +7

    People always use faces to demonstrate this process, but it'd work for anything, right? Power Rangers, cactus plants, fish, buildings, etc.?

    • @RussellKlimas · a year ago +1

      Yes, it should work for anything.

    • @kendarr · a year ago

      Anything you have enough photos of to train on.

    • @nithins3648 · a year ago

      I'd like to do it with clothing, like pants, shirts, sunglasses. Is that possible?

    • @nithins3648 · a year ago

      If I'm doing clothing, which model should I use? Please reply ❤

  • @chiaowork · a year ago +1

    Hi, does anyone know how to fix this error?
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.0.0+cu118 which is incompatible.
    torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.0.0+cu118 which is incompatible.
    Successfully installed torch-2.0.0+cu118 torchaudio-2.0.1+cu118 torchvision-0.15.1+cu118
    WARNING: The following packages were previously imported in this runtime:
    [nvfuser,torch]
    You must restart the runtime in order to use newly installed versions.

    • @shadowdemonaer · a year ago +1

      If you haven't resolved it yet, have you tried uninstalling torch completely before reinstalling?

    • @ryancarper595 · 10 months ago

      Looks like you've got the wrong version of torch installed; you need the version shown above, 2.0.1.
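
      A minimal sketch of the usual fix in a Colab cell, assuming you want the CUDA 11.8 builds that pair with torch 2.0.1, followed by Runtime → Restart runtime as the warning says:

      # Reinstall a mutually consistent torch stack (cu118 builds of 2.0.1):
      !pip install --force-reinstall torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2+cu118 --index-url https://download.pytorch.org/whl/cu118
      # Then restart the runtime so the newly installed versions are imported.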

  • @jonathaningram8157 · a year ago +1

    Any idea why I get the "MessageError: RangeError: Maximum call stack size exceeded." error when uploading images for training?
    Edit: The issue was coming from Safari; can't upload images with Safari... great.

  • @skl949 · a year ago

    really a great video

    • @skl949 · a year ago

      no luck trying with a different base model though

  • @QueenLilli-i6d · a year ago

    Is it possible to use a model from Civitai or some other external site? Hugging Face doesn't have the best models.

    • @RussellKlimas · a year ago

      I don't know 100%; I would ask the maker of the repo here, they are pretty responsive: stable-diffusion-art.com/dreambooth/

  • @Juninholara21 · a year ago

    How do I fix "404 Client Error: Not Found for url (name of the model git)"? Only the Stable Diffusion model works fine for me.

    • @RussellKlimas · a year ago

      check the comments here stable-diffusion-art.com/dreambooth/ and ask him questions. He will have better answers than me.
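
      One quick sanity check for that 404 (a sketch, assuming the Colab pulls models from Hugging Face by repo id): verify the id resolves before training.

      from huggingface_hub import model_info

      # Raises a 404 error if the repo id (or the branch you request) doesn't exist:
      print(model_info("runwayml/stable-diffusion-v1-5"))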

  • @sdkjasdnap · a year ago +2

    Hey! Great tutorial. I wanted to ask in-depth about what I need to do with AI training and see if you can give me a hand. I've been generating 3D models of some characters and also making scenes with them. For example, one running. I've been looking for a way to create these scenes without having to 3D render each one. So, I've tried putting images of these characters in AI to make scenarios using them as a base, but I haven't been successful. What would you say is the best approach to solve this problem? Is it even possible to achieve what I'm asking with AI? Thanks a lot for your response.

    • @RussellKlimas · a year ago +2

      We currently don't have any way to render 3D objects with AI. The best option now is Blender or some other software, and then AI on top, really.

  • @nicoasmr7612 · a year ago

    How do I train with a different model?
    For example, I want to train with the Chilloutmix or Deliberate model.
    Is there a way to do that? 😃
    Thanks

  • @ashorii · a year ago

    thanks for this.

  • @MrRandomPlays_1987 · 7 months ago

    You saved me a lot of headache with your tutorial, so tons of thanks. Your tutorial's Colab page was the only one that worked despite being old; the rest I tried only gave me errors and never worked. It drove me crazy all day trying to find a working way to teach the model what I look like, and thankfully I stumbled upon your video, which finally managed to learn how I look, and the results are cool. So thanks; I subbed and liked the video.
    On a side note, how can I get results that capture my likeness even better yet remain flexible enough? (On the first try I used the default settings of 300 steps and let it learn on 20 photos of me; I have 145 photos in my original dataset though.) What are the best settings and base model for that?

    • @RussellKlimas · 7 months ago +1

      For this kind of flash-in-the-pan process it's going to be difficult. You'll be better off using LoRAs and training in OneTrainer or Kohya to get the most flexibility.

    • @MrRandomPlays_1987 · 7 months ago

      @RussellKlimas I see, thanks. I might still have to find a way to teach it via LoRA/Kohya then, but it's tough finding one that works well; almost every one of them has too many errors or bugs, and some of the required dependencies, like Git and Python, contain malware or such, so they are no option for me.

    • @RussellKlimas · 7 months ago

      @MrRandomPlays_1987 Do you mean some of the things you would need to download using Git and Python? If you obtain Git and Python through the official sources, you definitely shouldn't have malware.

    • @MrRandomPlays_1987 · 7 months ago

      @RussellKlimas Yeah, basically I scanned their files from the official source on 3 scanning sites, and some of the scanners detected them as malicious/containing malware.

    • @RussellKlimas · 7 months ago +1

      @MrRandomPlays_1987 Hmm, I don't know what scanning sites are, but here are the official sites for those:
      git-scm.com/downloads
      www.python.org/downloads/
      Python-wise I prefer 3.1110

  • @prathameshmoree · a year ago +2

    Hi sir, I am from India and I've been searching for this type of tutorial for a long time. Thank god I finally found your channel... do we have to pay for DreamBooth?

    • @RussellKlimas · a year ago +1

      To train models, no you don't; you just need a Gmail account.

  • @bernadettpapis6572 · a year ago

    Hi, I have a problem. When I click the play button, it says that I have a FetchError. What do I do?

    • @RussellKlimas · a year ago

      Hmmm I'm not certain. Reach out to the creator at the stable diffusion art link and ask on their blog page. They are pretty quick to respond and have helped me out before.

  • @DrysimpleTon995 · a year ago

    Does this technique only work for creating a person? Can I use this to create something like an architectural design? Or maybe something like a normal map for skin texture?

  • @blackkspot9925 · a year ago

    There is something missing here, imho. Where did the tags come from? Is SD adding these images into its premade models, then? Sorry for the wrong terminology; I'm still trying to figure out the architecture behind SD.

    • @RussellKlimas · a year ago

      What do you mean by tags? Yes, you are training the images into a model that's already been made.

    • @nithins3648 · a year ago

      @RussellKlimas Don't take me wrong: can we do this from scratch, without an existing model?

    • @RussellKlimas · a year ago

      @nithins3648 You would need millions of images to make your own model, and you would need an insane graphics card to do so, like an A6000.

  • @jerryjack6976 · a year ago +2

    Great tutorial! My example images have been coming out looking nothing like the pics I used. I used 23 pictures and tried it at 800, 1600, and 2300 steps, and none produced results that look like the pictures.

    • @RussellKlimas · a year ago +5

      I've run into a similar issue when trying to train with it lately as well. It's so annoying that the process changes so much. Going to try again right now.

    • @jerryjack6976 · a year ago +2

      @RussellKlimas Great 👍 I'd love to hear how it goes and if there are any workarounds.

    • @adohlala · a year ago +2

      @RussellKlimas Any updates?

  • @philjones8815 · a year ago

    So I'm following this but was asked to pay $5 to get access to the DreamBooth Colab. Now Google wants another $13 because my GPU type is not available... Am I getting scammed here, or do I have to pay to get this working?

    • @philjones8815 · a year ago

      I found the solution. If you get the GPU-not-available error, go to Runtime → Change runtime type and select hardware accelerator None. Now I'm stuck on 'no training weights directory found' :(

    • @RussellKlimas · a year ago

      @philjones8815 What model are you using? Depending on the model I get that error too. I know that Realistic Vision and ReV Animated work if, instead of fp16, you put main.

    • @philjones8815 · a year ago

      @RussellKlimas I was using the SD 1.5 model, but I'll try using Realistic Vision without fp16. Thank you so much for the reply.

    • @philjones8815 · a year ago +2

      I had to use the original model but enable 'compile xformers' for this process to work, even though I had xformers installed. Great tutorial Russell, and I hope people find my hair-pulling experience helpful in achieving their goals.
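
      For context on the fp16-vs-main tip above: a minimal sketch of what that toggle usually corresponds to under the hood, assuming the Colab loads models with diffusers (the repo id below is just an example; check that it exists first):

      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "SG161222/Realistic_Vision_V2.0",  # example Hugging Face repo id
          revision="main",  # use "main" when the repo has no "fp16" branch
      )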

  • @brandonharper3060 · a year ago

    So when you train on your own images, do they go into their dataset?

  • @Clare3Dx · a year ago

    Could it be that you are getting album covers because your class_prompt isn't saying that it is a person?

  • @BucharaETH · a year ago +1

    Hey! What a great video, Russell! Thank you!
    A question: why is Colab better than just using Stable Diffusion locally? Maybe I just didn't understand something in the code and so on, but they look like similar interfaces...

    • @RussellKlimas · a year ago +4

      Using this Colab is just easy for everyone, regardless of whether you run locally or not. Personally, even though I run a 4090, in the few attempts I've made at training locally the results have turned out worse than the Colab's.

    • @BucharaETH · a year ago +1

      @RussellKlimas Got it! Thank you!

  • @haktan7482 · a year ago +1

    I am getting an "OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.75 GiB total capacity; 8.17 GiB already allocated; 10.81 MiB free; 8.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF" error. Can you help me?

    • @RussellKlimas · a year ago +2

      You've used up all the free GPU time available on your Google Colab account. You can try running on CPU, or make a new Gmail account and use the free quota there. I have created several for this reason.
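
      The error text itself also suggests a mitigation worth trying first; a minimal sketch (the variable must be set before the first CUDA allocation, ideally before importing torch):

      import os

      # Cap the allocator's split size to reduce memory fragmentation:
      os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"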

  • @JimtheAIwhisperer · 11 months ago

    Didn't work for me :/ Kept getting "RangeError: Maximum call stack size exceeded."

    • @RussellKlimas · 11 months ago

      Hmmmm I've never run into that error before. Wish I could be of more help.

  • @olvaddeepfake · a year ago

    What do you mean by using this to train on as a starting base? Do you train it further on something else after this?

    • @RussellKlimas · a year ago

      The 1.5 model is a great one to start with to try this out first. Then, if you want to try other models to train with, you can branch out from there. 1.5 is just very reliable.

    • @olvaddeepfake · a year ago

      @RussellKlimas Right, ok, thanks!

  • @anaximander9215 · a year ago

    How would I then use this with ControlNet in Colab?

    • @RussellKlimas · a year ago

      If the Colab has ControlNet, you would use it the same way you would use the Colab without ControlNet. It makes no difference.

    • @anaximander9215 · a year ago

      @RussellKlimas Sorry, I don't follow. Let me rephrase my question to be a little more clear. I've used Dreambooth to train a model. I can add my prompt into the prompt input of the Dreambooth interface on Colab and the results come out looking great. But now I want to create images with this model on the ControlNet interface on Colab, so I can also use the OpenPose editor with it. How do I load the Dreambooth model into ControlNet?

    • @RussellKlimas · a year ago

      @anaximander9215 If you're talking about making a ControlNet model, I don't know how to do that. Different process.

    • @anaximander9215 · a year ago

      @RussellKlimas No, I'm not talking about making a ControlNet model; I'm talking about using the model I created in Dreambooth with ControlNet. At 8:00 you say "to keep the model within that so you can call it when you want to generate". That's what I'm trying to figure out how to do. How do I call the model in ControlNet to generate there? I'm sure that's a very basic question, but I've never used ControlNet until now, so I don't know how.

  • @theaccount4100 · a year ago

    I was excited but got an error about "no training weights". I'm seriously annoyed that I get errors every way I try this shit.

    • @RussellKlimas · a year ago

      It's most likely due to the model you are trying to call for training. I've run into that error before. Definitely hit up the guy on the Stable Diffusion Art website; I wanted to use Realistic Vision and he set it up to make it work for me.

    • @theaccount4100 · a year ago

      @RussellKlimas Bro, it's doing it on every single freaking one I choose. I've put in 9 so far, like BRUH.

    • @RussellKlimas · a year ago

      @theaccount4100 Try asking the Stable Diffusion Art guy what's wrong and sharing your errors with him. Did you try just the basic setup that's in there first?

    • @theaccount4100 · a year ago

      @RussellKlimas No, I found out most of them don't work. I found a couple that do, but it's not a bug on my part; the stuff just barely works correctly. I trained a model with 600+ pics and it looks like shit and doesn't even show the face right. I tried merging checkpoints to make the body better. It's definitely a no-go. My thing is, if deepfake programs can use datasets easily, why is it so complicated to do it from a single photo? They made it hard on purpose.

    • @RussellKlimas · a year ago

      @theaccount4100 You could be overtraining. I never use more than 30 images and 1600 steps.

  • @zen6107 · a year ago +1

    Stable Diffusion Art is now a pay site.

    • @RussellKlimas · a year ago

      But the link in the description for the Colab still works.

    • @xpertsaif · a year ago

      @RussellKlimas It no longer works, Russell; tried hard but no success, dear.

    • @xpertsaif · a year ago

      Also, can you record it again for our ease? I mean as an update to the current settings.

  • @carlosbosque66 · a year ago

    Can we do this process locally?

    • @RussellKlimas · a year ago

      Probably? If you connect it to your graphics card you could. Otherwise you can use the Automatic1111 UI with the DreamBooth extension, but I find that confusing.

    • @kendarr · a year ago

      You'll need a beefy GPU for it.

  • @mostafasamir9472 · 9 months ago

    Thanks

  • @tounsicorp1487 · 11 months ago +1

    Oh, so now it costs $5 to get access to the Colab? fuck that shit. lmao

    • @RussellKlimas · 11 months ago +2

      It does if you use his. If you use the link for the Colab in the description, you're good to go.

  • @RonnieMirands · a year ago

    Really, thanks for your videos and time :) A lot of people are saying the latest version of Dreambooth is broken. Is that true? :(

    • @RussellKlimas · a year ago

      It's been a huge pain in the butt in the latest version. I just used the fast-DreamBooth Colab, and with 25 images and 3000 steps I was able to train a custom model no problem. Apparently it also works at 1500 steps and 15 images, but I have not verified that.

    • @donutello_ · a year ago

      @RussellKlimas Does the fast-DreamBooth Colab produce the same quality, or is it worse too?

    • @RussellKlimas · a year ago

      @donutello_ The fast-DreamBooth Colab is producing good results right now.

  • @mrk6090 · a year ago +2

    45 Gigs????

  • @johnceed1663 · a year ago

    How do I train on other models?

  • @coolvisionsai · a year ago

    1,000 steps for each photo, but then you said 2,500? Should I be doing 25,000 steps, or did you mean 100 steps per photo?

    • @RussellKlimas · a year ago

      100 steps per photo, though I've had good success at 1600 steps as long as the number of photos is over, like, 14.
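
      (The arithmetic: 100 steps per photo × 25 photos = 2,500 steps, which is presumably where the 2,500 figure in the video comes from; it was never 1,000 steps per photo.)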

  • @TheTruthIsGonnaHurt · a year ago

    Why Google Drive?
    Shouldn't this be on your hard drive?

    • @RussellKlimas · a year ago

      I mean, you can run it locally if you want, but this way it's just easier.

  • @kleber1983 · a year ago +1

    45 gigs?!?

  • @deppengu · a year ago

    Is it possible to train it on videos, specifically video tutorials?
    Not this one; I mean in general. I don't really see an AI that trains on videos.

    • @RussellKlimas · a year ago

      Models technically work either way; they can work off the same base models. It comes down to the process of how the video is actually made for it to matter, similar to Gen-2. Or at least that's my understanding.

  • @BlazshoNikolov · 11 months ago

    I am truly sorry about this comment, but NOBODY, and I mean NO ONE, is actually showing how to train your own model. They always go to some database with tons of models, where you need to waste literally days to get what you need and want. HOW DO YOU TRAIN YOUR OWN MODEL FROM SCRATCH?! Like, literally not using someone else's preferences for body, face, nose, eyes, hair, skin, legs... etc. Is there ANY video that makes sense for people totally new to this, besides "click here, go there... you are done"? No, I am not. I didn't get a thing regarding what I am looking for. :(((

    • @RussellKlimas · 11 months ago +1

      Training your own model from scratch would take millions of images, super-high-end graphics cards, and, at this point in time, around $10,000. That is something I have neither the financial capability nor the hardware to do.

    • @BlazshoNikolov · 11 months ago

      @RussellKlimas I see. I have the resources but no idea where to start; too much conflicting and basic info. I was looking for deep ML in that field. Thank you for your time! Appreciated!

  • @FlashRey · a year ago

    Can I train on an object instead of real people?

  • @kendarr · a year ago +1

    Mine came out all deformed, lol.

  • @robertcrystals · 6 months ago

    They've locked the Colab behind a paywall.

    • @RussellKlimas · 6 months ago

      My version still loads up no problem; hence why it's in the description.

    • @earlcamus3056 · 3 months ago

      @RussellKlimas True, use the link in the description.

  • @trueomen5017 · a year ago +1

    "click this, click that."
    Bruh, explain what it all does, Jesus.

    • @kendarr · a year ago +2

      This is more of a how-to, not a how-it-works.

    • @vitmedia · 10 months ago

      perhaps he could also stroke your hair as he explains it all for you, maybe bring you tea?

  • @suzanazzz · a month ago

    Anyone tried to train the new SD 3.5 model that just came out?