DREAMBOOTH: Train Your Own Style Like Midjourney On Stable Diffusion

  • Published: 26 Oct 2024

Comments • 302

  • @SabyMp · 2 years ago · +109

    Do you know that your channel is the only one that helps people learn all these difficult tasks so we can do them ourselves? You are amazing for choosing this way of helping people learn and experiment. I appreciate you so much and love this channel for its educational purpose.

    • @Aitrepreneur · 2 years ago · +7

      Glad to help

    • @cliveagate · 2 years ago · +4

      I totally agree. I'm a complete beginner in the AI field, but the Ai Overlord has shown a bright light at the end of a very long and challenging tunnel 🤜🤛

    • @nemonomen3340 · 2 years ago · +7

      I agree he’s awesome, but technically I know there are at least a couple others who produce similar content.

    • @iamYork_ · 2 years ago

      I might be able to help you friend...

  • @Aitrepreneur · 2 years ago · +13

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @greendsnow · 2 years ago

      but then... how are we going to combine my 'person'al dreambooth with my 'style' class dreambooth?

    • @Aitrepreneur · 2 years ago

      You can maybe try checkpoint merger?

    • @greendsnow · 2 years ago

      @@Aitrepreneur is there such a thing :D yeay! :D

    • @JacKsCave · 2 years ago · +2

      I'm in!!! Thanks for the tutorials, but can you make one on using it on a local machine? I have a 3090 and I want to know how to install and use it locally, with my 3090.
      There is a lot of info on trying it the free way or the cloud way... no info about how to train with our own 3090.
      Thanks for all

  • @chelfyn · 2 years ago · +12

    Thank you for this awesome tutorial and all the other great work you've done recently. I am getting so much joy and satisfaction out of mastering these amazing bleeding edge tools.

  • @done.8373 · 2 years ago · +4

    I seriously don't know how you figure this out this quickly and then get a vid up within days of release. Really amazing.

  • @roboldx9171 · 2 years ago · +3

    Thank you. I would never have gotten this together on my own. This is the best channel for understanding the Ai experience. You are the best. Keep up the good work.

  • @zvit · 2 years ago · +4

    The golden nugget for me was to learn that you can type 'cmd' in the path bar to open a command prompt at that location!

  • @Neurodivergent · 2 years ago · +5

    Great info but especially mad props for having concise easy to understand videos. Not enough ppl do that anymore.

  • @muhammadshahzaib9122 · 1 year ago

    Best video till now, regarding Stable Diffusion Models... Keep it up 👍👍

  • @AscendantStoic · 2 years ago · +11

    First of all, thanks for sharing the models. Also, I think "Checkpoint Merger" in the AUTOMATIC1111 GUI (which can combine two model files into one) is now more important than ever, but it isn't really straightforward. I looked around and it's not clear what each option or choice does. Hopefully you can make a tutorial on how to use it properly, now that people might want to combine their own trained models of their likeness with the Waifu Diffusion models, or with your Midjourney or DiscoDiffusion style models.

    • @Aitrepreneur · 2 years ago · +9

      I will yes

    • @AscendantStoic · 2 years ago

      @@Aitrepreneur Ty!

    • @StillNight77 · 2 years ago · +4

      Just to give you some quick help:
      The .ckpt merger doesn't just combine two models; it mashes them together with a loss rate close to the slider percentage you input in the WebUI. This means you're diluting the data of both models heavily, and the resulting model won't be as good as either of the initial models by themselves. THAT SAID, it's very useful; I do use it a lot.
      - Click on the tab; you'll see two dropdowns and a slider bar.
      - Choose two models to combine using the dropdowns.
      - (Optional) Choose a name for the new file.
      - Move the slider bar along the track to adjust what percent (roughly; it's not exactly how it works internally) of each .ckpt model the new model will take information from.
        - Example: SD 1.4 + WD 1.3 with the slider bar at 0.3 will be 30% SD 1.4 and 70% WD 1.3.
      - Click merge and wait for the message box on the right to say "Success".
      You then have to go to Settings and click Reload at the base of the page, or restart the webui.bat launcher, to use the new model. It automatically saves to the models folder.
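A side note: the merge described above is essentially a per-weight linear interpolation between two checkpoints. A minimal sketch of that idea, with hypothetical names and plain floats standing in for the torch tensors a real ckpt holds:

```python
def merge_checkpoints(model_a, model_b, alpha):
    """Blend two state dicts: result = (1 - alpha) * A + alpha * B."""
    merged = {}
    for key in model_a:
        if key in model_b:
            merged[key] = (1 - alpha) * model_a[key] + alpha * model_b[key]
        else:
            merged[key] = model_a[key]  # keep weights unique to A as-is
    return merged

sd14 = {"unet.w1": 0.2, "unet.w2": -1.0}   # stand-in for SD 1.4 weights
wd13 = {"unet.w1": 0.8, "unet.w2": 1.0}    # stand-in for WD 1.3 weights
merged = merge_checkpoints(sd14, wd13, alpha=0.3)
print(round(merged["unet.w1"], 2))  # 0.7 * 0.2 + 0.3 * 0.8 = 0.38
```

This is also why merged models feel "diluted": every weight is pulled toward the other model, so neither style survives at full strength.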

    • @AscendantStoic · 2 years ago

      @@StillNight77 Thanks a lot, but what about the options below called "interpolation method", which offers three different ways to mix the files, as well as the "Save as float 16" option?

  • @nayandhabarde · 1 year ago · +1

    Any tips on the dataset used for training a style? Like how many landscapes, characters, and objects should there be? What type?

  • @thanksfernuthin · 2 years ago

    Good job picking samples for the Midjourney model. It's a great style. Very rich. Very beautiful. Situationally valuable.

    • @Aitrepreneur · 2 years ago · +1

      I like it too, the sky is the limit with the style training

  • @TheWizardBattle · 2 years ago

    Thanks for the video. I was already experimenting with this; it's really nice to know how someone more knowledgeable than I does it.

  • @theappointed · 2 years ago · +3

    Another great video, thanks! Being able to upload to google is a great addition as well. I was trying to figure it out myself but couldn't get it to work 👍👍

  • @PawFromTheBroons · 2 years ago · +1

    This was very gracious of you to provide the trained CKPT files.
    Thanks a *LOT*, really.

  • @amj2048 · 2 years ago · +5

    You make these videos so quickly, very impressive!

    • @Aitrepreneur · 2 years ago

      Glad you like them!

    • @EightBitRG · 2 years ago

      Seriously though, good job on staying up to date!

  • @Jukaorena · 2 years ago

    10k Bro, congrats

  • @ProducingItOfficial · 2 years ago · +3

    Aitrepreneur, if you could please explain what regularization images are and how they affect the final model, it would be greatly helpful!

  • @wk1247 · 1 year ago · +1

    In your window you don't need to type cmd; you can just do `git clone repo` in the future.
    I would also recommend that others run the pip install of torch in a venv (virtual environment) for Python modules, so you're not breaking anything in the future.
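The venv advice above can be sketched roughly like this (the environment name `sd-env` is hypothetical, and the torch install line is shown but not executed here because of its download size):

```shell
# Create an isolated environment so a CUDA-specific torch build
# can't break system-wide Python packages.
python3 -m venv sd-env
. sd-env/bin/activate          # on Windows: sd-env\Scripts\activate
python -m pip install --upgrade pip
# Inside the venv you would then run (large download, so not executed here):
#   pip install torch torchvision
python -c "import sys; print(sys.prefix)"   # prints the sd-env path
deactivate
```

Anything installed while the venv is active stays inside `sd-env`, so deleting that folder fully undoes the experiment.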

  • @mashonoid · 2 years ago · +2

    What is regularization, and what does it do? And most importantly, how do I get my own regularization images for training styles?
    Also, what I have concluded from some experiments I did is that you don't need to specify the token and class in the prompt; you just need the model loaded.
    Also, you don't need to restart Automatic1111's WebUI after changing the model; just wait for it to be loaded (see the console) and you are good to go.

  • @Efotix · 1 year ago

    Disco diffusion works! Thank you.

  • @offchan · 1 year ago

    What I learned from the video:
    1. It seems the `gdrive` CLI can both upload and download files to/from Google Drive; it doesn't only download files. And it's also fast. I used to run `runpodctl send` to upload files to colab because I thought I couldn't upload files to Google Drive directly. This is a game changer.
    2. TheLastBen's repo doesn't have a way to specify the token class, which is different from this video, so I'm not sure why.

  • @pablopietropinto5907 · 1 year ago · +3

    Sorry, I can't find how to enter the username in GitHub. When I try to download your regularization images, the files never download; I just see: Github username.
    Can you help me please?
    Great job!
    Thanks for sharing.
    Pablo.

  • @blakewbillmaier999 · 2 years ago

    You are an absolute gentle-robot. Thanks for the videos!

  • @Scorpiove · 2 years ago

    Thank you, I was hoping someone would train the Midjourney style into SD. It works great btw. :)

  • @mingranchen6938 · 2 years ago · +1

    Thanks for the wonderful tutorial. I want to know if the pre-generated regularization images are important. What I want to do is train the style of a specific anime (for example Attack on Titan, JoJo, and Ghibli). So is it better if I prepare some pictures from those anime as pre-generated regularization images before I train my style?

  • @madebyrasa · 2 years ago

    Thanks for sharing! Stunning, top-row share right there. It's really nice that you're making these videos.

  • @MarkWilder68 · 2 years ago · +1

    I find myself just sitting here waiting on your next video to see what it is. Very nice, I will definitely try this.
    Thank you.

  • @rice.flakes · 1 year ago

    This is a fantastic tutorial. Thank you a thousand times!

  • @isekai_beauty4389 · 2 years ago

    I joined your discord community!! You are awesome!!

  • @oakman8512 · 2 years ago · +6

    Very useful videos thanks a lot! I was wondering maybe you could do a video about "Stable Diffusion Infinity"? It allows infinite outpainting. I think many people would be interested in that.

  • @salvadorrobles7014 · 2 years ago

    Yes, thanks for your work man... I can't follow every day because of family and work... but I tried the colab for training a person model and IT WORKED, and you are right, not too much quality, although it would be interesting to see someone playing around with samplers to get good stuff from colab-trained models... I will try training on runpod for comparison as you did... THANKS again... I see your number of subscribers rising every day... I really appreciate you sharing your trained models on another site aside from mega... thanks anyway

  • @lithium534 · 2 years ago · +3

    Do you know a way to join a style and a trained "object" into one, so that you can have yourself or whatever in a style you trained?
    Great videos and great tutorials, well explained.

    • @crimsoncuttlefish8842 · 2 years ago

      I would bet you can train one dataset (yourself) into the original model.ckpt to make yourself.ckpt and then train the second dataset (your style) into yourself.ckpt, so you end up with yourself+style.ckpt

    • @Aitrepreneur · 2 years ago

      Checkpoint merger?

    • @lithium534 · 2 years ago

      @@crimsoncuttlefish8842 That is a good point. That should work.
      As I'm not skilled with Python, how would you load yourself into the program instead of the model that is directly downloaded from Hugging Face, since doing it locally is a no-go with only 11 GB?

    • @lithium534 · 2 years ago

      @@Aitrepreneur OK. Idk what that is but will google it tomorrow.
      Thanks.

    • @RhysAndSuns · 2 years ago

      @@lithium534 I think if you reload trained ckpts into Dreambooth, the level of corruption of understanding gets too high. You could use one model and then img2img the style on with a different model, or you could train the 2 objects in as 1.

  • @lenguyenphuoc · 1 year ago

    Very useful video, thank you man!

  • @nehoray200 · 2 years ago · +3

    Can you make a video that explains how to combine 2 ckpt files together because I have 2 characters that I want to put together?

    • @Aitrepreneur · 2 years ago · +2

      Yes that's a good idea

    • @lekistra1166 · 2 years ago

      @@Aitrepreneur Hey, can you copy what the inference cell should look like when loading a ckpt model from Drive? I keep getting a syntax error.

    • @Aitrepreneur · 2 years ago · +1

      Check my previous video, I show that I think

    • @lekistra1166 · 2 years ago

      @@Aitrepreneur it shows running it on runpod, not colab

  • @suduvanofficial2270 · 1 year ago

    thank you so much for the files bro!

  • @gigginogigetto7620 · 2 years ago

    Absolutely fantastic! Thank u so much for this tutorial and the shared folder! :D
    Subscribed for the incredible help u gave me!

  • @MA-ck4wu · 1 year ago

    Thanks for the awesome, simple-to-follow tutorial. I used Google Colab instead, since TheLastBen's dreambooth notebook doesn't require as much VRAM (used 7 GB) to train a model.

  • @beonoc · 2 years ago · +4

    Hey, I'm stuck at the part around 9:15; it's asking me to log into GitHub in the output, but I can't type anything.

    • @CampfireCrucifix · 2 years ago · +3

      I am also having the exact same issue. It says "Username for: github"

  • @Perfectblue55 · 2 years ago

    Thank you... I really enjoy and learn so much from your videos..!!! 😀👍

  • @chaks2432 · 2 years ago · +1

    Can you do a Google Colab version? Or is it the same as the previous video?

  • @andreabigiarini · 2 years ago

    Thank you for your work. You're the best!

  • @davethorn9423 · 1 month ago

    Thanks for the video, great content. I don't understand why you don't train the model locally using AUTOMATIC1111 and dreambooth; wouldn't it be a bit simpler to do that?

  • @generalawareness101 · 2 years ago

    OMG, this is what I have been waiting for. As I'm not wanting to do person models for now, I have many artists not in SD that I wanted to do DB on. Thank you for this.

  • @phiavir5594 · 2 years ago · +1

    For newcomers, I'd advise not using the community cloud on runpod. It downloads at such incredibly slow speeds that you will probably end up wasting time and money compared to secure cloud. It also just seems to get stuck when running certain cells, and you can't tell if it's doing anything or not. It sucks that the availability is so bad, but it is what it is.

    • @Aitrepreneur · 2 years ago

      Better this than nothing

    • @bladechild2449 · 2 years ago

      Dear god, absolutely this. I just spent an hour training the images only to find out Runpod wants to upload the file to Google Drive at 200 kbps. DO NOT USE THE COMMUNITY SERVERS, PEOPLE.

  • @mikemenders · 1 year ago

    I really like your videos, and I am studying DreamBooth. What I didn't see in the video is what learning rate you used for the training: 5e-6 or 1e-6?

  • @binyu335 · 2 years ago

    Thank you for your video. Since there are so many models, is there any method to merge all the different models (object, style, etc.)? That would be more convenient in the future. 😀

    • @Aitrepreneur · 2 years ago

      No, unfortunately that's not how this works...

  • @HavoJavo · 2 years ago · +1

    No need to convert embeddings to ckpt. As of the latest 1111 version you can use .pt and .bin files directly by placing them into the /embeddings folder and using the filename in the prompt. Even multiple embeddings at once work.

    • @Aitrepreneur · 2 years ago

      This is not textual inversion; this is dreambooth.

    • @alex.nolasco · 2 years ago

      That's what I had understood as well; tried it and it seems to work.

  • @ThELUzZs · 2 years ago · +1

    Hi, great tutorial and work, keep it up

    • @Aitrepreneur · 2 years ago · +1

      The checkpoint selector has been moved to the top left of the screen

  • @GarethOwenFilmGOwen · 2 years ago

    This may sound silly, but which is the better way to produce images: using a trained style in img2img, or doing all of the training and then creating images in the style with text prompts from the original images?

  • @jorgegoenaga4556 · 2 years ago

    Thank you for your amazing work.
    I cannot find the "Stable Diffusion checkpoint" list. Do you know if it has been replaced in newer versions?

  • @catrocks · 2 years ago

    Cheers for the video ♥

  • @NeonXXP · 2 years ago · +1

    This is what I've been waiting for. Time to train Cutesexyrobutts' style! The next big leap would be the ability to update and grow my ckpt, so I can add other people's trained libraries to my existing file without losing what I already have. Is that a thing yet?

    • @Aitrepreneur · 2 years ago · +1

      No, not yet

    • @StillNight77 · 2 years ago · +1

      When you download (or upload) the SD 1.4 model as a base, that's basically what you can do moving forward if you've always used SD 1.4's model. Just take the last trained model and use that as the new base. At some point it'll get super diluted, and you'd have to use new tokens for every single new style/person/etc, but they'll all be in there.

  • @kavellion · 2 years ago

    Thanks man. If you make any more ckpt files, those are awesome.

  • @isekai_beauty4389 · 2 years ago

    Love you so much man!!! You are my Programming God!!!! Thank you sooooooo much!!!!! OOOh yeaaaa!!!!!

  • @michalgonda7301 · 2 years ago · +1

    First of all, thank you, your videos are great :) ... I wanted to ask, I ran into a problem ... when I did everything with disco diffusion, my menu has the model checkpoints in the top-left corner, not in Settings :/ ... and if I choose the disco diffusion model there it shows: ( Error verifying pickled file from D:\Ai\StableDiffusion\stable-diffusion-webui-master\stable-diffusion-webui\models\Stable-diffusion\discodiffusion.ckpt: ) can you help? :D sorry to bother tho :/

    • @Aitrepreneur · 2 years ago · +1

      You can try adding the command --disable-safe-unpickle
      in the webui-user bat file so it looks like this: set COMMANDLINE_ARGS=--disable-safe-unpickle
      Then navigate to C:/users/user/.cache/ and rename the folder 'huggingface' to 'huggingfacebackup'.
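For reference, after that change a stock AUTOMATIC1111 webui-user.bat would look roughly like this (a sketch; note that --disable-safe-unpickle skips the pickle safety scan, so only use it with checkpoints you trust):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--disable-safe-unpickle

call webui.bat
```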

    • @michalgonda7301 · 2 years ago

      @@Aitrepreneur Thanks, it worked great :) ... Love your work, keep it up ;) ...

  • @hatuey6326 · 2 years ago

    Just awesome, thanks so much!!

  • @zonas7915 · 2 years ago · +3

    There is an error when I try to log in:
    Failed to load model class 'VBoxModel' from module '@jupyter-widgets/controls'
    Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is

    • @Aitrepreneur · 2 years ago · +1

      No idea where this error comes from; where does it happen?

    • @iisilas · 2 years ago · +1

      I am also having this problem; it happens where the huggingface logo should be on the login step.

  • @TheAndzhik · 2 years ago

    Could you please post your prompts in the description? (for this and for future videos).
    Thanks!

  • @verticallucas · 2 years ago · +1

    Awesome tutorial. I see lots of potential here. I'm wondering if it's possible to train a specific situation/pose, and then use that model on top of another model I've trained. For example, Mario slapping Luigi (character models) in the face (situation model). Would that be checkpoint merger?

  • @flonixcorn · 2 years ago

    Great video as always; I'm still trying to run dreambooth locally.

  • @wapzter · 2 years ago

    You are really absolutely amazing!!! Thank you very much.

  • @sasufreqchann · 2 years ago · +2

    bro, it's saying FileNotFoundError: No such file or directory: '.\\discodefstyle\\unet\\diffusion_pytorch_model.bin'

  • @leeblackharry · 1 year ago

    Is there a video guide for doing all of this on one's PC, with no internet services?

  • @TooArtistic · 2 years ago

    Sorry to sound dumb, but how can I know exactly what to put in the prompt to utilize the model? You said model and class name, but how do you know what these are in other models? And what other classes are there? In this one it's "style", but for example if I download any model, will it be in the style class? Or did I just miss where in the video you defined all of this?

    • @Aitrepreneur · 2 years ago

      I explain this when downloading the disco model from Hugging Face. It also depends on what kind of model you download and where you download it from... your question is too vague to be answered precisely.

    • @TooArtistic · 2 years ago

      @@Aitrepreneur Sorry; for example, the Waifu ckpt is pretty popular. What would you type in the prompt to utilize one like that instead of "midjourneyart style" as shown in the video?

  • @greendsnow · 2 years ago

    When will Stable Diffusion 1.5 come public? We're all training on this old technology...

    • @Aitrepreneur · 2 years ago · +2

      No one knows, but 1.5 isn't that great tbh, just a tiny bit better than 1.4.

    • @greendsnow · 2 years ago

      @@Aitrepreneur that's good to know.

  • @werewolfpreyan · 2 years ago · +1

    I am wondering if it is possible to use multiple models (for classes, styles, etc.) in a single prompt. That would open up new worlds altogether, as then we could truly create something unique from our own inspirations and styles, and mix and combine things together. Good work nonetheless; I love how quick you are in updating things, and I like the quality of your tutorials and the hard work you do. :D

    • @pabloescaparo6511 · 2 years ago · +1

      Yes. Just merge weights of trained checkpoints.

    • @werewolfpreyan · 2 years ago · +1

      @@pabloescaparo6511 What do you mean? What I meant is something like this as a prompt: Me (name) Person (class) holding an Ice Katana (sword's name) Prop/Object (class) within Zebra (building name) Building (class) in My (style name) Style (class). In this, I am using 4 self-trained classes (Person, Object, Building and Style), all within one prompt. Possible?

    • @joachim595 · 2 years ago

      @@werewolfpreyan You can merge models easily in Automatic1111, but that also means the styles will compromise each other. But try it; you might get some exciting new results mashed together :)

  • @spearcy · 2 years ago

    I didn't see why it's necessary to change any of those names you mentioned at 4:07, because those names already show up in your Python link below, the way you say they should look.

  • @GyroO7 · 2 years ago

    Great video.
    But I didn't get the regularization part very well.
    Like, should we make our own for every style, or is the one you provided good enough?

    • @Aitrepreneur · 2 years ago · +1

      You can just use mine

    • @GyroO7 · 2 years ago

      @@Aitrepreneur Thanks

  • @BryanVorkapich · 2 years ago

    Your tutorials are so helpful, thank you! I have been running the web UI on RunPod like you demoed, but I find that the UI freezes constantly, especially when using batches. I haven't found a way to fix it other than refreshing the browser and losing all the prompt details. Have you experienced this or found a way to improve performance? Thanks

    • @Aitrepreneur · 2 years ago · +1

      Yes, it happens, but it's working in the background; you can check the final images in the output folder.

  • @mohammeda-lk1kw · 1 year ago · +1

    Has anyone managed to do this locally without runpod? Runpod seems to be using very obscure versions of the requirements, which makes it almost impossible to run elsewhere.

  • @ardenTrading · 2 years ago

    Hello guys! Is it possible to use 2 models at the same time? For example, I have a trained model of me and a model of a style. How can I combine them to make a portrait in the trained style?

    • @specialK_23 · 2 years ago

      In the AUTOMATIC1111 webui you can merge the 2 models into one. Didn't work for me because of not enough memory, though.

    • @ardenTrading · 2 years ago

      @@specialK_23 thx, I’ll try it out

  • @zonas7915 · 2 years ago · +3

    It would be cool to have a video showing how to add yourself to Stable Diffusion + apply a style to yourself, because for now, with this video, I can't add myself + a style I want.

  • @jean-christophepaulau9040 · 2 years ago

    Nice vid, but a question: how can you use both a trained model of a person done with Dreambooth and apply a trained style à la DiscoDiffusion or Midjourney to this newly trained person? It involves using two ckpts at once... because in the SD settings you can only choose one model at a time... Would merging the two ckpt files be a solution to have both the trained person and the trained style available to prompt simultaneously? Thanks

  • @MatheusTassoo · 1 year ago

    I did everything that you said, but when I try to open webui-user.bat there is an error: "The file may be malicious, so the program is not going to read it.
    You can skip this check with the --disable-safe-unpickle commandline argument." How can I fix this issue? I'm trying to install disco diffusion btw.

  • @AfterAlter.x · 2 years ago

    If I wanted to train both a style and a person into the same ckpt, is that possible? So I would be able to prompt for both a person and a style without needing as many characters.

  • @westingtyler1 · 2 years ago · +2

    3:30 TIP for the "python not found" command error: if the command window says python is not found even though you've installed Python, you may need to add your Python installation path to the Windows environment variables PATH. It's pretty simple, and there are quick tutorials online for adding Python to the Windows PATH. Once I did this, both the pip command and the python command worked just fine.
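A quick sanity check for the tip above (shown with the `python3` name common on Linux/macOS; on Windows the commands are `python` and `py`, and the install dir plus its Scripts folder are what go on PATH):

```shell
# If either of these fails, the interpreter is not reachable from PATH yet.
python3 --version
python3 -m pip --version   # invoking pip via the interpreter avoids PATH issues
```

Running pip as `python -m pip` also sidesteps the separate "'pip' is not recognized" error, since only the interpreter itself needs to be on PATH.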

  • @vocally13 · 1 year ago

    2:50 stuck at git clone here; unpacking objects 100%, but nothing like "filtering", and the process keeps going without any command written in cmd.

    • @vocally13 · 1 year ago

      my cmd gets stuck when it should be updating the command line

    • @vocally13 · 1 year ago

      could it be that the style I'm trying to clone is not responding?

  • @cliveagate · 2 years ago

    Wow! Amazing tutorial, thank you Ai Overlord. I've subscribed to your channel and look forward to your follow-up videos :) Q1: is there any part of your tutorial that I could run on my MacBook Air M1 2020 (8 GB RAM) that allows me to train a Stable Diffusion model with my own pictures? And if not, Q2: could I get someone else to create ckpt files for me that I could then upload to DreamStudio?

    • @Aitrepreneur · 2 years ago · +1

      1. No, not as of right now, as far as I know.
      2. You can ask someone to create one for you, but you cannot upload it to DreamStudio. You can watch my previous video where I show you how to use a new model on runpod; it is cheap and fast.

    • @cliveagate · 2 years ago

      @@Aitrepreneur - thank you for your quick response, much appreciated. Is this the video (DREAMBOOTH Free CKPT File With Google Colab BUT Is it Worth it? Comparison Notebook Vs Colab!)?

  • @jurandfantom · 2 years ago

    Q: correct me if there is such a video, but have you shown how to perform training locally, without all those GPU rentals, colabs and notebooks?
    Second thing: would it be possible to show how to use multi-GPU systems? I found info about using nvidia-docker.

    • @Aitrepreneur · 2 years ago

      I don't have a powerful GPU so I can't show how to install it locally

  • @goatnamese · 2 years ago

    Damn you are amazing!!!

  • @FazerGS · 2 years ago

    Switching between checkpoints in the webui settings doesn't work for me. It still generates using the one it loads first. For example, it'll default to one ckpt and only use that, even when I select others from the list. I have to remove the other models from the models folder in order to use a specific one.

    • @Aitrepreneur · 2 years ago

      Maybe update to the latest version or relaunch SD

  • @lashukla · 2 years ago

    Thank you for this and other awesome tutorials. I am wondering, if we have a model with a custom face and a model with a custom style, how do we use them together? In the settings you can only choose one model at a time.

    • @Aitrepreneur · 2 years ago

      You can use checkpoint merger, I'll make a video about that soon

    • @lashukla · 2 years ago

      @@Aitrepreneur Fantastic! I have been going through your videos one by one. Great stuff. Thanks again.

  • @TheOfficialWoover · 2 years ago

    Hey Aitrepreneur, is there a chance you could upload your regularization images for "style" class so we can download them?
    Thank you for the great video!

    • @Aitrepreneur · 2 years ago

      It's on my github

    • @semaforrob · 2 years ago

      @@Aitrepreneur They deleted 700 pictures from your list. They allow only 1000 files in one directory.

    • @alfonsojarago · 1 year ago

      @@semaforrob have you found any alternative regularization images?

  • @JaY4553 · 2 years ago

    Thanks a lot for the video. I used the ddfusion style like you did in the video, but for some reason it always uses only the default one, no matter if I put its name in the prompt or change the model in the settings. Any hints on what I might have done wrong?

    • @PiotrPogorzelski · 2 years ago · +1

      As far as I know, previous versions of the web UI had problems with changing models. After you pick a model in the UI, try to kill and restart the app ;)

    • @StillNight77 · 2 years ago

      Make sure you either add "git pull" to the WebUI.bat launcher file, or manually "git pull" to keep your project updated.
      That and definitely remember to click "Apply" for your changes to save in the Settings menu.

  • @Flopproductions · 2 years ago

    Is it possible to have both a new style you create and another class (like your own image) in the same model/ckpt file?

  • @animaticmediaUSA · 1 year ago

    Great content! I am trying to install on my D: drive (not my C: drive); however, I can't seem to get past installing PyTorch to the D drive. I've done it both ways, using your suggested command and another method; however, when I try to run the convert script it says ModuleNotFoundError: No module named 'torch'. Should I reinstall everything under the C drive? Any guidance is appreciated.

  • @joes3635 · 1 year ago

    Two things...
    1. The mega zips are corrupted and won't extract in WinRAR or 7-Zip.
    2. After copying the ckpt to the /models/stable-diffusion folder (or subfolders), errors are thrown when trying to swap to that ckpt in the GUI.
    Too bad; this was looking like a cool tool to try.

  • @itscout594 · 1 year ago

    The imgur pictures don't show up like yours do; not sure what I've done wrong. I've changed the end of the URL links to my imgur links, but it still doesn't work.

  • @LinkPellow · 1 year ago

    Does this work for img2img as well, or only text-to-image?

  • @SnoMan1818 · 2 years ago

    What is the difference between hypernetworks, Google AI, and dreambooth when it comes to training?

  • @gaminghawk4794 · 2 years ago

    Hey, I tried this part and pip install torch won't work; it says ('pip' is not recognized as an internal or external command, operable program or batch file.) so it's not an environment variable issue; did I type it wrong?

  • @MaximeMalters · 11 months ago

    If you feed it a style, like all the Super Mario World sprites, can one hope to have it produce new sprites in this style?

  • @gdizzzl · 2 years ago · +1

    Will it work if my images aren't 512x512?

  • @fallency4 · 2 years ago

    Great video !

  • @joachim595 · 2 years ago

    @Aitrepreneur Did you train on images you generated yourself? Because if not, that's not allowed without consent from the user who generated them. I asked the mods about this because I myself wanted to share MJ-trained models with others.

    • @Aitrepreneur · 2 years ago

      I found all the images on Google; there is no copyright on images generated with AI, so you don't need permission.

  • @schmidbeda9866 · 1 year ago

    The "download normalisation images" notebook asks for a GitHub username and password, which cannot be passed to the git clone URL since authentication with a password has been discontinued. Also, it should not be required at all anyway.

  • @Shykar0 · 2 years ago

    Would you mind explaining the SD 1.4 model file correlation?
    Like, why did you add bridges and stuff to the 1500 refs of persons, when the AI doesn't know those are bridges? Or did you just hope that it might mix the normal person output with bridges and stuff?
    Just asking to better understand what I'd have to put in the original model file before training to enhance the outcome.
    Thank you!!

    • @Aitrepreneur · 2 years ago · +1

      Just thought that slightly more varied images would allow for a better outcome, simple as that.

  • @TheGameLecturer · 2 years ago

    On my Vast.ai machine the Google Drive trick didn't work; my access was denied... But surprisingly, the "normal" download only took a minute or so (with "download as a zip"; the simple download returned an error).

  • @sebR.999 · 2 years ago

    Hello K! I'm having some trouble using your regularization images on RunPod, as the regularization cell basically doesn't clone anything as soon as I put your repo in. The training cell doesn't do anything either. I checked for any mistake on my side, but everything seems correct. For now, I switched back to the djbielejeski repo. Thanks for any help.

    • @Aitrepreneur · 2 years ago

      Check my description for the right command

    • @sebR.999 · 2 years ago

      @@Aitrepreneur Yes, thx, I missed that one, sorry!

  • @kallamamran · 2 years ago · +1

    Nice tempo!