Sorry to sound dumb, but how can I know exactly what to put in the prompt to use the model? You said model and class name, but how do you know what these are in other models? And what other classes are there? In this one it's "style", but if I download some other model, will it also be in the style class? Or did I just miss the part of the video where you defined all of this?
I explain this when downloading the Disco model from Hugging Face. It also depends on what kind of model you download and where you download it from... your question is too vague to be answered precisely
@@Aitrepreneur Sorry, for example the Waifu ckpt is pretty popular. What would you type in the prompt to use one like this, instead of "midjourneyart style" as shown in the video?
I am wondering if it is possible to use multiple models (for classes, styles, etc.) in a single prompt. That would open up whole new worlds, as then we could truly create something unique from our own inspirations and styles, and mix and combine things together. Good work nonetheless, I love how quick you are at updating things, and I like the quality of your tutorials and the hard work you do. :D
@@pabloescaparo6511 What do you mean? What I meant is something like this as a prompt: Me (name) Person (class) holding an Ice Katana (sword's name) Prop/Object (class) within Zebra (building name) Building (class) in the My (style name) Style (class). In this, I am using four self-trained classes (Person, Object, Building and Style) all within one prompt. Possible?
@@werewolfpreyan You can merge models easily in Automatic1111, but that will also mean that the styles will compromise each other. But try it, you might get some new exciting results mashed together :)
I didn't see why it's necessary to change any of those names you mentioned at 4:07, because those names already show up on your python link below the way you say they should look.
Your tutorials are so helpful, thank you! I have been running the web ui on RunPod like you demo'd but I find that the UI freezes constantly, especially when using batches. I haven't found a way to fix it, other than refreshing the browser and losing all the prompt details. Have you experienced this or found a way to improve performance? Thanks
Anyone managed to do this locally without runpod? Runpod seems to be using very obscure versions of the requirements that makes it almost impossible to run elsewhere.
Hello guys! Is it possible to use 2 models at the same time? For example, I have a trained model of me and a model of a style. How can I combine them to make a portrait in the trained style?
Would be cool to have a video showing how to add yourself to Stable Diffusion + apply a style to yourself, because with this video alone I can't add myself + a style I want
Nice vid, but a question: how can you use both a trained person model done with DreamBooth and apply a trained style à la Disco Diffusion or Midjourney to this newly trained person? It involves using two ckpts at once... because in the SD settings you can only choose one model at a time... Would merging the two ckpt files be a solution to have the trained person and the trained style available to prompt simultaneously?? Thanks
I did everything that you said, but when I try to open webui-user.bat there is an error: "The file may be malicious, so the program is not going to read it. You can skip this check with --disable-safe-unpickle commandline argument." How can I fix this issue? I'm trying to install Disco Diffusion btw
If I wanted to train both a style and a person into the same ckpt, is that possible? Then I would be able to prompt for both a person and a style without needing as many characters
3:30 TIP for the "python not found" command error: if the command window says python is not found even though you've installed Python, you may need to add your Python installation path to the Windows environment variables Path. It's pretty simple, and there are quick tutorials online for adding Python to the Windows Path. Once I did this, both the pip command and the python command worked just fine.
Wow! Amazing tutorial, thank you Ai Overlord. I've subscribed to your channel and look forward to your follow-up videos :) Q1: is there any part of your tutorial that I could run on my MacBook Air M1 2020 (8GB RAM) that allows me to train a Stable Diffusion model with my own pictures? And if not, Q2: could I get someone else to create CKPT files for me that I could then upload to DreamStudio?
1. No, not as of right now, as far as I know. 2. You can ask someone to create one for you, but you cannot upload it to DreamStudio. You can watch my previous video where I show how to use a new model on RunPod; it is cheap and fast
@@Aitrepreneur - thank you for your quick response, much appreciated. Is this the video (DREAMBOOTH Free CKPT File With Google Colab BUT Is it Worth it? Comparison Notebook Vs Colab!)?
Q: If there is already such a video, don't mind correcting me: have you shown how to perform training locally, without all those GPU rentals, Colabs and notebooks? Second thing: would it be possible to show how to use multi-GPU systems? I found info about using nvidia-docker
Switching between checkpoints in the webui settings doesn't work for me. It still generates using the one that it uses when it first loads. For example, it'll default to one ckpt and only use that, even when I select others from the list. I have to remove the other models from the models folder in order to use a specific one.
Thank you for this and other awesome tutorials. I am wondering if we have a model with custom face and a model with custom style, how do we use them together? In the settings you can only choose one model at a time.
Hey Aitrepreneur, is there a chance you could upload your regularization images for "style" class so we can download them? Thank you for the great video!
Thank you a lot for the video. I used the ddfusion style like you did in the video, but for some reason it always only uses the default one, no matter if I put its name in the prompt or if I change the model in the settings. Any hints on what I might have done wrong?
Make sure you either add "git pull" to the WebUI.bat launcher file, or manually "git pull" to keep your project updated. That and definitely remember to click "Apply" for your changes to save in the Settings menu.
Great content! I am trying to install on my D: drive (not my C drive) however I can't seem to get past installing PyTorch to the D drive . I've done it both ways using your suggested command and another method; however when I try to run the convert script it says - ModuleNotFoundError : No module named 'torch' Should I reinstall everything under the C drive? Any guidance is appreciated.
Two things... 1. The mega zips are corrupted and won't extract in winrar or 7zip 2. After copying the ckpt to the /models/stable-diffusion folder (or subfolders) errors are being thrown after trying to swap to that ckpt in the GUI. Too bad, this was looking like a cool tool to try.
The Imgur pictures don't show up like yours do, and I'm not sure what I've done wrong. I've changed the end of the URL links to my Imgur links, but it still does not work?
Hey, I tried this part and pip install torch won't work; it says "'pip' is not recognized as an internal or external command, operable program or batch file." So it's not the environment variable; did I type it wrong?
@Aitrepreneur Did you train on images you generated yourself? Because if not that’s not allowed without consent from the user that generated them. I asked the mods about this because I myself wanted to share MJ trained models with others.
The notebook "download normalisation images" asks for a GitHub username and password, which cannot be passed to the git clone URL since password authentication has been discontinued. Also, it should not be required at all anyway.
Would you mind explaining the SD 1.4 model file correlation? Like, why did you add bridges and such to the 1500 refs of persons when the AI doesn't know those are bridges? Or did you just hope that it might mix the normal person output with bridges and such? Just asking to better understand what I'd have to put in the original model file before training to improve the outcome. Thank you!!
On my Vast.ai machine, the Google Drive trick didn't work; my access was denied... But surprisingly, the "normal" download only took a minute or so (with "download as a zip"; the simple download returned an error).
Hello K! I have some trouble using your regularization images on RunPod, as the regularization cell basically doesn't clone anything as soon as I put your repo in. The training cell doesn't do anything either. I checked for any mistake on my side, but everything seems correct. For now, I've switched back to the djbielejeski repo. Thanks for any help.
Do you know that your channel is the only one that helps people to learn all these difficult tasks to do it ourselves, you are amazing that you choose this way of helping people to learn and experiment, i appreciate you so much and love this channel for its educational purpose.
Glad to help
I totally agree. I'm a complete beginner in the AI field, but the Ai Overlord has shown a bright light at the end of a very long and changing tunnel 🤜🤛
I agree he’s awesome, but technically I know there are at least a couple others who produce similar content.
I might be able to help you friend...
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
but then... how are we going to combine my 'person'al dreambooth with my 'style' class dreambooth?
You can maybe try checkpoint merger?
@@Aitrepreneur is there such a thing :D yeay! :D
I'm in!!! Thanks for the tutorials, but can you make one on using it on a local machine? I have a 3090 and I want to know how to install and use it locally, with my 3090.
There is a lot of info on trying it for free or in the cloud... no info about how to train with our own 3090.
Thanks for everything
Thank you for this awesome tutorial and all the other great work you've done recently. I am getting so much joy and satisfaction out of mastering these amazing bleeding edge tools.
I seriously don't know how you figure this out this quickly and then get a vid up within days of release. Really amazing.
Thank you. I would never have gotten this together on my own. This is the best channel for understanding the Ai experience. You are the best. Keep up the good work.
Glad to help!
The golden nugget for me was to learn that you can type 'cmd' in the path bar to open a command prompt at that location!
Great info but especially mad props for having concise easy to understand videos. Not enough ppl do that anymore.
Thanks!
Best video till now, regarding Stable Diffusion Models... Keep it up 👍👍
First of all, thanks for sharing the models. Also, I think the "Checkpoint Merger" in the AUTOMATIC1111 GUI (which can combine two model files into one) is now more important than ever, but it isn't really straightforward. I looked around and it's not clear what each option or choice does. Hopefully you can make a tutorial on how to use it properly, now that people might want to combine their own trained likeness models with the Waifu Diffusion models, or with your Midjourney or DiscoDiffusion style models.
I will yes
@@Aitrepreneur Ty!
Just to give you some quick help:
The .ckpt merger doesn't just combine two models together; it mashes them with a loss rate close to the slider percentage you input in the WebUI. This means that you're diluting the data of both models heavily, and the resulting model won't be as good as either of the initial models by themselves. THAT SAID, it's very useful; I do use it a lot.
- Click on the tab > See two dropdowns and a slider bar.
- Choose two models to combine using the dropdowns.
- (Optional) Choose a name for the new file.
- Move the slider bar along the track to adjust what percent (roughly, it's not exactly how it works internally) of each .ckpt model the new model will have information from.
  - Example: SD 1.4 + WD 1.3 with the slider bar at 0.3 will be 30% SD 1.4, and 70% WD 1.3
- Click merge and wait for the message box on the right to say "Success"
You have to go to Settings and click Reload at the base of the page, or restart the WebUI.bat launcher to use the new model. It automatically saves to the models folder.
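The arithmetic behind that slider can be sketched in Python. This is a simplified illustration over plain dictionaries of numbers; a real merge applies the same formula to the torch tensors in each checkpoint's state dict, and the exact slider convention may differ between WebUI versions. Here I assume the common weighted-sum convention merged = (1 - alpha) * A + alpha * B:

```python
def merge_checkpoints(state_a, state_b, alpha):
    """Weighted-sum merge of two model state dicts.

    alpha = 0.0 returns model A unchanged; alpha = 1.0 returns model B.
    A real .ckpt merge applies this same formula to torch tensors.
    """
    merged = {}
    for key, value in state_a.items():
        if key in state_b:
            merged[key] = (1.0 - alpha) * value + alpha * state_b[key]
        else:
            # Keys present in only one model are copied over unchanged.
            merged[key] = value
    return merged

# Toy example with scalar "weights" standing in for tensors:
a = {"layer.w": 1.0, "layer.b": 0.0}
b = {"layer.w": 3.0, "layer.b": 2.0}
merged = merge_checkpoints(a, b, 0.3)
print({k: round(v, 6) for k, v in merged.items()})  # → {'layer.w': 1.6, 'layer.b': 0.6}
```

This also makes the dilution point above concrete: every weight ends up partway between the two models, so neither model's behavior survives intact.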
@@StillNight77 Thanks a lot, but what about the options below called "Interpolation Method", which has three different ways to mix the files, as well as the "Save as float16" option?
Any tips on the dataset used for training a style? Like how many landscapes, characters, and objects should be there? What type?
Good job picking samples for the Midjourney model. It's a great style. Very rich. Very beautiful. Situationally valuable.
I like it too, the sky is the limit with the style training
Thanks for the video, I was already experimenting with this it's really nice to know how someone else more knowledgeable than I does it.
Another great video, thanks! Being able to upload to google is a great addition as well. I was trying to figure it out myself but couldn't get it to work 👍👍
This was very gracious of you to provide the trained CKPT files.
Thanks a *LOT*, really.
You make these videos so quickly, very impressive!
Glad you like them!
Seriously though, good job on staying up to date!
10k Bro, congrats
Thanks ;)
Aitrepreneur, if you could please explain what regularization images are and how they affect the final model, it would be greatly helpful!
In your window you don't need to type cmd; you can just do `git clone repo` in the future.
I would also recommend that others run pip install torch in a venv (virtual environment) for Python modules, so you're not breaking anything in the future.
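That workflow might look like this as a minimal sketch (the directory name `venv` is just a convention; the final line confirms the environment is active):

```shell
# Create an isolated environment next to the project
python3 -m venv venv

# Activate it (on Windows cmd this would be: venv\Scripts\activate)
. venv/bin/activate

# From here, installs go into ./venv rather than the system Python, e.g.:
# pip install torch

# Sanity check: the active interpreter now lives inside the venv
python -c 'import sys; print(sys.prefix != sys.base_prefix)'  # → True
```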
What is regularization, and what does it do? And most importantly, how do I get my own regularization images for training styles?
Also, what I've concluded from some experiments is that you don't need to specify the token and class in the prompt; you just need the model loaded.
Also, you don't need to restart AUTOMATIC1111's WebUI after changing the model; just wait for it to be loaded (see the console) and you are good to go.
Disco diffusion works! Thank you.
What I learned from the video:
1. It seems the `gdrive` CLI can both upload and download files to/from Google Drive. It doesn't only download files. And it's also fast. I used to run `runpodctl send` to upload files to colab because I thought I couldn't upload files to Google Drive directly. This is a game changer.
2. TheLastBen's repo doesn't have a way to specify the token class, which is different from this video, so I'm not sure why.
Sorry, I can't find how to enter the GitHub username. When I try to download your regularization images, it never downloads the files; I just see: "Github username".
Can you help me please?
Great job!
Thanks for sharing.
Pablo.
You are an absolute gentle-robot. Thanks for the videos!
Thanks for watching!
Thank you, I was hoping someone would train the Midjourney style into SD. It works great btw. :)
Thanks for the wonderful tutorial. I want to know: are the pre-generated regularization images important? What I want to do is train the style of a specific anime (for example Attack on Titan, JoJo, or Ghibli). So is it better if I prepare some pictures from those anime as pre-generated regularization images before I train my style?
Thanks for sharing! Stunning, top-row share right there. It's really nice that you're making these videos.
My pleasure!
I find myself just sitting here waiting on your next video to see what it is, very nice, I will definitely try this.
Thank you.
Hope you enjoy!
This is a fantastic tutorial. Thank you thousand times!
I joined your discord community!! You are awesome!!
Very useful videos thanks a lot! I was wondering maybe you could do a video about "Stable Diffusion Infinity"? It allows infinite outpainting. I think many people would be interested in that.
It's on the list yes
Yes, thanks for your work man... I can't follow every day because of family and work... but I tried the Colab for training a person model and IT WORKED. And you are right, not too much quality, although it would be interesting for someone to play around with samplers to get good results from Colab-trained models... I will try training on RunPod for comparison, as you did... THANKS again... I see your number of subscribers rising every day... I really appreciate you sharing your trained models on another site besides Mega... thanks anyway
Thanks ;)
Do you know a way to join a style and a trained "object" into one, so that you can have yourself (or whatever) in a style you trained?
Great videos and great tutorials, well explained.
I would bet you can train one dataset (yourself) into the original model.ckpt to make yourself.ckpt and then train the second dataset (your style) into yourself.ckpt, so you end up with yourself+style.ckpt
Checkpoint merger?
@@crimsoncuttlefish8842 That is a good point. That should work.
As I'm not skilled with Python, how would you load yourself.ckpt into the program instead of the model downloaded directly from Hugging Face? Doing it locally is a no-go with only 11 GB.
@@Aitrepreneur OK. Idk what that is but will google it tomorrow.
Thanks.
@@lithium534 I think if you reload trained ckpts into DreamBooth, the level of corruption of its understanding gets too high. You could use one model and then img2img the style on with a different model, or you could train the two objects in as one
Very useful video, thank you men!
Can you make a video that explains how to combine 2 ckpt files together because I have 2 characters that I want to put together?
Yes that's a good idea
@@Aitrepreneur Hey, can you show what the inference cell should look like when loading a ckpt model from Drive? I keep getting a syntax error
Check my previous video, I show that I think
@@Aitrepreneur It shows running it on RunPod, not Colab
thank you so much for the files bro!
Absolutely fantastic! Thank u so much for this tutorial and the shared folder! :D
Subscribed for the incredible help u gave me!
Glad it helped!
@@Aitrepreneur You sure did, you sure did! Thank u again
Thanks for the awesome, simple-to-follow tutorial. I used Google Colab instead, since LastBen's DreamBooth notebook doesn't require as much VRAM (it used 7 GB) to train a model
Hey, I'm stuck at the part around 9:15; it's asking me to log into GitHub in the output, but I can't type anything
I am also having the exact same issue. It says "Username for: github"
Thank you... I really enjoy and learn so much with your videos..!!! 😀👍
My pleasure!
Can you do a Google Colab version? Or is it the same as the previous video?
Thank you for your work. You're the best!
Thanks for the video, great content. I don't understand why you don't train the model locally using AUTOMATIC1111 and DreamBooth; wouldn't it be a bit simpler to do that?
OMG, this is what I have been waiting for. As I am not wanting to do person models for now, I have many artists not in SD that I wanted to do DreamBooth on. Thank you for this.
For newcomers, I'd advise not using the community cloud on RunPod. It downloads at incredibly slow speeds, so you will probably end up wasting time and money compared to secure cloud. It also just seems to get stuck when running certain cells, and you can't tell if it's doing anything or not. It sucks that the availability is so bad, but it is what it is
Better this than nothing
Dear god absolutely this. I just spent an hour training the images to then find out Runpod wants to upload the file to google drive at 200 kbps. DO NOT USE THE COMMUNITY SERVERS PEOPLE
I really like your videos, and I am studying DreamBooth. What I didn't see in the video is what Learning Rate you used for the training. 5e-6 or 1e-6?
Thank you for your video. Since there are so many models, is there any method to merge all the different models (object, style, etc.)? That would be more convenient in the future. 😀
No, unfortunately that's not how this works...
No need to convert embeddings to ckpt. As of the latest 1111 version you can use .pt and .bin files directly by placing them into the /embeddings folder and using the filename in the prompt. Even multiple embeddings at once work.
This is not textual inversion, this is dreambooth
That’s what I had understood as well , tried it and it seems to work.
Hi great tutorial and work, keep it up
The checkpoint selector has been moved to the top left of the screen
This may sound silly, but which is better for producing images: using a trained style in img2img, or doing all of the training and then creating images in the style with text prompts from the original images?
Thank you for your amazing work.
I cannot find the "Stable diffusion checkpoint list". Do you know if it has been replaced in newer versions?
It's on the top left side of the ui now
Already found it, thank you.
Cheers for the video ♥
This is what I've been waiting for. Time to train Cutesexyrobutts' style! The next big leap would be the ability to update and grow my CKPT so I can add other people's trained libraries to my existing file without losing what I already have. Is that a thing yet?
No, not yet
When you download (or upload) the SD 1.4 model as a base, that's basically what you can do moving forward if you've always used SD 1.4's model. Just take the last trained model and use that as the new base. At some point it'll get super diluted, and you'd have to use new tokens for every single new style/person/etc, but they'll all be in there.
Thanks man. If you make anymore ckpt files. Those are awesome.
Love you so much man!!! You are my Programming God!!!! Thank you sooooooo much!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! OOOh yeaaaa!!!!!
First of all, thank you, your videos are great :) ... I wanted to ask you, I ran into a problem... when I did everything with Disco Diffusion, my menu has the model checkpoints in the top left corner, not in Settings :/ ... and if I choose the Disco Diffusion model there, it shows: ( Error verifying pickled file from D:\Ai\StableDiffusion\stable-diffusion-webui-master\stable-diffusion-webui\models\Stable-diffusion\discodiffusion.ckpt: ) Can you help? :D Sorry to bother you tho :/
You can try adding the --disable-safe-unpickle command in the webui-user bat file, so the line looks like this: set COMMANDLINE_ARGS=--disable-safe-unpickle
Then navigate to C:/users/user/.cache/ and rename the folder 'huggingface' to 'huggingfacebackup'.
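For reference, here's roughly what the whole webui-user.bat ends up looking like with that argument added (the other lines are the stock defaults shipped with AUTOMATIC1111's webui; yours may differ):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
:: Warning: --disable-safe-unpickle skips the malicious-pickle check,
:: so only use it with .ckpt files from sources you trust.
set COMMANDLINE_ARGS=--disable-safe-unpickle

call webui.bat
```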
@@Aitrepreneur Thanks it worked great :) ... Love your work keep it up ;) ...
Just awesome thanks so much !!
There is an error when I try to login;
Failed to load model class 'VBoxModel' from module '@jupyter-widgets/controls'
Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is
No idea where this error comes from; where does this happen?
I am also having this problem; it happens where the Hugging Face logo should be, on the login step.
Could you please post your prompts in the description? (for this and for future videos).
Thanks!
Awesome tutorial. I see lots of potential here. I'm wondering if it's possible to train a specific situation/pose, and then on top of that use this model with another model I've trained. For example, Mario slapping Luigi (character models) in the face (situation model). Would that be checkpoint merger?
Great video like always, I'm still trying to run Dreambooth locally.
You can do it!
You are really absolutely amazing!!! Thank you very much
Glad it helped!
Bro, it's saying FileNotFoundError: No such file or directory: '.\\discodefstyle\\unet\\diffusion_pytorch_model.bin'
me too
Is there a video guide for doing all this on one's own PC, with no services on the internet?
Sorry to sound dumb, but how can I know exactly what to put in the prompt to utilize the model? You said model and class name, but how do you know what these are in other models? And what other classes are there? In this one it's "style", but for example, if I download any model, will it be in the style class? Or did I just miss where in the video you defined all of this?
I explain this when downloading the disco model from Hugging Face. It also depends on what kind of model you download and where you download it from... your question is too vague to be answered precisely.
@@Aitrepreneur Sorry, for example, the Waifu ckpt is pretty popular. What would you type in the prompt to utilize one like this, instead of "midjourneyart style" as shown in the video?
When will Stable Diffusion 1.5 come public? We're all training on this old technology...
No one knows, but 1.5 isn't that great tbh, just a tiny bit better than 1.4.
@@Aitrepreneur that's good to know.
I am wondering if it is possible to use multiple models (for classes, styles, etc.) and use them in a single prompt. That would open up whole new worlds, as then we could truly create something unique from our own inspirations and styles, and mix and combine things together. Good work nonetheless, I love how quick you are in updating things, and I like the quality of your tutorials and the hard work you do. :D
Yes. Just merge weights of trained checkpoints.
@@pabloescaparo6511 What do you mean? What I meant is something like this as a prompt: Me (name) Person (class) holding an Ice Katana (sword's name) Prop/Object (class) within Zebra (building name) Building (class) in the My (style name) Style (class). In this, I am using four self-trained classes, Person, Object, Building and Style, all within one prompt. Possible?
@@werewolfpreyan You can merge models easily in Automatic1111, but that will also mean that the styles will compromise each other. But try it, you might get some new exciting results mashed together :)
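For anyone curious what the merge actually does under the hood, it's essentially a weighted interpolation of the two models' weights, which is also why merged styles dilute each other. A minimal sketch (plain floats stand in for the torch tensors a real .ckpt contains, and the exact slider semantics in AUTOMATIC1111 may differ slightly):

```python
# Sketch of a weighted checkpoint merge: every weight shared by the
# two models is linearly interpolated by the slider multiplier.
# Real state dicts map layer names to torch tensors; plain floats
# are used here so the idea is clear without any dependencies.

def merge_state_dicts(model_a, model_b, multiplier):
    """result = (1 - multiplier) * A + multiplier * B, per weight."""
    merged = {}
    for key, value in model_a.items():
        if key in model_b:
            merged[key] = (1 - multiplier) * value + multiplier * model_b[key]
        else:
            merged[key] = value  # keys unique to model A are kept as-is
    return merged

# Hypothetical toy "checkpoints" with two weights each:
sd14 = {"unet.weight": 1.0, "vae.weight": 0.5}
wd13 = {"unet.weight": 3.0, "vae.weight": 1.5}

merged = merge_state_dicts(sd14, wd13, 0.3)
print(merged["unet.weight"])  # ≈ 1.6 (70% of 1.0 + 30% of 3.0)
```

So at a multiplier of 0.3 every weight ends up 70% model A and 30% model B: neither style is added on top of the other, both are blended, which matches the dilution people observe.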
I didn't see why it's necessary to change any of those names you mentioned at 4:07, because those names already show up on your python link below the way you say they should look.
Great video
But I didn't get the regularisation part very well.
Like, should we make our own for every style, or is the set you provided good enough?
You can just use mine
@@Aitrepreneur Thanks
Your tutorials are so helpful, thank you! I have been running the web ui on RunPod like you demo'd but I find that the UI freezes constantly, especially when using batches. I haven't found a way to fix it, other than refreshing the browser and losing all the prompt details. Have you experienced this or found a way to improve performance? Thanks
Yes, it happens but it's working in the background, you can check the final images in the output folder
Anyone managed to do this locally without runpod? Runpod seems to be using very obscure versions of the requirements that makes it almost impossible to run elsewhere.
Hello guys! Is is possible to use 2 models at the same time? For example, I have a trained model of me and a model of a style. How can I combine them to make a portrait in the trained style?
In AUTOMATIC1111 webui you could merge the 2 models into one. Didn't work for me because not enough memory though
@@specialK_23 thx, I’ll try it out
Would be cool to have a video showing how to add yourself to stablediffusion + apply a style to yourself because for now on this video, I can't add myself + a style I want
Nice vid, but a question: how can you use both a trained model of a person done with Dreambooth and apply a trained style (Disco Diffusion or Midjourney) to this newly trained person? It involves using two ckpt files at once... because in the SD settings you can only choose one model at a time. Would merging the two ckpt files be a solution to prompt for both the trained person and the trained style simultaneously?? Thanks
I did everything that you said, but when I try to open webui-user.bat there is an error: "The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument." How can I fix this issue? I'm trying to install Disco Diffusion btw.
If I wanted to train both a style and a person into the same ckpt, is that possible? So I would be able to prompt for both a person and a style without as many characters needed.
3:30 TIP for the "python not found" command error: if the command window says python is not found even though you've installed Python, you may need to add your Python installation path to the Windows environment variables Path. It's pretty simple, and there are quick tutorials online for adding Python to the Windows Path. Once I did this, both the pip command and the python command worked just fine.
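To add to this tip, the check can be done from a fresh cmd window (PATH changes only apply to newly opened terminals; the install folders below are examples, yours depend on your Python version and install location):

```bat
:: Does Windows know where python/pip live?
where python
where pip

:: If nothing is found, add your install folders to the user PATH via
:: Settings > System > About > Advanced system settings > Environment Variables,
:: e.g. (example paths only):
::   C:\Users\<you>\AppData\Local\Programs\Python\Python310\
::   C:\Users\<you>\AppData\Local\Programs\Python\Python310\Scripts\
:: then open a NEW cmd window and verify:
python --version
pip --version
```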
2:50 I'm stuck at git clone here: unpacking objects reaches 100%, but nothing like "filtering" appears, and the process just keeps going without writing any further output in cmd.
My cmd is stuck when it should be updating the progress line.
Could it be that the repo with the style I'm trying to clone is not responding?
Wow! Amazing tutorial, thank you Ai Overlord. I've subscribed to your channel and look forward to your follow-up videos :) Q1: is there any part of your tutorial that I could run on my MacBook Air M1 2020 (8GB RAM) that allows me to train a Stable Diffusion model with my own pictures? And if not, Q2: could I get someone else to create ckpt files for me that I could then upload to DreamStudio?
1. No not as of right now as far as I know
2. You can ask someone to create one for you but you cannot upload it to Dreamstudio, you can watch my previous video where I show you how to use a new model on runpod, it is cheap and fast
@@Aitrepreneur - thank you for your quick response, much appreciated. Is this the video (DREAMBOOTH Free CKPT File With Google Colab BUT Is it Worth it? Comparison Notebook Vs Colab!)?
Q: if there is such a video, don't mind correcting me. Have you shown how to perform training locally, without all the GPU renting, Colabs, and notebooks?
Second thing: would it be possible to show how to use multi-GPU systems? I found info about using nvidia-docker.
I don't have a powerful GPU so I can't show how to install it locally
Damn you are amazing!!!
Switching between checkpoints in the webui settings doesn't work for me. It still generates using the one that it uses when it first loads. For example, it'll default to one ckpt and only use that, even when I select others from the list. I have to remove the other models from the models folder in order to use a specific one.
Maybe update to the latest version or relaunch SD
Thank you for this and other awesome tutorials. I am wondering if we have a model with custom face and a model with custom style, how do we use them together? In the settings you can only choose one model at a time.
You can use checkpoint merger, I'll make a video about that soon
@@Aitrepreneur Fantastic! I have been going through your videos one by one. Great stuff. Thanks again,
Hey Aitrepreneur, is there a chance you could upload your regularization images for "style" class so we can download them?
Thank you for the great video!
It's on my github
@@Aitrepreneur They deleted 700 pictures from your list. They allow only 1000 files in one directory.
@@semaforrob Have you found any alternative regularization images?
Thank you a lot for the video. I used the ddfusion style like you did in the video, but for some reason it always only uses the default one, no matter if I put its name in the prompt or if I change the model in the settings. Any hints on what I might have done wrong?
As far as I know, previous versions of the web UI had problems with changing models. After you pick a model in the UI, try to kill and restart the app ;)
Make sure you either add "git pull" to the WebUI.bat launcher file, or manually "git pull" to keep your project updated.
That and definitely remember to click "Apply" for your changes to save in the Settings menu.
Is it possible to have both a new style you create and another class (like your own image) in the same model/ckpt file?
Great content! I am trying to install on my D: drive (not my C: drive); however, I can't seem to get past installing PyTorch to the D drive. I've done it both ways, using your suggested command and another method; however, when I try to run the convert script it says ModuleNotFoundError: No module named 'torch'. Should I reinstall everything under the C drive? Any guidance is appreciated.
Two things...
1. The mega zips are corrupted and won't extract in winrar or 7zip
2. After copying the ckpt to the /models/stable-diffusion folder (or subfolders) errors are being thrown after trying to swap to that ckpt in the GUI.
Too bad, this was looking like a cool tool to try.
The Imgur pictures don't show up like yours do; not sure what I've done wrong. I've changed the end of the URL links to my Imgur links, and it still does not work?
did you figure this out?
Does this work for img2img as well, or only text-to-image?
What is the difference between a hypernetwork, Google AI, and Dreambooth when it comes to training?
Hey, I tried this part and pip install torch won't work; it says ('pip' is not recognized as an internal or external command,
operable program or batch file.) So is it an environment variable issue, or did I type it wrong?
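In case it helps, a common workaround when pip isn't on PATH but Python itself launches is to invoke pip as a module; these are standard CPython and Windows-launcher commands:

```bat
:: Runs the pip bundled with whichever "python" Windows finds,
:: so pip.exe itself never needs to be on PATH:
python -m pip install torch

:: If "python" isn't recognized either, the Windows py launcher
:: installed alongside Python usually still works:
py -m pip install torch
```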
If you feed it a style like all the Super Mario World sprites, can one hope to have it produce new sprites in this style?
Will it work if my images aren't 512x512?
Great video !
Thanks!
@Aitrepreneur Did you train on images you generated yourself? Because if not that’s not allowed without consent from the user that generated them. I asked the mods about this because I myself wanted to share MJ trained models with others.
I found all the images on Google; there is no copyright on images generated with AI, so you don't need permission.
The notebook "download normalisation images" asks for a GitHub username and password, which cannot be passed to the git clone URL since password authentication has been discontinued. Also, it should not be required at all anyway.
Would you mind explaining the SD 1.4 model file correlation?
Like, why did you add bridges and stuff to the 1500 reference images of persons when the AI doesn't know that those are bridges? Or did you just hope that it might mix the normal person output with bridges and stuff?
Just asking to better understand what I'd have to put in the original model file before training to enhance the outcome.
Thank you!!
I just thought that a little more varied images would allow for a better outcome, simple as that.
On my Vast.ai machine, the Google Drive trick didn't work; my access was denied... But surprisingly, the "normal" download only took a minute or so (I used "download as a zip"; the simple download returned an error).
Hello K! I have some trouble using your regularization images on RunPod, as the regularization cell basically doesn't clone anything as soon as I put your repo inside. The training cell doesn't do anything either. I checked for any mistake on my side, but everything seems correct. For now, I switched back to the djbielejeski repo. Thanks for any help.
Check my description for the right command
@@Aitrepreneur Yes, thx, I missed this one, sorry!
Nice tempo!