I'm getting an out of memory error for the Booru Tag Manager thing when I load the image folder. Not sure why, considering I have more than enough VRAM for it. I was really hoping to use it but can't find a way to get it to work properly, and Google has nothing.
EDIT: I also later found that no matter what I do, I cannot train with Dreambooth. I had used Kohya for LoRAs in the past and was hoping to try the model/merge combo you did with the neon. I tried to just do a LoRA and it worked just fine. I'm not sure if it's my VRAM or not, since the error when trying to use Dreambooth says it's out of memory with 7.3/8 GB already allocated. At first I thought it was because pruned-SD-1.5 was about that size and was taking all the memory, but the LoRA had no issue. So either there is some minimum VRAM needed to make a Dreambooth model, or it's something else. If anyone has any insights or ideas, feel free to point them out to me. Thanks.
I had a 3060 Ti 8GB and could not train, so I went out and bought the regular 3060 12GB for around $280, and it's training no problem. We're just living in a time where minimum VRAM requirements are evolving.
Thank you Olivio for this tutorial, as well as for all your others. Great help. When I run my LoRA models in Automatic1111, I get "ValueError: not enough values to unpack (expected 2, got 1)". How can I fix that? Thanks for your help.
Is the step of merging the models also necessary with a LoRA? I got confused, because I thought you were going to create a LoRA but you ended up creating a checkpoint.
Great video! But I prefer to train a Textual Inversion for characters (a specific person) in Automatic1111: it's way faster and simpler, I get the same result as with a LoRA, and it takes less HDD space and only 2 tokens in the prompt!
Thank you for the video! You mentioned you could do one with an online server, please, pleeeeeeease would you be so kind as to show us how to train a LORA step by step with RunPod ? 😭😭😭
I don't understand it. I've already tried several times, but there are always problems with CUDA, and if it's not CUDA then it's Torch... I don't know what to do anymore. I have already uninstalled everything and installed python-3.10.11-amd64 and cuda_11.8.0_522.06_windows. CMD writes: Error loading "F:\Lora Train\kohya_ss\venv\lib\site-packages\torch\lib\cudnn_adv_infer64_8.dll" or one of its dependencies
#### Link from my Video ####
Join my Discord: discord.gg/XKAk7GUzAW
Bulk Resize: bulkresizephotos.com/en
Kohya-ss Install: github.com/bmaltais/kohya_ss
Booru Dataset Tag Manager: github.com/starik222/BooruDatasetTagManager/releases/tag/v1.6.3
SD 1.5 Training Model: huggingface.co/runwayml/stable-diffusion-v1-5/tree/main
Please make a tutorial on fine-tuning in Kohya..
Great video, but aren't HyperNetworks also an option? Is there any reason why we'd use LoRAs over HyperNetworks? So far I've gotten better results training HyperNetworks, but admittedly LoRAs had only just arrived on the scene when I was experimenting.
Every time I paste the command I get "fatal: could not create leading directories of 'kohya_ss.\setup.bat': Invalid argument". Can anyone help?
Do you know how feasible it is to install Stable Diffusion on a Mac, for example a Mac Mini M2 Pro? Will it work by running Windows in a virtual machine, or would it be better via Boot Camp? Or is there a native Mac version?
I like this guy, he's like a nerdy hugging bear man you wanna hold and thank for making such a cool, helpful channel.
Best comment of the day 🥇
Sometimes i think he is AI
"Hugging face guy" lol
Down bad
Seems like a sweet guy
Dude I watch all your videos even if I already know what it's about just to bump your numbers. Same reason for this comment. You're killing it.
lets goooooo !!!!
Of everything I learned, you blew my mind with the f2 trick
Your guide on LORA and Checkpoint Model Training is a game-changer. The merging trick you shared is pure gold. Subscribed! 🙌
All your videos, and especially this, are invaluable to the AI community. Foremost the beginners, but also the enthusiasts that are kept up to date with the rapidly changing and improving technology. And explicitly worth mentioning: Love your enunciation! Your well-phrased, concise use of language and well prepared scripting makes your videos efficient and a pleasure to watch! Thank you so much for your efforts and contribution!
6:33 Lora and Models
17:50 Lora Training: files and folders
18:53 Kohya SS: the software to train the models
tnx
The tagging changes a fair bit depending on whether you're trying to train a topic, a style, or a specific person, and it would be nice to show the differences there. I saw in another video they explained how you should tag everything you DON'T want to be part of your LoRA unless you are training a single person or thing, and I would like to see a more in-depth dive into tagging and its effects.
The tagging strategy is pretty confusing. I've also seen tutorials that say you should tag only the things you DO want to be part of your LoRA/model, and some tutorials say not to use tags at all and let the AI figure it all out. And should you use a unique "keyword" tag so the training is bound to that particular "keyword" for use in prompts? I've tried the different approaches but didn't get any conclusive results. They were all pretty bad and didn't work as expected. 🤨
A video from Olivio covering just the tagging aspect with a demo for each approach would be awesome.
@@Beltalowda55 Captions are what a model uses to try to predict what the picture should look like during training. The difference between the predicted and the actual image is used to adjust the model in the right direction. So it should be obvious that you want to describe everything that is not part of the concept you're trying to learn, to help the model make a correct prediction and narrow the direction of the learning. The keyword is added so the model associates the concept with that keyword, which helps preserve sanity.
It should be noted, though, that the training process does not actually differentiate what you're trying to learn in any way. All it does is try to match a predicted image with the actual image. That means it can associate any other tag, or multiple tags, with your concept in unpredictable ways, basically splitting the concept between them. To mitigate that, no tag except the keyword should be used too frequently (in my experience 30-50% is the limit)
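The "don't use any tag too frequently" rule above is easy to check mechanically. Here is a minimal sketch, assuming kohya-style captions (one comma-separated `.txt` file per image); the function name, the `neonstyle` keyword, and the 50% cutoff are illustrative, not from the video:

```python
from collections import Counter
from pathlib import Path

def tag_frequencies(caption_dir, keyword=None, limit=0.5):
    """Count how often each tag appears across kohya-style .txt
    caption files, and flag tags (other than the trigger keyword)
    used in more than `limit` of the images."""
    files = list(Path(caption_dir).glob("*.txt"))
    counts = Counter()
    for f in files:
        # each tag counts at most once per image
        tags = {t.strip() for t in f.read_text().split(",") if t.strip()}
        counts.update(tags)
    total = len(files)
    flagged = {t: c / total for t, c in counts.items()
               if t != keyword and c / total > limit}
    return counts, flagged
```

Run it over your caption folder before training; anything in `flagged` is a candidate to vary or remove so the concept doesn't get split across tags.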
I don't even bother tagging anymore when training a person.
@@fritt_wastaken I wish you had some videos up to talk about this. I think I understand what you're saying, but some of it is a little fuzzy. Thank you for the info nonetheless.
A really helpful and informative video. In addition, and I find this particularly important, your pronunciation is also really superb. For me, as a non-native speaker, this is very important. In many videos, people just mumble to themselves, or you cannot understand them well for technical reasons. With your videos, for the first time, I've found someone I can understand excellently, crystal clear. 😊
Thanks!
Thank you very much 🥰👍
I may have missed the Lora training, I followed the Kohya installation and the checkpoint training. Thanks for all the excellent content, Olivio!
I am so grateful for your detailed tutorials, step by step instructions. A big thank you.
Thanks Olivio, I was just thinking I needed a guide like this after getting tired of google colab. This will be invaluable - your're a super star!
Thank you very much. Your detailed introduction is very rare, and many of the details have been very helpful to me.
You are a Rock Star... when it comes to Ai Images... Keep Up the amazing videos that you do man... I for one, do appreciate your vids!!!😊
This is the most comprehensive model training tutorial I've found on RUclips. Thank you for making this video!
24:08 Just like when you said 512x512 is a thing of the past, the same goes for the folder structure: it is not necessary to create 3 folders, only the image folder and, inside it, the folder with your dataset. For model and log you can leave the main "Neon" folder, and the files will be saved there with no problems. I discovered that by accident, and there have been no issues since.
One of your best videos so far!!! And currently my favorite RUclipsr for SD.
Greetings to Vienna from Stuttgart, Germany.
Sascha
Thanks for the F2 rename files trick... I didn't know about it, you just saved me hours of renaming 😄
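For anyone who prefers a script to Explorer's F2 trick: a small sketch that gives every image in a folder a uniform numbered name (the `neon` base name is just an example, and this assumes no file already uses one of the target names):

```python
from pathlib import Path

def bulk_rename(folder, base="neon", exts=(".jpg", ".png")):
    """Rename every image in `folder` to base_001.ext, base_002.ext, ...
    (roughly what the Windows Explorer F2 trick does)."""
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in exts)
    for i, p in enumerate(images, start=1):
        p.rename(p.with_name(f"{base}_{i:03d}{p.suffix.lower()}"))
    return len(images)
```

Something like `bulk_rename(r"D:\training\img", base="neon")` would produce neon_001, neon_002, and so on, keeping each file's extension.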
You are my hero today! ❤
I was searching for exactly this topic weekly for the last 2 months 😂 Last time was this morning. Not sure if I should go to sleep now or give it a try 😅
Go to sleep happy if you're tired. You'll be fresh the next day to daydream.
The trouble with all these tutorials is that when I spend an entire day following them, the software I see never seems to match the software in the video. When I reach an error screen, I never know why, and I have no idea what to do about it. It's quite daunting for an artist who thought the only things he would ever have to master were a pencil and a brush. :(
@@ScaryStoriesNYC I absolutely understand that. Software is developing so fast that you have to make tutorials modular, so you can swap out the parts whenever something changes in the software.
Excellent breakdown on understanding how to train Loras, thank you Olivio! I'm currently training with Fluxgym in Pinokio, makes it super easy.
This IS Masterful! Thanking YOU
This video helped me understand a lot for another project I'm working on, using LoRAs to beef up and nudge a language model.
Fantastic job Olivio! I really appreciate the work you do!
Lol I was just looking for an updated LoRA guide, great timing thank you!
Wow wow wow what a man. I was waiting for this video
Hope you enjoy it and it helps you a lot :)
@@OlivioSarikas thx oli
Super cool! Please continue covering this topic for us in your next videos; I'm waiting for them.
You take your time, talk with no rush but also go straight to the point. Thanks a lot, really helping me now that most tutorials are outdated
Agreed, he gets the pace just right. Not easy to do, some people take too much time with no benefit, some too fast and hard to follow. The balance here is just right 👌
I've been waiting on a video like this for ages! Thank you so much
That tag manager is really cool, thanks for that.
Thank you for guiding me with this excellent video!
Really love your content, thank you for taking the time to produce and share it!
So cool, thanks for the indepth explanation 👍
nice ive been waiting for this video !
Took me forever to record this ;)
Hello friend, is there going to be anything new on LoRA training software for people? I haven't heard anything in a long time; maybe there's something new software-wise?
Thank you very very much! This lecture has answered a lot of my questions.
Thanks for this informative video. I have been trying to figure this out for awhile, so this was right on time
Nice video as usual. You mentioned in the video that you would cover both Dreambooth & Dreambooth LoRA; however, no Dreambooth LoRA was covered. I am not sure whether Dreambooth & Dreambooth LoRA use the same process & pretrained model or not?
For art painting style training using Dreambooth LoRA, could you please suggest an appropriate pretrained model and a minimum total step count for each epoch? I tried 100 steps for a sample of 20 images (2000 in total), but when I tested with the text prompts used in training, the results were still far from the original images. I also tried LyCORIS/LoCon, but with no success.
I would very much appreciate it if you could demonstrate how to train a painting style, like Van Gogh's (though it is already available in some models), showing that prompts used in the training process can reproduce the images used in training.
Dreambooth doesn't make LORAs. It trains a model, then you can extract a LORA from that by comparing the original base model and the trained model. You can see the tabs for that in the video when he's looking over the Dreambooth GUI.
Thank you for this awesome guide! Anything about regularization images would also be fantastic.
Great job Daddy :) I would like to thank you as someone who can explain such a difficult to understand subject so well.
Good evening Olivio,
So I just followed your guide to create my first LoRA, but I find myself with 22 hours of training!
I must have made a mistake somewhere ;-) However, I have an RTX 4070 Ti and a Core i9-13900K.
120 photos of the same model in high quality (FHD); half are faces and the other half the full-length model.
I chose 10 steps; I think my problem comes from there. I'm still going to let it finish, just out of curiosity.
Thank you for the guide which will have nevertheless been very useful to me.
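For context on run times like the 22 hours above: kohya's total step count is roughly images × repeats × epochs ÷ batch size, so long trainings usually come from that multiplication rather than the hardware. A quick sanity check (120 images and 10 repeats are from this comment; the epoch count, batch size, and seconds-per-step are assumptions for illustration):

```python
def total_steps(images, repeats, epochs=1, batch_size=1):
    """Rough kohya step count: each image is seen `repeats`
    times per epoch, and one batch is one optimizer step."""
    return images * repeats * epochs // batch_size

def estimated_hours(steps, sec_per_step=1.5):
    """Wall-clock estimate at an assumed seconds-per-step rate."""
    return steps * sec_per_step / 3600
```

With 120 images, 10 repeats, and say 10 epochs, that's 12,000 steps; at an assumed 1.5 s/step that's already around 5 hours, and higher resolutions or more epochs scale it linearly. Dropping the repeats or epoch count is the usual fix.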
Lovely vid, keep up the good work!
Would love to see something like this but from the perspective of objects, not portraits.
Same
Danke! :)
Thanks Olivio for your many great explanation videos and tutorials and that you made the effort to really go through every step of the setup and first usage of kohya.
I needed exactly this to get to a starting point of the mysterious journey of training my own models/loras and now I can finally experiment on my own.
Thanks for the video Olivio! I got a runtime error when trying to train a LoRA, and this is how I (think) I fixed it. When installing Kohya, Olivio said the question prompts are correctly pre-selected. This wasn't the case for me. The question prompt that has a selection for fp16 had to be selected during installation; it seemed that otherwise Kohya was trying to use my CPU instead of my GPU. To fix my error, I deleted the Kohya install folder, reinstalled Kohya, uninstalled and reinstalled torch when prompted, and selected fp16 when prompted. I started training a LoRA before writing this post and it's still working, so if you have a runtime error like I did, maybe try this. Hope it helps!
Thank you so much! I wish this was specified in the video.
Did you have to restart the whole process?
A little tip: when installing Python, make sure to check the box that adds pip to the environment variables; for some reason mine did not by default.
Mine also. I just uninstalled Python and reinstalled it via the Microsoft Store. That installs pip correctly.
how would I know if I installed it without checking the box?
@@plattepus Try installing and running SD, and watch for the error in the console. It will give you a pretty noticeable, verbose error saying pip is not installed. If so, just run the Python installer again and make sure to tick the box to install pip in the menu. Hope that helps.
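The same check can be done from Python itself with the standard library; `ensurepip` ships with the official python.org installer, so this is roughly equivalent to ticking the pip box (a sketch, not the installer's actual mechanism):

```python
import importlib.util

def pip_available():
    """True if the running interpreter can import pip."""
    return importlib.util.find_spec("pip") is not None

if not pip_available():
    # Bootstrap pip into this interpreter without
    # re-running the Windows installer.
    import ensurepip
    ensurepip.bootstrap()
```

From a terminal, `python -m pip --version` tells you the same thing, and `python -m ensurepip --upgrade` repairs it.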
@@addermoth Appreciate it, man. Also, I'm only seeing "gui.bat" in the "kohya_ss" folder, and not "user gui.bat", but it still seems to launch fine. And I can't tell if it installed cuDNN; I put the cudnn_windows folder into kohya_ss and ran the command to install it, but it never said "done" or anything.
Excellent Tutorial!!!!
Thanks a lot I get more information about training now 😊
This video was really needed. Thanks for that
THX! I learned so much! You are amazing
Thank you so much sir, I love your YT channel ! =)
Thank you :)
Mostly installation instructions; very clear, but not what I was expecting.
The Kohya page and download aren't the same as in this video, and the setup.bat file has been updated. It doesn't ask me the same questions as it asks him in the video. How did you get it to work?
What would be the most likely reason that, when I double-click the setup.bat file as mentioned at 20:30, the questions displayed are different?
can you fix?
When using the Booru manager, the right-hand side has stuff like rocks and cars; is that classed as associated with your character? I'm trying to make a comic book character. Stuff like what she's wearing or her long brown hair is associated, but what about all the random stuff in the background?
Thanks so much for this video! really appreciate it
28:22 isn't SD 1.5 already in Kohya?
Hey Olivio! I love this tutorial! Do you have any plans to update it? Have a great day!
24:15 When Olli said "The images are not in the images folder" I got really confused. Eventually I figured out that you still have to copy your source images into the new folder, otherwise it won't have anything to tag.
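That copy step is easy to forget, so it can help to script the whole layout. A sketch of the folder structure from the video (the `neon style` naming and 100 repeats are illustrative; the leading number in the subfolder name is kohya's per-image repeat count):

```python
import shutil
from pathlib import Path

def make_dataset(project_dir, source_dir, repeats=100, name="neon style"):
    """Create kohya's img/log/model folders and copy the source
    images into the numbered subfolder it trains from."""
    project = Path(project_dir)
    img_sub = project / "img" / f"{repeats}_{name}"
    for d in (img_sub, project / "log", project / "model"):
        d.mkdir(parents=True, exist_ok=True)
    copied = 0
    for p in Path(source_dir).iterdir():
        if p.suffix.lower() in (".jpg", ".jpeg", ".png"):
            shutil.copy2(p, img_sub / p.name)
            copied += 1
    return img_sub, copied
```

Something like `make_dataset(r"D:\training\neon", r"D:\raw_photos")` returns how many images actually landed in the numbered subfolder, so a zero tells you the source path was wrong before you start tagging or training.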
Me too
This is awesome, thanks. I think you did a great job on disseminating the critical information concisely, I wish more tutorials were done like this.
I have a question, how would training for an illustration style, using real photos for face reference differ, if at all?
I've watched so many of your vids that I thought I'd already subscribed to you. Until today, when I found out 😅😅 Subbed anyway
Thanks for sharing! But WD14 captioning doesn't work. Do you have any idea why?
I'm having the same issue.
My install did NOT give the option to update all, and I have a different interface from the vid, so I tried updating myself but was told it's the current version? It's not the simple navigation I see in the vid and from many folks online; everything is stacked below, not aligned landscape. Anyone know how to remedy this? I am on Windows.
It is super detailed info regardless; so thankful for this amazing master of my newly discovered favorite program.
Great, but you skipped some confusing stuff... do I choose "no distributed training" with a 4090 card? Do I choose to optimize the script with Dynamo, and if so, which Dynamo backend? Do I use DeepSpeed, etc.? I have no idea about these cryptic choices and what would be optimal... 😞 Help?
...also, you have a 4090, so when you get to that choice, pick bf16. fp16 is for older cards, like my 1070.
was waiting on this update !
24:26 weren't the images in source folder before? 18:33
18:41....👀I didn't know that trick 👍🏻 Thanks
Thank you for explaining very clearly all the concepts that apply to creating LORAs. I really appreciate the info in the video.
Thank you! Can you create a video with the Colab as well? I don't know if the old ones around still work; these things change so fast!
The problem with Kohya is that it updates often, and the guy likes to move things around and change the installation, the interface, the parameters, and how things are named.
This is the 5th tutorial I've tried to follow... all of them have something different from the Kohya I installed yesterday.
But all of them were useful to puzzle out how to make it... sorta work. I was able to make my first LoRA, with bad results, but probably because I'm not able to load the reference pictures for some reason. It gives an error and does not use them.
After choosing a version of torch to install, I'm getting "'pip' is not recognized as an internal or external command, operable program or batch file"
Edit: Okay, I did what you said at 20:54, deleted the folder, and tried again with option 1 (torch 1.12.1), and that seemed to work.
Video is super simple compared to others :)
Is there a more up-to-date version of this that isn't a year old? Some of the tabs aren't the same as in the current version, and it can be confusing.
Thank you so much for this ❤
Hi Olivio, thanks for this guide; it's a bundle of useful information. I have a question: at 30:56, when your Kohya is full screen, I read "Learning rate: 0.00001". Shouldn't the default be 0.0001? Is that correct?
Very nice one; it seems to work now on my test computer.
What I am still missing is how to create a LoRA with this tool.
Hi, thanks for your amazing videos. But I had a lot of problems. First, the setup: you didn't show us all the settings in the cmd, so I was really confused. Second, the picture tagging: it just spat out some errors, and I am really stuck at this point, so I couldn't train any models :(
Great explanation!
My hero ! many thanks
I would suggest to always select PNG over JPG since PNG will not lose image information, and the file is small enough anyway.
Correct, but be careful with "convert RGBA to RGB" and flattening transparent images onto a white background: the model can learn the white background as part of the concept, so your trigger word ends up producing blank or plain white backgrounds.
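For anyone wondering what that flattening actually does: it's plain alpha compositing against white, so fully transparent pixels become pure white pixels that the trainer treats as real image content. A minimal sketch of the per-pixel math (the function name is made up; imaging libraries like Pillow do the same blend internally):

```python
# Sketch of what "convert RGBA to RGB on a white background" does per pixel:
# standard alpha compositing, out = alpha*fg + (1 - alpha)*bg with bg = white.
# Fully transparent pixels become pure white, which is why a dataset flattened
# this way can bias a LoRA toward blank/white backgrounds.

def flatten_onto_white(rgba):
    """rgba: (r, g, b, a) with channels 0-255. Returns an (r, g, b) tuple."""
    r, g, b, a = rgba
    alpha = a / 255.0
    blend = lambda fg: round(alpha * fg + (1.0 - alpha) * 255)
    return (blend(r), blend(g), blend(b))

print(flatten_onto_white((255, 0, 0, 255)))  # opaque red stays (255, 0, 0)
print(flatten_onto_white((255, 0, 0, 0)))    # fully transparent -> (255, 255, 255)
```

If transparency is a problem, compositing onto varied or realistic backgrounds before training tends to be safer than flattening everything to one flat color.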
Hey Legend! I fell in love with your tutorials! I have a question: this is not working on my laptop or on my server :( I have an RTX 2080 in both. Can this be the reason? In my server there are 2x 2070... do you have any other method to train models for Stable Diffusion?
Hello, something is wrong in my GUI... at around 23:07 in your video there are 3 options under the Source Model tab. Mine has only "quick model pick" and "save trained model as"; "pretrained model name or path" is missing, so I can't use v1-5-pruned.safetensors...
solved
@@randomvideochamber1723 how?
When I press train, it shows this error: "Not all images folders have proper name patterns"
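In case it helps others hitting this: as far as I can tell, Kohya expects each image subfolder to be named `<repeats>_<name>`, e.g. `100_myconcept`, where the leading number is how many times each image is repeated per epoch, and this error fires when a folder doesn't match. A small sketch of a checker (the folder names are made-up examples, and Kohya's exact parsing may differ):

```python
# Hedged sketch: flag image subfolders that don't follow Kohya's
# "<repeats>_<name>" naming convention, which is the usual cause of the
# "Not all images folders have proper name patterns" error.
import re

PATTERN = re.compile(r"^\d+_\S+")   # digits, underscore, then a name

def bad_folders(folder_names):
    """Return the folder names that would trigger the naming error."""
    return [name for name in folder_names if not PATTERN.match(name)]

print(bad_folders(["100_oliviostyle", "img", "20_character"]))  # ['img']
```

So if your images live in a folder named just `img` or `photos`, renaming it to something like `100_photos` usually clears the error.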
You are awesome. The internet is getting better because of you 🎉
Great video Olivio! As I am watching this, I wonder what goes into the "log" folder and "model" folder? I didn't see what those are for. Also, do we need regularization images? If so, how many?
Please, please! Hello, I followed the steps to train, and after the training completed there were other training files in the OUTPUT folder, but why was there no LoRA model file?
I'm getting an out-of-memory error from the Booru Dataset Tag Manager when I load the image folder. Not sure why, considering I have more than enough VRAM for it. I was really hoping to use it but can't find a way to get it to work properly, and Google has nothing.
EDIT: I also later found that no matter what I do, I cannot train with Dreambooth. I had used Kohya for LoRAs in the past and was hoping to try the model/merge combo you did with the neon. I tried to just do a LoRA and it worked just fine. I'm not sure if it's my VRAM, since the error when trying to use Dreambooth is out of memory with 7.3/8GB already allocated. At first I thought it was because pruned-SD-1.5 is about that size and was taking all the memory, but the LoRA had no issue. So there is some sort of minimum VRAM needed to make a Dreambooth model, or it's something else. If anyone has any insights or ideas, feel free to point them out to me. Thanks.
I had a 3060 Ti 8GB and could not train, so I went out and bought the regular 3060 12GB for about $280 and it's training no problem. We're just living in a time where minimum VRAM requirements are evolving.
Thank you Olivio for this tutorial as well as for all your others. Great help. When I run my LORA models on my Automatic 1111, I get “ValueError: not enough values to unpack (expected 2, got 1)”. How can I fix that? Thanks for your help.
Is the "merge models" step also necessary with a LoRA? I got confused because I thought you were going to create a LoRA, but you ended up creating a checkpoint.
I thought about that too...
Yes, great vid, following everything easily. I went with LoRA too (Dreambooth LoRA tab) and now have the model, but I'm not sure what to do next. Thanks!
I was not able to train a LoRA with this tutorial. I have tried to ask for help on your Discord server but have not gotten any so far.
I get the final safetensors file of 144MB, but it cannot generate the image it was trained on; it seems like the LoRA training method is broken...
I would like to know about the online services for training these models.
Great video! But I prefer to train a Textual Inversion for characters (a specific person) in Automatic1111: it's way faster and simpler, I get the same result as a LoRA, and it takes less disk space and only 2 tokens in the prompt!
Is the likeness almost 95% of the original person when training with TI? Is it better than a LoRA for a specific character?
Do you have resources to learn more about this? I'm trying to train an anime styled OC
Do you have a video I can follow?
I would like to know more about that too if you would be kind enough to develop!
Hi, thanks for the videos. I'm new to the AI generation field, so what is the difference between a Checkpoint and a LoRA?
Thank you for the video! You mentioned you could do one with an online server, please, pleeeeeeease would you be so kind as to show us how to train a LORA step by step with RunPod ? 😭😭😭
I don't understand it. I've already tried several times, but there are always problems with CUDA, and if it's not CUDA then it's Torch... I don't know what to do anymore. I have already uninstalled everything and installed python-3.10.11-amd64 and cuda_11.8.0_522.06_windows. CMD writes: Error loading "F:\Lora Train\kohya_ss\venv\lib\site-packages\torch\lib\cudnn_adv_infer64_8.dll" or one of its dependencies
Thank you for explaining
Just once I would like one of these tutorials to actually work and properly correspond to what I see when following the steps.
Great stuff, thanks
Can this be done with furniture? When I make a kitchen, the handles are always all over the place.