The reason "l4ura" works better then "laura" ist that the instance prompt should be a unique string. It is my understanding that the model already has an "idea" of "laura" since it has propably been trained with 100.000s of pictures of women (or other "things") called laura or associated with the name laura and mixes those in. So to achieve a highest possible likeness try using a prompt which is unlikely to have been used somewhere else in the dataset. Even "l4ura" might not be quite obscure enough, though i am sure it already works much better. Maybe also add your modified last name in or something more uncommon. This could be special enough to not get mixed up with things the model already "knows". Maybe it does not make a huge difference anymore. But it cant hurt, I guess. :) Anyway: This is the best Lora-Tutorial I have come across so far and I am only halfway through. Thanks for explaining so thorougly. It helps a lot. :)
With my limited knowledge this is also my understanding. I usually try to add not just numbers but special characters so the AI can completely differentiate from anything it may have been trained on.
What if I want to make use of the idea of what to model already knows? Like further fine tune the likeness of a celerity. Does this also need a unique identifier?
It depends on what you want to do. If the model has a lot of data for "laura", it will take a lot of time to train over that, but at the same time, you could benefit from the training as "laura", since the data may contain a lot that can mix with your data. You might end up with a lot of different poses, styles and what ever the latent space have learned from the world "laura". Training "l4ura" will just take less time since that word is not represented so much in the latent space, but you may get less variety and ultimately if you train long enough, the "l4ura" word will mostly produce your input pictures.
Just want to thank you for the simplest, clear and easy to understand explanation on how to create a LoRa model "locally" using Kohya! I was finally, after previous attempts, to create a great LoRa Model. It took 3 days on my limited video RAM laptop and I don't know why it created 30 epochs when I set the settings at 15, but it was successful.
Wow, this video is absolutely amazing! I am so impressed with how thoroughly it explains all the topics related to Lora training in stable diffusion. The content is incredibly informative and has definitely made me more aware of the subject. However, since it's quite a long video, I would suggest breaking it down into chapters. This way, viewers can easily navigate through different sections and find specific information they're interested in. Overall, fantastic job on creating such an informative and comprehensive video!
🎯 Key Takeaways for quick navigation: 00:00 🧠 Fine-tuning means generating a new model from an already trained model to improve performance or adapt it for a specific task. 01:09 📊 Understanding the main parameters in a neural network, such as in Stable Diffusion, helps in making informed decisions for model training. 03:00 🖼️ For training a Laura model, you need training images and regularization images. Training images represent the subject, and regularization images represent the class. 05:04 📝 Captioning images is important for training; you can use Koya to create image descriptions. 08:57 🧞 Generating regularization images, even in large numbers, significantly improves model performance and diversity. 15:09 ⚙️ Setting up Koya's parameters, including the source model, training data folders, and prompts, is crucial before training a model. 19:30 🧰 Preparing training data in Koya creates the necessary structure for training, including folders for images, logs, and model output. 21:18 🎯 Understanding neural networks and batch training helps optimize the training process and improve model accuracy. 25:12 🔄 An epoch in neural network training is the process of going through all batches of data. Iterations refer to one update of the model's parameters within an epoch. 25:51 📉 The goal of training a neural network is to minimize the loss function, which represents the model's performance. This involves finding the lowest point in the function, ideally the global minimum. 26:44 🚶♂️ The size of steps taken toward the minimum during training is determined by the learning rate. It's essential to strike a balance between a high learning rate for faster convergence and a low learning rate for precision. 27:52 💼 Training parameters include batch size (number of data batches), the number of epochs, learning rate, mixed precision (for speed and memory optimization), and more. 30:07 📈 Learning rate schedulers adjust the learning rate during training, with options like cosine scheduling or constant rate. The choice depends on whether you're fine-tuning or training from scratch. 31:11 💡 The max resolution setting should match the image resolution used for training in models like KoYaGAN. It's important for generating high-quality images. 32:19 🧩 To use a trained model with Stable Diffusion, you'll need to link the model file generated during training to the Stable Diffusion web UI, allowing you to generate images with specific prompts.
Thank you for the tutorial, and especially for doing it in English! Your voice is easy on the ears. I’ve been watching a lot of Lora tutorials for the past few weeks, and I feel like your video has been most effective. Subbed! Oh, and thank you for the illustrations!
Well done! I'm only 2 weeks in from knowing nothing and have learned a thing or two from this channel. Subbed. Thank you Laura. Very good pace and clear explanations.
I trained a few times for fun already and this tutorial is seriously great! A lot of things that I were not sure about are explained in more detail so I can better wrap my head around it. I also like the presentation and clear explanations. While I like me some memes I am really glad that you don't put silly stuff in your video and keep it very focused.
Definitely one of the best videos I’ve seen so far in regards to explaining neural networks and how to train models for stable diffusion. Thank you so much!
I was exited to learn the right button function to generate indefinitely and than investigate in a statistical (“inverse AI”) way the differences in output for a category prompt like “modern design” between the different models. HIGHLY POLITICAL !!
Thanks Laura and L4ura for your Lora explanations. I have tried a few times and got mid results. Thanks to your efforts I understand a lot more now. Best Wishes and much love. I'm off to play with my newly attained knowledge. Keep up the good work!
I have been watching 3 videos together to learn how to make a LoRa. In the end I liked your video best. Its not too technical, while still giving a lot of information. I have learned a lot and just finished making my first LoRA and its very exciting!
Little Summary of différents Steps. And Thank you very much, i try a lot of vidéo and your are reallygreat 05:00 Step 1 Blip Captioning 06:27 Step 2 Rename Caption for better Results 09:11 Step 3 Folder Préparation / Dataset Preparation 13:10 Step 4 Source Model 16:09 Step 5 Tool -> DataSet Preparation 20:30 Step 6 Check Info in Training Folder Tabs, change model output name 27:50 Step 7 Training Parameters Tabs
Your videos are amazing! You explain everything so clearly and have been so helpful to me learning how to use StableDiffusion. I plan to make a LoRA for my wife and surprise her with some (what I hope will be) awesome images!
Exactly what I have been looking for! This video is excellent: it makes so many thing every so clear. Thank you Laura!! ❤❤❤ I will finally endeavour to make my own LoRAs....
This video is so valuable. Thank you for being brief but thorough, and using plain language. I really appreciate it. I will probably be watching it multiple times as my go-to for LoRAs. Even though my interface is different than yours by the time my comment is made, your explanations made it easy for me to follow anyways and find what I need.
I realize you did this a month ago, but so much of this is out of date already, wow. Time fly's in the Ai tech world. I hope you will take the time to update this video, you did a fantastic job. You obviously have more patients than I do to read all the garbally goop stuff. I just want something simple, and I hope you delivered, it's still training on the new SDXL model so not 20 min. Mine says 2 hours. We will see if it worked. Thanks Laura for your hard work.
Laura, as i could see, the captions that you created didnt get used in the training. The console showed the error message when you started training. This is because you must set the file extension to ".txt". This setting is in kohya, Lora, Training, Parameters, Basic, "Caption Extension". Set it to ".txt"
Hey Laura :) I'm so happy to stumbled upon your channel. Your explanations and energy are beautiful as you are. I'm in love 🥰Take care and keep shining bright
You are the best. The only who really explain the things (and probably the only who really knows on RUclips). Can you make a video explaining how the IA reconstruct the image from the noise? I mean, something like, Noise x Pixels relationship? Thanks for everything.
The small white page next to picking the model in the lora menu is to find and pick your custom model, edit: it's great you talk about regularization images not many youtubers talk about how beneficial they are when creating loras, and some suggest to not use them which is fair but the loras are much better when they are used
@@jr-wg6os I tried to use reg images yesterday, but somehow I'm unable to prompt the subject ... I get only very vague similarities to the face I tried to train. Any idea what could cause this?
@@equilibrium964 I just had the same problem... trained on myself and the images are nothing like me, I have to specifically point out my skin colour, hair colour and all sorts of basica details which I didn't put in captions (so I shoulldn't have to prompt them). Only after doing that does it start to very slightly resemble me. Doing it without the reg images it instantly looks 100x more like me... Not sure what I'm missing. If you work it out let me know please!
@@equilibrium964 sometimes it's also the model your using to generate the images I've noticed "REALISTIC cartoon/anime" models are much more flexible in terms of generating likenesses but worse at things like skin tones and details, but it also matters how clear the pictures and so on, for instance I noticed much better results with images closer to 512,512 or 768,768 then if inuse 4k images even though you can enable buckets I seem to get better results with more regular quality images then high Def ones.
Not sure why, but my LoRa didn't work with regularization images. Without reg it worked. With time hopefully I'll figure out how to improve its versatility.
If you want better results you can set your Network Rank also to 50-256 this will also make your model size bigger but will give you more accurate results to you or help. And Thanks your video was also a good help to training models.
For me, training without regulation img make largely better results, also without captioning, I did a training with 100 regularization images in 34 passes and 68 reference images in 5 passes, and I ended up with a model that was completely off. I did a second test without regularization images and without captions, and in 13 minutes I got better results than a 2-hour training session. This is the third time that regularization images have just messed up my training. Maybe I misunderstood something, but it seems like they are just being used as training data, which is not the point, Anyway, thanks for the tutorial, it's the most comprehensive one on YT
Great video, I was waiting for this after your last Kohya video... A small notes, think at the end of the video the training parameters section (27:55 ish) are not from Dreambooth LoRA, rather only from Dreambooth, and that's why there is no settings appearing for Network Rank and Network Alpha... also the Caption Extension setting is empty and this would lead the training not loading captions.. might want to fix that, even on LoRA you still need to enter TXT or whatever extension you are using (in fact in your terminal it says "No caption found for your 25 images") ... also I think bf16 works with Nvidia 30 and 40 series well... I would also be interested in your thoughts on what to set for Class Prompt when training an artistic style, for example a style of photography from particular artist.
Hi, for styles I would guess that you can use "style" as the class prompt because when you prompt for a specific style you can use "in the style of " or " style" so now you would prompt for " style".
If you had a large batch of images all taken from the same session, such as your 'red hoodie' set, you could use a text modification program such as sed or AWK to do a bulk update of keywords of things common to all pictures. For example, you could add in 'earrings' to all pictures.
thanks you so much for the video, i created my own lora with base model SD 1.5 successfully, however there are many sampling method, DPM++ 2M, 3M, euler, heun...etc, how do i know which sampler work best with my lora?
I would suggest to look at what other creators used to generate their models - in CivitAI, you can look for what model is similar to what you would like to generate. Hope it helps
Amazing video, thank you so much for the tutorial, I have a question in my case around 5% of the portraits I created looks like me, should I use these new "photos" and placed them in the new regularization images, or do I need to add them in the training images for getting better and accurate/results?
You still have it in the "depracated" tab. You can also add this under the Lora> Training > Folders > "training comments": trigger: xxxx swapping xxxx with the trigger word
Hi Laura, I really like your videos, it is very helpful. One question, may I know how many GB of VRAM you have to run this training? I only have 4GB currently and intend to buy a new RTX. Hence, the question. Thank you!
Thanks very helpful tutorial. Although I'd advise you to install the extension WD14 tagger, it's enormously better than CLIP (or Danbooru) to generate helpful vocabulary to help describe the dataset images.
Hi Laura, Thanks a million for your efforts and your tutorial, I watched so many tricks thanks to your video! I was wondering I could I create a specific part of a body I would like to focus on man's chest (muscles and hair) and I was wondering if I need to takejust chest training images or the full body pictures of a man. And what about regularization pics? Should I take just chest or face or the full body? I'm a bit confused. Thanks a million!
Hi, I'm pretty new to generative stuff and I have difficulties to get a good framing. Could LoRa be a way to teach SD some filmmaking vocabulary? If so, how would you set a training? Many trainings one after another, each with a bunch of pictures with a specific shot, then another specific shot, and another, etc. Or goods captions could be enough for one big training?
Hello Laura, I followed your excellent presentation step by step, but I cannot obtain models under the "safetensors" extension! In the "Models" folder there is only one .json file... Do you have a solution to offer me? Thanking you.
i try making training whitout a human face or body so when i came to blip step it wont work normal how should i make it (im trying to make a instrument model)
Hi, thanks for the content. Are you Italian? I ask you, why if I use 1024x512 diffusion it gives me two people next to me and not one? If I want a landscape image, why two people?
Thanks for sharing this useful video, I am curious about your GPU setup (VRAM, n gpus, model) since it seems pretty fast, I tried searching in older videos but I could not find that mentioned. Since my setup with two Tesla T4 (16 gb each) is much slower than yours, and I want to understand if that is a problem of configuration on my side. Thanks in advance.
I think the result might be worse actually. It would be better if use different backgrounds, a subject/object from different perspectives in different places - key is to describe the surrounding. You could give a go and let us know ;)
@@LaCarnevali I asked chatgpt and it gave me the answer, that about 70% shall be with diffrent bg and about 30% should be transparent. this 100% are then 80%-90% because you will need 10%-20% of controll images (wrong images). ... i havent tried yet, but i will tell you about the result.
Great video! I have a qeuestion, if I wanted to have the lora be of a person in different poses (headshot, stadning up full body shown, side view seated etc. basically any positon) How would I accomplish this? I want my modal to have the same proportions in different generations
I've been struggling to get good results with Dadaptation and the more recent prodigy (very hard to find info on the latter). Could you please someday have a look at those training optimizers that are supposed to help big time the calibration of learning rates ? It's hard to configure (at least for me so far) and the training time compared to a basic LoRA seems insane (but I'm sure I got a few things wrong thus my asking). Thanks for your lovely tutorials. You really explain all that nicely. Cheers.
Hello, a big thank you for your great video. The installation went well. However, how can I change the model? I only have a dropdown menu with several choices. There's no option to reference another model. Can you help me?
The reason "l4ura" works better then "laura" ist that the instance prompt should be a unique string. It is my understanding that the model already has an "idea" of "laura" since it has propably been trained with 100.000s of pictures of women (or other "things") called laura or associated with the name laura and mixes those in. So to achieve a highest possible likeness try using a prompt which is unlikely to have been used somewhere else in the dataset. Even "l4ura" might not be quite obscure enough, though i am sure it already works much better. Maybe also add your modified last name in or something more uncommon. This could be special enough to not get mixed up with things the model already "knows". Maybe it does not make a huge difference anymore. But it cant hurt, I guess. :)
Anyway: this is the best LoRA tutorial I have come across so far, and I am only halfway through. Thanks for explaining so thoroughly. It helps a lot. :)
Thank you so much! It actually makes more sense! Pinned ;)
Also try a dataset of faces in PNG format with a transparent background
With my limited knowledge this is also my understanding. I usually try to add not just numbers but special characters so the AI can completely differentiate from anything it may have been trained on.
What if I want to make use of what the model already knows? Like further fine-tuning the likeness of a celebrity. Does this also need a unique identifier?
It depends on what you want to do. If the model has a lot of data for "laura", it will take a lot of time to train over that, but at the same time you could benefit from training as "laura", since the existing data may mix well with yours. You might end up with a lot of different poses, styles and whatever else the latent space has learned from the word "laura". Training "l4ura" will take less time, since that word is barely represented in the latent space, but you may get less variety, and ultimately, if you train long enough, the word "l4ura" will mostly reproduce your input pictures.
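As an aside, this is also how the unique token ends up in Kohya's training setup: the Dataset Preparation tool bakes the instance and class prompts into the image folder name as <repeats>_<instance prompt> <class prompt>. A minimal sketch of that layout, with an invented repeat count and project path, using the "l4ura"/"woman" example from this thread:

```python
from pathlib import Path

# Hypothetical project root; Kohya's Dataset Preparation tool creates this
# layout for you, but it can also be built by hand.
root = Path("lora_training/l4ura")

# Folder name format: <repeats>_<instance prompt> <class prompt>.
# "l4ura" is the unique instance token; "woman" is the class.
(root / "img" / "40_l4ura woman").mkdir(parents=True, exist_ok=True)
(root / "reg" / "1_woman").mkdir(parents=True, exist_ok=True)  # regularization images
(root / "log").mkdir(parents=True, exist_ok=True)
(root / "model").mkdir(parents=True, exist_ok=True)  # trained .safetensors output
```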
Thank you Laura for the LoRA tips
Lora by Laura
😂😂
Lora of Laura by Laura
About time someone made that joke… 10 months ago
The little extra explanations that pros forget all the time, leaving me confused, are not missing here. Thanks for going into so much detail.
Clearest and most thorough explanation of LoRAs and their creation on YT. Thanks so much.
Just want to thank you for the simplest, clearest and easiest-to-understand explanation of how to create a LoRA model "locally" using Kohya! I was finally able, after previous attempts, to create a great LoRA model. It took 3 days on my limited-video-RAM laptop, and I don't know why it created 30 epochs when I set it to 15, but it was successful.
Wow, this video is absolutely amazing! I am so impressed with how thoroughly it explains all the topics related to Lora training in stable diffusion. The content is incredibly informative and has definitely made me more aware of the subject. However, since it's quite a long video, I would suggest breaking it down into chapters. This way, viewers can easily navigate through different sections and find specific information they're interested in. Overall, fantastic job on creating such an informative and comprehensive video!
🎯 Key Takeaways for quick navigation:
00:00 🧠 Fine-tuning means generating a new model from an already trained model to improve performance or adapt it for a specific task.
01:09 📊 Understanding the main parameters in a neural network, such as in Stable Diffusion, helps in making informed decisions for model training.
03:00 🖼️ For training a LoRA model, you need training images and regularization images. Training images represent the subject, and regularization images represent the class.
05:04 📝 Captioning images is important for training; you can use Kohya to create image descriptions.
08:57 🧞 Generating regularization images, even in large numbers, significantly improves model performance and diversity.
15:09 ⚙️ Setting up Kohya's parameters, including the source model, training data folders, and prompts, is crucial before training a model.
19:30 🧰 Preparing training data in Kohya creates the necessary structure for training, including folders for images, logs, and model output.
21:18 🎯 Understanding neural networks and batch training helps optimize the training process and improve model accuracy.
25:12 🔄 An epoch in neural network training is the process of going through all batches of data. Iterations refer to one update of the model's parameters within an epoch.
25:51 📉 The goal of training a neural network is to minimize the loss function, which represents the model's performance. This involves finding the lowest point in the function, ideally the global minimum.
26:44 🚶♂️ The size of the steps taken toward the minimum during training is determined by the learning rate. It's essential to strike a balance between a high learning rate for faster convergence and a low learning rate for precision (see the sketch after this list).
27:52 💼 Training parameters include batch size (number of data batches), the number of epochs, learning rate, mixed precision (for speed and memory optimization), and more.
30:07 📈 Learning rate schedulers adjust the learning rate during training, with options like cosine scheduling or constant rate. The choice depends on whether you're fine-tuning or training from scratch.
31:11 💡 The max resolution setting in Kohya should match the image resolution used for training. It's important for generating high-quality images.
32:19 🧩 To use a trained model with Stable Diffusion, you'll need to link the model file generated during training to the Stable Diffusion web UI, allowing you to generate images with specific prompts.
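To make the learning-rate takeaway above (26:44) concrete, here is a tiny self-contained sketch of gradient descent on a toy one-parameter loss; the function and the three rates are invented for demonstration:

```python
# Toy example: minimize loss(w) = (w - 3)^2, whose global minimum is at w = 3.
# Each update below is one "iteration"; an epoch would be one pass over all
# batches of real training data.
def gradient(w):
    return 2 * (w - 3)  # derivative of (w - 3)^2

for lr in (0.01, 0.1, 1.1):  # too small, reasonable, too large
    w = 0.0
    for _ in range(50):
        w -= lr * gradient(w)  # step size scales with the learning rate
    print(f"lr={lr}: w after 50 steps = {w:.4f}")
# lr=0.01 crawls toward 3, lr=0.1 converges, lr=1.1 overshoots and diverges.
```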
Thank you for the tutorial, and especially for doing it in English! Your voice is easy on the ears. I’ve been watching a lot of Lora tutorials for the past few weeks, and I feel like your video has been most effective. Subbed! Oh, and thank you for the illustrations!
Well done! I'm only 2 weeks in from knowing nothing and have learned a thing or two from this channel. Subbed.
Thank you Laura. Very good pace and clear explanations.
I have trained a few times for fun already, and this tutorial is seriously great! A lot of things that I was not sure about are explained in more detail, so I can better wrap my head around them. I also like the presentation and clear explanations. While I like me some memes, I am really glad that you don't put silly stuff in your video and keep it very focused.
Definitely one of the best videos I’ve seen so far in regards to explaining neural networks and how to train models for stable diffusion. Thank you so much!
Great video on LoRA training! Others I've watched are all over the place, never explain it as concisely, or leave out info. Well done!
Amazing videos, really clear and detailed, thank you Laura
I was excited to learn the right-click function to generate indefinitely and then investigate, in a statistical ("inverse AI") way, the differences in output for a category prompt like "modern design" between the different models. HIGHLY POLITICAL !!
Me too, never seen that before
Thanks for this! Loras by Laura! I've been waiting for a good tut on this and you delivered and then some!
Thanks Laura and L4ura for your Lora explanations. I have tried a few times and got mid results. Thanks to your efforts I understand a lot more now. Best Wishes and much love. I'm off to play with my newly attained knowledge. Keep up the good work!
How did it go?
I have been watching 3 videos side by side to learn how to make a LoRA. In the end I liked your video best. It's not too technical, while still giving a lot of information. I have learned a lot and just finished making my first LoRA, and it's very exciting!
best LoRA model video I found - thanks for this.
The best explanation of how learning rates work, that I have seen so far. Very useful video, thank you.
I'm just learning about LoRAs and your tutorial is absolutely the best I've seen. 👍 Keep up the good work!
Little summary of the different steps. And thank you very much; I tried a lot of videos and yours is really great.
05:00 Step 1 BLIP Captioning
06:27 Step 2 Rename Captions for Better Results
09:11 Step 3 Folder Preparation / Dataset Preparation
13:10 Step 4 Source Model
16:09 Step 5 Tool -> Dataset Preparation
20:30 Step 6 Check Info in Training Folder Tabs, Change Model Output Name
27:50 Step 7 Training Parameters Tabs
Um...no thanks....we all saw the video...your summary sucks.
Thank you! It's the first video I've watched with such a deep description of the theory
Your videos are amazing! You explain everything so clearly and have been so helpful to me learning how to use StableDiffusion. I plan to make a LoRA for my wife and surprise her with some (what I hope will be) awesome images!
Finally a tutorial that doesn't just say: now click here, do that, click there, type this... thank you lora ... laura... l4ura🎉
best tutorial for LoRA training, by far, thanks
you are so wonderful at explaining everything.
such an underrated channel, great video thanks
Exactly what I have been looking for! This video is excellent: it makes so many things ever so clear. Thank you Laura!! ❤❤❤ I will finally endeavour to make my own LoRAs....
NGL your videos about how to use and train image generation models are the best
This video is so valuable. Thank you for being brief but thorough, and using plain language. I really appreciate it. I will probably be watching it multiple times as my go-to for LoRAs. Even though my interface is different from yours by the time of my comment, your explanations made it easy for me to follow anyway and find what I need.
Would you, could you, please do this for style/artstyle training? Thank you.
Wow, I didn't expect a quick rundown with graphs. Thank you!
Fantastic video, very clear with just the right amount of detail to get me started down this path. Many thanks for sharing.
I realize you made this a month ago, but so much of it is out of date already, wow. Time flies in the AI tech world. I hope you will take the time to update this video; you did a fantastic job. You obviously have more patience than I do to read all the gobbledygook. I just want something simple, and I hope you delivered. It's still training on the new SDXL model, so not 20 min; mine says 2 hours. We will see if it worked. Thanks Laura for your hard work.
Incredible video!!! I understood almost all the theory, and that's pretty much everything. You have a new subscriber
Good photos are the best way; your tutorial plus good photos = perfect ;p
Excellent video, clearly describing the workflow !
Laura, as far as I could see, the captions that you created didn't get used in the training. The console showed an error message when you started training. This is because you must set the caption file extension to ".txt". The setting is in Kohya under LoRA > Training > Parameters > Basic > "Caption Extension". Set it to ".txt"
Hey Laura :) I'm so happy I stumbled upon your channel. Your explanations and energy are as beautiful as you are.
I'm in love 🥰Take care and keep shining bright
best video I've seen so far. You are the best!
Amazing knowledge you're sharing, thank you so much!
You are the best. The only one who really explains things (and probably the only one on YouTube who really knows). Can you make a video explaining how the AI reconstructs the image from the noise? I mean something like the noise-to-pixels relationship? Thanks for everything.
The small white page icon next to the model picker in the LoRA menu is for finding and picking your custom model. Edit: it's great that you talk about regularization images; not many YouTubers mention how beneficial they are when creating LoRAs, and some suggest not using them, which is fair, but the LoRAs are much better when they are used
My personal LoRA and my girlfriend's aren't trained with a regularization folder and look perfect. Maybe I'll try again for fun!
@SantoValentino Same here actually, but it's helped when I've had less clear images to work with
@@jr-wg6os I tried to use reg images yesterday, but somehow I'm unable to prompt the subject ... I get only very vague similarities to the face I tried to train. Any idea what could cause this?
@@equilibrium964 I just had the same problem... I trained on myself and the images are nothing like me. I have to specifically point out my skin colour, hair colour and all sorts of basic details which I didn't put in the captions (so I shouldn't have to prompt them). Only after doing that does it start to very slightly resemble me. Doing it without the reg images, it instantly looks 100x more like me... Not sure what I'm missing. If you work it out, let me know please!
@@equilibrium964 Sometimes it's also the model you're using to generate the images. I've noticed "REALISTIC cartoon/anime" models are much more flexible in terms of generating likenesses but worse at things like skin tones and details. It also matters how clear the pictures are, and so on; for instance, I noticed much better results with images closer to 512x512 or 768x768 than when I use 4K images. Even though you can enable buckets, I seem to get better results with regular-quality images than with high-def ones.
Amazing video, Laura! Thank you very much!
Thank you so much with the wonderful tutorial!
Very informative video! Thank you very much! I will really rewatch this.
Thanks so much! I've watched many videos about loras and I think this is the best explained one. Helped me a lot
That was an absolutely fantastic demo
Great tutorial. Grazie. 30:24 was my favorite part.
Not sure why, but my LoRa didn't work with regularization images. Without reg it worked. With time hopefully I'll figure out how to improve its versatility.
Thank you 🎉 Perfect overview and hands on 💪 learned a lot
Nice explanation, thanks!
I love this video. Awesome descriptions!!!
Best Kohya tutorial!
💛💛💛💛💛 my favorite ai teacher
Thanks!
Thanks, I'm glad it helped ☀️
That tip of "generate forever" was a mindblown for me. 😄
Thank you for this very good video. You have explained everything very well and clearly. I'm going to try it out right now.
Truly a first-class video; congratulations
If you want better results, you can also set your Network Rank to 50-256. This will make your model file bigger but will give you more accurate results. And thanks, your video was also a good help for training models.
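To put a number on that size/rank trade-off: a LoRA of rank r adds two matrices A (d_out x r) and B (r x d_in) to each adapted weight, so the extra parameter count, and hence the file size, grows linearly with the Network Rank. A quick back-of-the-envelope check (the layer dimensions are illustrative, not taken from any particular model):

```python
# Extra parameters for a rank-r LoRA on a single d_out x d_in weight matrix:
# A is d_out x r and B is r x d_in, so r * (d_out + d_in) added parameters.
def lora_params(d_out: int, d_in: int, rank: int) -> int:
    return rank * (d_out + d_in)

d_out = d_in = 768  # illustrative attention-projection size
for rank in (8, 50, 128, 256):
    print(f"rank {rank:>3}: {lora_params(d_out, d_in, rank):,} extra params per layer")
# The count (and the resulting .safetensors size) grows linearly with rank.
```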
For me, training without regularization images gives largely better results, also without captioning.
I did a training with 100 regularization images at 34 passes and 68 reference images at 5 passes, and I ended up with a model that was completely off. I did a second test without regularization images and without captions, and in 13 minutes I got better results than from the 2-hour training session.
This is the third time that regularization images have just messed up my training. Maybe I misunderstood something, but it seems like they are just being used as extra training data, which is not the point.
Anyway, thanks for the tutorial, it's the most comprehensive one on YT
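For what it's worth, regularization images are not meant to act as plain extra training data: in DreamBooth-style training, which Kohya's LoRA trainer follows, they feed a separate prior-preservation term in the loss, anchoring the class concept while the instance is learned. A rough conceptual sketch of that idea; the helper names are illustrative stand-ins, not Kohya's actual internals:

```python
# Conceptual sketch of DreamBooth-style prior preservation (illustrative only).
PRIOR_WEIGHT = 1.0  # Kohya exposes a similar "prior loss weight" setting

def denoise_loss(model, batch):
    # Stand-in for the usual diffusion noise-prediction MSE on one batch.
    return sum(model(x) for x in batch) / len(batch)

def training_loss(model, instance_batch, class_batch):
    instance_loss = denoise_loss(model, instance_batch)  # learn "l4ura woman"
    prior_loss = denoise_loss(model, class_batch)        # keep plain "woman" intact
    return instance_loss + PRIOR_WEIGHT * prior_loss
```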
Great video, I was waiting for this after your last Kohya video... A few small notes: I think the training parameters section at the end of the video (27:55 ish) is not from Dreambooth LoRA but from plain Dreambooth, which is why no settings appear for Network Rank and Network Alpha. Also, the Caption Extension setting is empty, which would lead to the training not loading the captions; you might want to fix that. Even for LoRA you still need to enter TXT or whatever extension you are using (in fact your terminal says "No caption found for your 25 images"). Also, I think bf16 works well with Nvidia 30 and 40 series. I would also be interested in your thoughts on what to set as the Class Prompt when training an artistic style, for example a style of photography from a particular artist.
Hi, for styles I would guess that you can use "style" as the class prompt, because when you prompt for a specific style you can use "in the style of [name]" or "[name] style", so you would then prompt for "[trigger] style".
Or simply don't use a class prompt or a keyword, so your LoRA always applies when referenced in the prompt
If you had a large batch of images all taken from the same session, such as your 'red hoodie' set, you could use a text modification program such as sed or AWK to do a bulk update of keywords of things common to all pictures. For example, you could add in 'earrings' to all pictures.
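In that spirit, here is the same bulk-update idea sketched in Python rather than sed/AWK, assuming the caption .txt files sit in the hypothetical dataset folder used earlier and "earrings" is the keyword to add:

```python
from pathlib import Path

# Append a keyword shared by the whole shoot (e.g. "earrings") to every
# caption file in the dataset folder. Folder path and keyword are assumptions.
captions_dir = Path("lora_training/l4ura/img/40_l4ura woman")
keyword = "earrings"

for caption_file in captions_dir.glob("*.txt"):
    text = caption_file.read_text(encoding="utf-8").rstrip()
    if keyword not in text:  # don't duplicate the tag on reruns
        caption_file.write_text(f"{text}, {keyword}\n", encoding="utf-8")
```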
thank you laura for all the great content 🥰
Thanks, this was helpful and easy to follow.
Sweet & cute lovely teacher 😊❤😇
This was such a helpful video! Thank you so much, subbed for more schooling.
Hi Laura, thank you! At 16:14... I don't have the "Tool" tab for LoRA; I have the "Tool" tab only under Dreambooth o_O (Mac M1). Do I need something else?
Not sure why. Anyway, I don't think you will be able to run a training on an M1... your Mac will blow up lol! Better to use Colab/RunPod
@@LaCarnevali In the end, after all the work, it crashed :( RunPod for life!
Thank you so much for the video. I created my own LoRA with base model SD 1.5 successfully; however, there are many sampling methods: DPM++ 2M, 3M, Euler, Heun... etc. How do I know which sampler works best with my LoRA?
Only one way to find out
I would suggest looking at what other creators used to generate their models; on CivitAI, you can look for a model similar to what you would like to generate. Hope it helps
good job. thanks
Do more videos!! You are great!
thank you....and keep going😀👍
All these comments and no one has said...
So, you've made a Laura Lora, Laura? 😁
Grazie per il video, molto utile :)
Amazing video, thank you so much for the tutorial. I have a question: in my case, around 5% of the portraits I created look like me. Should I take these new "photos" and place them in the regularization images, or do I need to add them to the training images to get better and more accurate results?
Regularization images should not include photos of yourself. You can use them for training, but only if they are actually good.
Instance Prompt and Text prompt fields seem to be gone from the current interface. Makes me wonder how to assign keywords now.
You still have it in the "Deprecated" tab. You can also add this under LoRA > Training > Folders > "Training comment": trigger: xxxx
swapping xxxx with the trigger word
You explain better than the creator of Kohya himself! Thank you :)
Hi Laura, I really like your videos; they are very helpful. One question: may I know how many GB of VRAM you have to run this training? I only have 4GB currently and intend to buy a new RTX, hence the question. Thank you!
Thanks, very helpful tutorial. Although I'd advise you to install the WD14 tagger extension; it's enormously better than CLIP (or Danbooru) at generating helpful vocabulary to describe the dataset images.
Hi Laura, Thanks a million for your efforts and your tutorial, I watched so many tricks thanks to your video!
I was wondering how I could create a LoRA for a specific part of the body.
I would like to focus on a man's chest (muscles and hair), and I was wondering if I need to take just chest training images or full-body pictures of a man. And what about regularization pics? Should I take just the chest, or the face, or the full body? I'm a bit confused. Thanks a million!
Your Lora model works really great. That’s not something easy to do. Mine sucks 😂 but I will try making one with your tutorial
This is so helpful. Would u mind sharing a bit about Ur computer specs? Crying in macOs rn and thinking about building my own windows-run rig. Ty ❤
NVIDIA RTX 3090. Yeah, the Mac is not the best, but you could run SD on an external GPU, like Colab, RunPod, Think Diffusion, Diffusion Hub
Hi,
I'm pretty new to generative stuff and I have difficulties getting a good framing. Could LoRA be a way to teach SD some filmmaking vocabulary?
If so, how would you set up the training? Many trainings one after another, each with a bunch of pictures of a specific shot, then another specific shot, and so on? Or could good captions be enough for one big training?
Hello Laura,
I followed your excellent presentation step by step, but I cannot obtain models with the "safetensors" extension! In the "Models" folder there is only one .json file...
Do you have a solution to offer me?
Thanking you.
Hi, thanks for your tutorials.
May I know the trick to get the dark mode? Thanks 🙂
Excellent tutorial for someone who is an AI enthusiast and is just starting out. Do you intend to make videos about ComfyUI?
Thank you very much.
I'm trying to train without a human face or body, so when I get to the BLIP captioning step it doesn't work normally. How should I do it? (I'm trying to make an instrument model)
Hello Laura, can you make a video for the SDXL model? We love you, you are my dream teacher
Hi, thanks for the content. Are you Italian? A question: if I use 1024x512, why does diffusion give me two people side by side and not one? If I want a landscape image, why two people?
What is the function of regularization images, and what kind of regularization images should be chosen to create a facial LoRA? Thank you
Thanks for sharing this useful video. I am curious about your GPU setup (VRAM, number of GPUs, model), since it seems pretty fast; I tried searching your older videos but could not find it mentioned. My setup with two Tesla T4s (16 GB each) is much slower than yours, and I want to understand if that is a configuration problem on my side. Thanks in advance.
Hey Laura, thanks for the helpful tips in the video! Do you think it makes sense to train a model for realistic people with transparent backgrounds?
I think the result might actually be worse. It would be better to use different backgrounds, with the subject/object shown from different perspectives in different places; the key is to describe the surroundings. You could give it a go and let us know ;)
@@LaCarnevali I asked ChatGPT and it gave me the answer that about 70% should have different backgrounds and about 30% should be transparent. That 100% is really 80%-90%, because you will need 10%-20% control images (wrong images). ... I haven't tried yet, but I will tell you about the result.
10:08 Stable Diffusion? Does it run on the web, or is it installed first?
Can do both
@@LaCarnevali is there a video that explains how to install SD?
I’m getting a crush on you. Lol! Sherrrr.
Thanks for always posting great content to learn from!
Hey Laura, great channel. If you put chapters on longer videos like this one, it will be easier to follow.
Hi Fed, noted, will update :)
Great video! I have a question: if I wanted the LoRA to be of a person in different poses (headshot, standing up with full body shown, side view seated, etc., basically any position), how would I accomplish this? I want my model to have the same proportions across different generations.
You need to train the model using pictures of that person in different positions
Can this be used with Flux dev? Is a 3060 with 12 GB VRAM enough? What would the configuration be in that case?
Let's go with WarpFusion
I've been struggling to get good results with DAdaptation and the more recent Prodigy (very hard to find info on the latter). Could you please someday have a look at those training optimizers that are supposed to help big time with the calibration of learning rates? They're hard to configure (at least for me so far), and the training time compared to a basic LoRA seems insane (but I'm sure I got a few things wrong, thus my asking).
Thanks for your lovely tutorials. You really explain all that nicely.
Cheers.
Hello, a big thank you for your great video. The installation went well. However, how can I change the model? I only have a dropdown menu with several choices; there's no option to reference another model. Can you help me?
Hi, you need to add your new model to the Stable Diffusion models folder (for the A1111 web UI, typically stable-diffusion-webui/models/Stable-diffusion/)