ULTIMATE FREE LORA Training In Stable Diffusion! Less Than 7GB VRAM!
- Published: 25 Nov 2024
- LoRA is a fantastic and fairly recent way of training a subject on your own images for Stable Diffusion. Say goodbye to expensive VRAM requirements and hello to this innovative new way of fine-tuning! In this video I will show you how to train a LoRA weight using the Kohya SS GUI with less than 7GB of VRAM, and how you can then use those LoRA weights inside Stable Diffusion!
Did you manage to train a LORA weight? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
✨ Support my work on Patreon: / aitrepreneur
⚔️ Join the Discord server: bit.ly/aitdiscord
🧠 My Second Channel THE MAKER LAIR: bit.ly/themake...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Runpod: bit.ly/runpodAi
Kohya GUI: github.com/bma...
CUDNN 8.6 mega link: mega.nz/file/b...
Basic settings .JSON file: mega.nz/file/n...
Low Vram settings .JSON file: mega.nz/file/z...
Big thank you to SPYBG for his cool tricks:
/ @spybgtoolkit
Big thanks to Bernard Maltais for his LORA videos and GUI:
/ @bernardmaltais
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
►► My PC & Favorite Gear:
i9-12900K: amzn.to/3L03tLG
RTX 3090 Gigabyte Vision OC: amzn.to/40ANaue
SAMSUNG 980 PRO SSD 2TB PCIe NVMe: amzn.to/3oBR0WO
Kingston FURY Beast 64GB 3200MHz DDR4: amzn.to/3osdZ6z
iCUE 4000X - White: amzn.to/40y9BAk
ASRock Z690 DDR4: amzn.to/3Amcxph
Corsair RM850 - White: amzn.to/3NbXlm2
Corsair iCUE SP120: amzn.to/43WR9nW
Noctua NH-D15 chromax.Black: amzn.to/3H7qQSa
EDUP PCIe WiFi 6E Card Bluetooth: amzn.to/40t5Lsk
Recording Gear:
Rode PodMic: amzn.to/43ZvYlm
Rode AI-1 USB Audio Interface: amzn.to/3N6ybFk
Rode WS2 Microphone Pop Filter: amzn.to/3oIo9Qw
Elgato Wave Mic Arm: amzn.to/3LosH7D
Stagg XLR Cable - Black - 6M: amzn.to/3L5Fuue
FetHead Microphone Preamp: amzn.to/41TWQ4o
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Special thanks to Royal Emperor:
BSM
Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
#stablediffusion #lora #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
►► bit.ly/stabled...
RECOMMENDED WATCHING - My "Tutorial" Playlist:
►► bit.ly/TuTPlay...
Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.
could you make a tutorial on how to train a style?
I generated a LoRA model in 12 minutes with my 3060Ti (8GB VRAM) using 30 images and the Basic Settings config file. Just had to adjust "Mixed Precision" and "Save Precision" to "FP16" instead of "BF16" and select "AdamW" in the "Optimizer" dropdown instead of "AdamW8bit".
Thanks Aitrepreneur for this tutorial and all the other tutorials you have made. They were very helpful to me. Much appreciated!
I had the error "returned non-zero exit status 1". Your changes solved the problem ;)
this worked for me as well - thanks!
Thanks. I was wondering if my Rtx 3060ti could handle it. Now I will try it too.
thx, but on my PC it gives me AdamW errors, so I disabled it and now it works.
I'm using a 2060 Super
this worked for me too! Thank you! But I have a question: what's the difference? Better training results? Is it faster? Because I still don't notice a difference.
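A minimal PowerShell sketch of the same precision fix, assuming the downloaded settings file is named basic-settings.json and sits in the current folder (adjust the name to whatever you saved it as):

(Get-Content .\basic-settings.json -Raw) -replace '"bf16"', '"fp16"' | Set-Content .\basic-settings.json

Then reload the config in the GUI and pick "AdamW" in the Optimizer dropdown by hand.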
I literally JUST asked some of the guys in the Stable Diffusion Reddit how to train a Lora and figured I'd check here to see if there was a video, and as usual you do not disappoint. My hats off to you, good sir! Thanks again for all your hard work!!!!
I don't know what it is, but something about the way you explain things quickly but with such great detail is exactly perfect. I'm a little older than the rest of you guys, but I am able to (pretty much) keep up with the new methods and tools for SD thanks primarily to THIS channel. My 16 y.o. grandson now calls ME with SD questions. Thank you, Aitrepreneur. You are very much appreciated!
same same. He explains just so well and doesn't leave out steps that other people assume you "just know".
I wish my grandson was old enough to come to me with questions about SD!! I can't wait until he's old enough to understand art, I plan on teaching him everything I know. He'll be 5 this year.
ikr it's way better than reading tbh
@@aaronhhill I mean there is a lot of "for now" knowledge; the field is moving so fast. In 1 year most of the "how you do this" stuff will be way different and easier.
@@Utoko This is very true. I mean, most of the core principles may remain, and obviously the core principles of art will stay the same: perspective, composition, style, but yes the way things are done now will be very different in a year and far easier. My hope is that diffusion will change or become something radically different in the same way that GAN and Diffusion are different.
I can't believe how easy it is to make your own lora. I've wanted to generate images of lesser known characters for months now but no loras existed because they aren't popular enough, thank you for this.
Just a note: the provided config file appears to set "mixed precision" and "saved precision" to bf16, which leads to an error; they need to be changed to fp16 manually.
I was wondering why it didn't work for me, and that was it! Thanks!!! Also, by default it names the model "adams".
@@maxnami yeah, and the default paths have K's settings, but otherwise it all works great!
thank you! i was wondering what i did wrong..... 😄
I read somewhere bf16 is not supported for some hardware. Mine is working
I was staring at my screen wondering what the hell was wrong, started looking at the comments and saw this, you just saved me hours of wondering what I was doing wrong.
IMPORTANT NOTE: If you don't specifically choose a custom model, this process downloads a LOT of data, which isn't explained in this video. I clicked on "Train model", and it started downloading multiple files that are more than 3GB each. There's more than 15GB to download before the training works.
Thank you so much for this, was extremely annoying trying to do this with my bandwidth and I already had SD 1.5 locally as well.
Hi! Where do I need to go to do this? How/where do I choose that custom model?
this
Also note that if you want to launch cmd or PowerShell from inside a specific folder without having to copy the directory and use "cd" or "cd /d", you can click on the file path when you have that folder open and just type either "cmd" or "powershell" into the address bar, then press enter. That will open cmd or PowerShell with that folder already set as its working directory for executing your commands.
It's an easier way to get things done.
I had to do this because for some reason it wouldn't allow me to do so inside of PowerShell, but if I ran PowerShell from the directory itself, it worked just fine.
Or shift + right-click and select "open powershell window here".
THANKSSSS
how do you do that but with the admin cmd
i'm a simple man, i'm admin, i click
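For the admin question above: the address-bar trick can't elevate directly, but here is a hedged sketch you can run from a normal PowerShell already opened in the folder (assumes the path contains no single quotes):

# Launch an elevated PowerShell that starts in the current folder
Start-Process powershell -Verb RunAs -ArgumentList "-NoExit","-Command","cd '$PWD'"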
small note: make sure you have 14GB+ free on the C drive, because when you run it for the first time it will download many files (it downloads them into a folder inside your user folder called ".cache", without quotes).
also, if you face the error "Error no kernel image is available for execution on the device", just disable the 8bit adam option.
finally, don't forget to switch bf16 to fp16 in the precision options.
Thank you so much it wouldn't work for me until I disabled 8bit adam although I don't recall getting that exact error.
Yup, 8 bit adam was a problem for me too.
@@TheWizardBattle Glad to help. I did run into that error too and spent a lot of time looking for a solution, so I shared this to help others save time.
thanks, I had the same issues with the drive space and then the 8bit error haha. Also, if the LoRA is not quite there yet, how can I retrain the same LoRA? Do I have to get different images and retrain again, or should I mess with the learning rates?
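Two hedged PowerShell one-liners for checking the points in the note above; the .cache path is the download location it mentions:

# Free space on C: (values are in bytes)
Get-PSDrive C | Select-Object Used, Free
# Size of the download cache in GB
"{0:N1} GB" -f ((Get-ChildItem "$env:USERPROFILE\.cache" -File -Recurse -ErrorAction SilentlyContinue | Measure-Object Length -Sum).Sum / 1GB)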
Thanks!
Excellent video, thank you! One slight snag though: your JSON config file also changes all the directory fields to your personal file paths, so people may get an error unless they reset the paths in the Folders tab to their own file paths.
Gotta say I love seeing the community come together to create content like this. Thanks 🙏
Thanks, managed to train my LoRA in 7 minutes with your config, 150 steps - 14 images
Your videos are really clear for instructions. Love it. Unfortunately, this software has evolved a lot and lots of this does not apply. It would be great if you could do an updated version. OMW to check out your other pages to see if there is something posted on those.
Omg I went from 400 steps taking 3 hours to 6 minutes with these settings compared to one of your other tutorials. You're an absolute legend.
3:23: Instead just write "Powershell" or "cmd" in the URL-bar of the folder you're in
I think I must have a different version of Powershell, because the commands didn't work there. Had to use "cmd" instead.
@@Anonymous-gu2pk 🙏🙏
or you could Shift + Right Click "Open Powershell Window Here".
Thaaaaank you!!! I went through so many videos and text guides and nothing worked because there was some obscure error just sabotaging everything. Now I'm up and running with a functional GUI thanks to you. Again, thank you! Great guide!
If you use the red button under Generate to show the LoRA files, like I do, you don't need to install the Additional Networks extension; it uses a completely different folder for LoRA files (it looks for them in the extension folder, not in the model folder). I train my LoRA files with the Kohya GUI and don't use his extension, and the files work anyway.
Honestly, you need to talk about the Kohya GUI in general. A local DreamBooth installation with a visual UI is exactly what I've been looking for.
Just fixed my problem: had to switch bf16 to fp16 mixed precision. Thank you for that tutorial!
What card?
@@Y0y0Jester rtx 2070 8G
@@lurker668 nice, exactly the same here.
Awesome video! Thank you for making this. This should help bring the power of LoRA to many more folks out there!
i saw your videos before but couldn't make it run. Now it freaking works, thx to you and Kei.
awesome work !
Thank you Bernard for making the hard work for everyone ;)
lora and hypernet are really good for changing the style of a model.
Thank you also! So awesome for you to even maintain a github repo.
Information in this tutorial no longer corresponds to the links provided.
About 9 min with an old RTX 2080 8GB and 19 images. Great Tutorial, thanks :)
Any chance you can make an updated tutorial. The GitHub is completely different now and following the new instructions I can’t get it to work at all.
In case anyone is following along, you no longer need to Set-ExecutionPolicy unrestricted for the newer versions.
at 3:00 mark
3:23 [WIN10] to start the PowerShell in the open folder, just hold SHIFT and then right click on it, and choose "Open PowerShell in this folder" from the popup menu.
This is great! Thanks for keeping us up to date on these things. I honestly prefer having the training separate from my Stable Diffusion installation. Very easy-to-follow tutorial. I know the work it takes to make these videos; much appreciated.
Can you please make an up-to-date version of this video. The setup process is vastly different.
It's pretty similar, the biggest difference is the Kohya setup, and if you just read the messages in powershell, you should get it down easily
This dude does such a great job walking through the steps and explaining everything.
Thank you for all the info. So far I'll say the "Save training state" option is also pretty useful: if you get something workable but it needs a little more training, just give it another round using the last state saved in the model folder.
As a heads up though, the next training run needs either the same number of images or the same images as the first, or it won't start. I figured that out the hard way when I tried lowering the number of images for one pass.
I followed these steps, used the config file, and had much more success than trying to figure out settings on my own. Thank you. The trick for me was limiting the number of photos to 20 and using more repeats (100) like you suggested to get close to the 1500 step count with only 1 epoch. I also used only 512x512 images for both img and reg. This creates a usable LORA in about 20 minutes on a 3060 card, at about 1.8 it/sec.
Are you using the 12GB 3060?
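The repeats arithmetic above follows the usual kohya folder convention, where a dataset folder named 100_subject means 100 repeats per image. A minimal sketch of the step math, with illustrative numbers rather than the commenter's exact setup:

# total steps = images x repeats x epochs / batch size
$images = 15; $repeats = 100; $epochs = 1; $batch = 1
$images * $repeats * $epochs / $batch   # 1500 steps with these numbers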
My basic system security alarms go off like crazy watching most of these videos XD
Again... I have followed many tutorials online, and yours is the only one that worked. Great job on explaining things, the others always skipped explanations.
One thing: on Windows 11 (idk about Windows 10) you can just type 'cmd' or 'powershell' (without the quotes) in the folder address bar and press enter, and it will open in that folder, so there's no need to navigate there via cd and pasting the address.
Works also in Win10
I knew for cmd but not for powershell, thanks!
You can just shift click inside the folder and click launch powershell from this location. Just a quick tip for the future
From those of us with meager cards, thank you for including the minimum VRAM in the title!!!
I hate getting all excited about new AI only to find out after hours of trying that it's not possible on my card. (yet)
Did it work for you? I don't want to make another installation attempt just for it not to work in the end XD. I only have 4GB of VRAM
@@alejin9646 Pretty sure 4GB will not be enough. I have 8GB and it sometimes crashes due to low memory.
This is BY FAR not only the best tutorial but also the best (and also easiest, for some reason???) method to train a face.
I did it with my wife and I CANNOT distinguish between the real one and the AI one, craaaaazy!
Heads up, for those of you with as short an attention span as I have:
Loading the config file will overwrite the selected folders.
Load your config FIRST, then set your folders.
Your tutorials are very good. I am brand new to all of this and have been able to easily follow along. You move and speak quite fast, but that's ok; this is YouTube and I can keep rewinding. Your style is much preferred to some who talk too slowly and take 25 minutes to get to the point, or others who show just results without explaining details. You do both very well. I can rewind, pause, take notes, and follow along. Thank you very much for this! Next I want to find out if I can further train a LoRA, like we did with SD model training in your other videos. What I mean is: can you add more training to an existing LoRA? Can I start a basic one and then train it further?
I love the way you say GUI
Please K, do another tutorial for training local LoRAs :) we will appreciate that :p
The pre-made JSON files helped a bunch, because I was seeing different results even though I followed all the settings. Before using the config file the loss rate was jumping all over the place, and after using your config file the loss rate stabilized. I think the erratic loss rate was why my LoRAs weren't working out. I dunno which settings were different from mine, but I use your config file as a base.
For anyone with the "Error no kernel image is available for execution on the device" error: the "8bit adam" checkbox was moved to the "Optimizer" section as a dropdown as of 2023/03/05 (v21.1.4). I had no idea which one to select, so I tried until one worked.
I changed to the first one, "AdamW".
TY VERY MUCH
Absolutely amazing results!! Much better than training embeddings, the previously explained method!! Thanks a lot!!
embeddings are so unpredictable sometimes... this is so on point it's impressive.
The git files have changed; I think the tutorial is outdated by now. We need a new one.
Thanks a lot! I was able to create a Lora for a character that sadly has too few fanarts. The result was mind blowing!
hey how long did it take you if you don't mind me asking
@@buketerbil4778 About 10-15 minutes.
I used only 10 images. Ryzen 7 5800X3D, RTX 4070 Ti
They simplified installation a lot since you posted this
Should I follow the github instructions instead?
Thanks for this video. I've finally trained my first LORA. Other guides were not working on my side.
Unfortunately, the installation has changed completely in the meantime. Could you please make an update video? Although the installation instructions are actually clear, I can't get it to work because several errors occur. Thanks for your videos
Did you make it work?
im new to this and try to learn everything important. your channel is such a BIG help! THANK YOU
This is pretty cool, but I would like to mention that setting execution policies to not require signing is almost asking for trouble. Unless you are a power user, I would NOT recommend doing this. It makes it way too easy to run malicious scripts/code.
So what do we need to do in the process, plz?
I used this method with impressive results for people (me and my wife) and my two cats (15 images each).
Is there any difference to creating a concept instead of a character? Can you do a tutorial on making clothes/poses/styles as well.
The only tutorial that works in 2024. With 24 GB of RAM and a 4060 8GB it takes 16 hours, but the result is amazing, thanks!
Great video! Quick Q, how would the training method change for:
a) A style
b) a specific object in a picture
The methods for person/face/object/style/concept/etc are pretty similar. The main change I notice is how you prompt to change focus. In the txt file you want to describe everything that appears in the image but that you don't want trained as part of the concept. It's hard to properly explain without examples; @kasukanra does a great job of describing this in their video "kasucast #13".
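A small illustrative example of that idea, with hypothetical file names and trigger word: if image001.png shows your subject in a white shirt outdoors, the matching image001.txt caption might read "mysubject, white shirt, outdoors, smiling". Everything listed besides the trigger word "mysubject" gets absorbed by its own tag, so the shirt and the setting are less likely to be baked into the LoRA itself.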
Bro, I love your explanation... You are the best, because you explain things in proper detail,
whereas in the other LoRA training videos I watch, they start anywhere and treat their tutorial like they're doing us a favor.
They don't even mention why those folders exist, so as a viewer you think there's already something inside them. But you're the guy who explains that you created them, which is a big help for the viewer's understanding.
Can't get this working; they changed the way you install it. Would love a follow-up video for LoRAs, cheers!
Extremely useful and very well documented tutorial. Thank you! 👍
Ran this initially and managed to train a model. However, every time after that, even with the low VRAM settings, I get a CUDA out of memory error no matter what settings I use, even if I drop the image size. I feel like this is a bug somewhere.
bro I have the same problem. Idk what to do.
You can also open windows powershell to the folder you are in by Shift+Right Clicking anywhere inside the folder window and selecting "Open Windows Powershell Here."
3:48 In PowerShell, cd doesn't need /d to change drive. PS doesn't even use slashes for parameters; it uses dashes, like bash.
Thank you, it wasn't working with the /d and then I tried with just "cd E:\Kohya" and it worked
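A side-by-side sketch of the difference:

# PowerShell: cd (an alias for Set-Location) changes drive and folder in one go
cd E:\kohya_ss
# Classic cmd.exe needs the /d switch to change drives:
# cd /d E:\kohya_ss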
Would you kindly make an update for this topic? In the past I've followed this video and it worked but apparently the newest updates to Kohya SS broke something. Thanks for the great content!
I hope they give us a Linux version soon. Most people work on Windows, but Linux does a better job, and we can't use this on Colab or Gradient. I tried using this on RunPod with Docker but no luck yet! 😓
me too :(
I love that you say GUII like you'd say it's ya boiii
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
Awesome!
A few tips. No need to install the Additional Networks extension if you use the built-in Auto1111 LoRA support... but the extension does bring a nice set of tools you can use to augment the experience. Another one: I suggest leaving the bucketing option always turned on. If you use pre-cropped images it will not touch them... but if you inadvertently have one sized, say, 512x513, instead of failing to run it will just resize it to 512x512 and keep going. Don't ask me why I know ;-)
No one is doing style training tutorials, or tutorials focused on how to do annotations of images for different types of image content. Please do one, if you actually know how to do it properly. Ty.
@@BernardMaltais Why you know?
@@sv_gravity I have a general isea for the annotations/captioning. Essentially you want to describe everything you don't want to always associate to the person or style you are training. For style it is a bit more difficult and depend on what it is. I have nor done lora styles yet but I have an interesting dataset I could try... might learn a thing or two while trying.
In the Explorer address bar, highlight it and type pwsh and hit enter, and that will open PowerShell right there in that folder for you. Similarly, you can do the same thing for the command prompt by typing cmd and hitting enter.
The Kohya tutorial section is completely incorrect now. Does anyone know how to use it properly? Do I install it to my Stable Diffusion folder? And if so, where? Does it matter?
Got my first ever lora model made! Thanks a lot supreme Ai Overlord!
No matter what I do and how low I set the settings (even training at 256x256), I keep getting the RuntimeError "CUDA out of memory". And it worked the first time I followed this tutorial; I don't get it... (it says that most of my 8.00GiB total capacity is already allocated).
Same here, seems this tutorial is a one-time-only one!
My god, I thought I was the only one! The first time it worked like a charm; now I can't do anything with it!
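If you hit this, it is worth checking what is already holding VRAM before pressing "Train model"; a still-open Stable Diffusion web UI is a common culprit. A hedged check (nvidia-smi ships with the NVIDIA driver):

nvidia-smi --query-gpu=memory.used,memory.total --format=csv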
Legend! Learnt so much from your videos my guy. Keep up gods work. 🙌
great tutorial, thank you! Anything you recommend for training styles using this method?
Hopefully he makes a vid on that as well. Definitely interested in styles for LoRA
As a first-time user I would not suggest following these steps, as you would most likely run into issues. Rather, try to watch as many vids as possible to familiarize yourself with what is actually happening and the process of training, and play around with the software.
What's the security risk of setting the execution policy to unrestricted (answering "A") when using PowerShell? Shouldn't that be reverted back after the GUI is installed?
Wondering the same, seems insecure
when you are done using it, you can use "Set-ExecutionPolicy Restricted" to go back to default setting
@Aitrepreneur can you please pin this comment thread? It's not safe to leave your machine settings in a state where any PowerShell script can be run with admin policies unrestricted. Temporary is fine if you trust the script, but it should be changed back once it's complete.
I think they conveniently don't tell you that so they can get you to download malicious files in the future. This is a very clear and present security risk, and he hosts files on Mega; he probably hopes people will not change it back so he can trick them into downloading the Mega files in the future for exploiting.
Probably better to just run without admin privileges and do: "Set-ExecutionPolicy Unrestricted -Scope Process".
This way you're not giving full admin access for the session, and you're only allowing scripts until you close PowerShell.
Thank you for the information.
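A minimal sketch of the safer pattern described above. Restricted is the Windows client default, and the LocalMachine line needs an elevated PowerShell:

# Allow scripts only for this session; reverts automatically when PowerShell closes
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted
# If you already changed the policy globally, put it back afterwards:
Set-ExecutionPolicy -Scope LocalMachine -ExecutionPolicy Restricted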
Nice implementation! I get this error:
raise ValueError(err.format(mode="bf16", requirement="PyTorch >= 1.10 and a supported device."))
ValueError: bf16 mixed precision requires PyTorch >= 1.10 and a supported device.
Any idea?
same :/
@@hfoxhaxfox1841 manually change "mixed precision" and "saved precision" to "fp16"; the config file messed that up
@@sazarod Thanks fixed :D
@@sazarod THIS
Been waiting on this video!
at 3:02 we turn off some script security thing, should we turn that back on after we installed everything?
If you're having issues, do each command line by line in PowerShell; that is what I had to do and it worked.
Can you update the tutorial? Thank you 🥰
Love you videos my friend. I've learned a lot!
Can you make another tutorial for the Dreambooth section of the Kohya GUI please? Thanks in Advance
I plan to yes :)
@@Aitrepreneur Is your basic settings file not set up for using a GPU? When I followed the steps in this video, it trained on my CPU and not my GPU. Is it the DreamBooth LoRA that defaults to CPU? I noticed the setting for CPU cores but nothing for GPU.
Good results and fairly quick on an 8GB 3060 using your settings JSON with 1 batch, thanks
If you have a 1660 Ti you need to select "no" for mixed precision and find an older version, otherwise the loss = nan.
thanks! you saved me
thank you brother
Wait, is the mixed precision important?
@@sollekram I have no idea, but I cannot get it to work without unselecting it. So a bad result is better than nothing 😅
@@suebphatt you have bad results ?
Damn, Dude! THAT was painful. (Not your fault) Your low vram got me one step further and I "thought" it was going to work ... then it crashed and burned. *heartbroken* Thanks for the attempt.
This video is very outdated! Why is the LoRA not saving as a .safetensors file? Instead it makes a .json in the model folder. Why? I've been at this all day, sorry.
Is this still the best method for today?
Did SD XL change anything?
Hi, I keep getting "RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.00 GiB total capacity; 9.13 GiB already allocated; 0 bytes free; 9.26 GiB reserved in total by PyTorch)". I have NVIDIA GeForce RTX 3080 with 10G memory. I also used the low vram config. What should I do? Thank you!
Hello. Could you make an updated video on how to use Stable Diffusion to train models with low VRAM? Updates are coming out really fast and I haven't been able to replicate your tutorial. I have an RTX 3060 and I have difficulty creating good models; "I still have problems due to lack of memory". Your videos have helped me a lot. Thanks for your attention.
6GB will always be OOM from the looks of it. So time to git rich and get a 3070, lol.
@@rh906 3060 is 12 GB
I have an Asus RTX 3060 12 GB VRAM. I use 512x512 images (15-30) and I can make a LoRA file in 30 minutes.
I made a mistake pressing the keyboard arrows while choosing mixed precision. As a non-coder, lol, it took some time to understand that I just need to use numbers instead of arrows. Maybe it'll help someone with the same problem.
Just 30 minutes after you posted, you helped me, thanks man!
Best tutorial for beginners. ❤
How many input photos is best? Is more better, or is there a limit where results can degrade? Edit: so I'm experimenting with the amount of training photos (100 at the moment). I get the best results when I use a lower number than 100 in the 100_subject folder name. I'm using 30 at the moment and get the best results for mixing the face into the model. It could be that I overtrain at some point, but there is no way to preview the training live. Or there is, but I can't find the folder with the generations.
Hangs on "Caching Latents."
Tried every single possible combination of settings, tried making all my images the same size, and down to 512, tried a second time with another clean install without the CUDNN addon.
Literally no go. RTX 4070 Ti and Windows 11.
Hey! Instead of training a character, can we train a style, to create our own style? It's really important for me 🙏
Ditto to other recent comments: your tutorial seems very thorough, but the installation process is very different now.
It appears the instructions on the GitHub page and the way Kohya SS is set up are completely different as of today vs. what you showed in your video. I watched your video through before going to the GitHub page, and now I'm sad because your instructions don't line up with what's on the page; I'm confused and not sure I can get this installed correctly. I guess the setup portion of your video is completely outdated now. :(
thank you very much!!!! it turned out really great!
Great tutorial! Can you please share with us how to keep training our first model to get more similar results? Or if you already explained it in another video, point us to the correct path! Thank you again!
worked!! subscribed!
Great video. What would be different if I wanted to train a Style instead of a face?
How does LORA training compare against Dreambooth? Are the results on par or is DB still better?
I also see a lot of LoRAs extracted from checkpoints; I never found out how to do that.
You should lower the repeats to around 2 to 10 depending on the dataset, raise the number of epochs, and lower the U-Net learning rate and text encoder LR when training a style.
@@taihuynhuc3135 this is useful, thanks man
My G, perfectly explained and easy to follow. solid like and subscribe
What advice would you have if you need to create LoRAs for different characters in the same style?
For example, the basic Stable Diffusion models don't know how to generate scenes from Futurama. How can I train one LoRA model to be able to get Futurama characters by including their names in the prompt?
In Tagger I can make a text description for each source picture; the style 'futurama' and the names of the characters can be entered there, but how do I use the descriptions of the pictures in the Kohya GUI?