😕LoRA vs Dreambooth vs Textual Inversion vs Hypernetworks
- Published: 14 Jan 2023
- There are 5 methods for teaching specific concepts, objects, or styles to your Stable Diffusion model: Textual Inversion, Dreambooth, Hypernetworks, LoRA and Aesthetic Gradients. The question is: which one should you use?
In this video we review 3 key research papers, look at the underlying mathematical mechanics behind each method, and analyze data from civitai to arrive at an informed and final conclusion.
Discord: / discord
Live Stream in 8 hours: • 😕LoRA vs Dreambooth vs...
======= Links =======
Spreadsheet: docs.google.com/spreadsheets/...
LoRA paper: arxiv.org/abs/2106.09685
Dreambooth Paper: arxiv.org/abs/2208.12242
Textual Inversion Paper: arxiv.org/abs/2208.01618
Dreaming Tulpa: / dreamingtulpa
Driving a machine insane with Dreambooth: • I drove a Machine Insane
Good Tutorials:
Dreambooth tutorial by OlivioSarikas: • DreamBooth for Automat...
Hypernetworks tutorial by Aitrepreneur: • HYPERNETWORK: Train St...
Textual Inversion tutorial by Aitrepreneur: • ULTIMATE FREE TEXTUAL ...
Textual Inversion Paper Walkthrough by me: • Textual Inversion with...
LoRA tutorial by me: • 7GB RAM Dreambooth wit...
LoRA tutorial by Nerdy Rodent: • LORA for Stable Diffus...
Aesthetic Embeddings tutorial: • How to use Aesthetic G...
======= Music =======
From RUclips Audio Library:
Escapism Yung Logos
Music from freetousemusic.com
‘Late Morning’ by ‘LuKremBo’: • (no copyright music) c...
‘Marshmallow’ by ‘LuKremBo’: • lukrembo - marshmallow...
‘Rose’ by ‘LuKremBo’: • lukrembo - rose (royal...
‘Snow’ by LuKremBo: • lukrembo - snow (royal...
‘Sunset’ by ‘LuKremBo’: • (no copyright music) j...
‘Travel’ by ‘LuKremBo’: • lukrembo - travel (roy...
‘Branch’ by ‘LuKremBo’: • (no copyright music) c...
#stablediffusion #aiart #ai #machinelearning #dreambooth #textual-inversion #hypernetworks #lora #aesthetic-gradients #tutorials #research #aesthetic-embeddings - Science
The thing about textual inversions is that they create embeddings that are cross-compatible with the base models. A textual inversion trained with SD 1.5 will work with all 1.5-based models, and here is the kicker: you can combine them without having to do any model merging. That is HUGE.
yeah, the flexibility of textual inversion is a big factor, also it's really cool conceptually!!
The video really should have mentioned this, it's an incredible advantage for embeddings that was just left out.
Yes, combining two, three or more Dreambooth models is possible, but it takes time and generates yet another 2GB+ model that you need to save somewhere.
Textual inversions, meanwhile, can be used flexibly within prompts in any combination, including weighting them and using them as negative prompts, all on the fly with no extra file management.
However, textual inversion cannot learn to output things that the base model is not able to do at all. So depending on the base model, it may not be possible to train a textual inversion for a specific concept.
@@expodemita I do not think they are compatible between 1.4/1.5 and 2.0/2.1, but 2.0 and 2.1 should be compatible with each other.
@@infocyde2024 2.0 and 2.1 are for sure
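A rough numpy sketch of why combining embeddings needs no model merging (all shapes, indices and values below are illustrative assumptions, not SD's actual pipeline):

```python
import numpy as np

# Illustrative sketch: a textual-inversion embedding is just a few learned
# vectors in the text encoder's token-embedding space, so "combining" two
# embeddings means dropping both into one prompt -- no model merging.
embed_dim = 768   # token-embedding size used by SD 1.x's text encoder
n_vectors = 2     # embeddings are commonly trained with a handful of vectors
rng = np.random.default_rng(0)

# Two independently trained embeddings (hypothetical stand-ins):
style_emb = rng.standard_normal((n_vectors, embed_dim))
subject_emb = rng.standard_normal((1, embed_dim))

# Stand-in for a tokenised 77-token prompt; each embedding simply
# replaces the rows belonging to its placeholder token(s):
prompt_tokens = rng.standard_normal((77, embed_dim))
prompt_tokens[5:5 + n_vectors] = style_emb
prompt_tokens[10:11] = subject_emb

# The embedding file is tiny compared to a 2GB+ checkpoint:
print(style_emb.astype(np.float32).nbytes)  # 6144 bytes, i.e. ~6 KB
```

This is also why a 1.5-trained embedding works across 1.5-based models: they all share the same text-encoder embedding space.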
Much appreciated, having someone clever distil all of this dense information down and explain it succinctly and with so much enthusiasm is so refreshing!
Thanks for explaining these so well, your visual diagrams are great!
Thanks for your hard work putting this together, very helpful to evolve my understanding of the different approaches. Much appreciated!
Thanks so much for this. Very underrated channel, literally was thinking something like this would be really helpful.
Awesome comparison mate, great addition with the statistics, thanks a lot
I love it, love your promises on what we are going to get from your video at the very starting few seconds of the video itself, keep it going man, love your channel and your energy.
you deserve more subscribers, only channel I found that actually delivers what you need to know
Such an EXCELLENT video. Very very well researched and perfectly presented. Thanks for sharing all your findings and appreciate the time it took.
This video is exactly what I needed, and you went about it in the best way possible. Thanks for this
I love your videos, always on point !
Liked and Subscribed, Thank you for all the hard work!
Superb video. I don't think I've ever seen a tutorial/explanation for anything that is this good.
This was a very informative video in fact, thank you! And I like your very dramatic delivery of the content! :)
Outstanding job explaining these concepts! Well done!
You explained that amazingly, very easy to understand - also things move fast because it seems like LoRA is now the most popular.
I greatly appreciate this video sir! It is really helpful for me to have context of how things actually work behind the scenes to make mental connections and improve how I interact with the external program.
Thanks for the input, good research!!
i'm a total beginner to AI, and i suck at math, but you somehow managed to clear a shit ton of confusion. I was hooked on Dreambooth tutorials and trust me, you don't want that. I literally thought i was not going to be able to get started simply because of the massive resources it required.
Trust me, you are really good at explaining things :)
Really appreciate the help
I’ve been trying to install dream booth for 3 days now. No success. Ready to walk in front of a bus
Thanks a ton for this breakdown, I've been struggling with this same question for a few weeks now. I had already come to a similar conclusion myself, but this was very validating.
Dreambooth is preferred, but the models sizes make it so cumbersome and challenging to test different versions. With textual inversion, the file sizes are insignificant, and you can stack them on top of each other, making them very flexible.
I haven't actually evaluated embeddings (textual inversion) yet for quality, because the animation notebook I use doesn't support them, but the developer just made it compatible, so I'm looking forward to testing it out more.
What a great video! Thanks for your academic sharing and empirical results❤
Textual inversion is far better on 2.1 than 1.5, and I think that's why they don't get the same love dreambooth receives. You can also speed up textual inversion training if you spend a few minutes getting the initializing text right, so the vectors start in relatively close proximity to their final resting place. The best part, imo, is you can combine many embeddings together, something which dreambooth doesn't really allow.
How can you get the initializing text right before the training?
@@leonardom862 By running image to text I suppose ?
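The initializer trick mentioned above can be sketched like this (the vocabulary, table and dimensions are made-up stand-ins, not the real tokenizer):

```python
import numpy as np

# Illustrative sketch: instead of starting the new placeholder token's
# vector from random noise, copy the embedding of a semantically close
# existing word, so training begins near its "final resting place".
vocab = {"painting": 0, "sketch": 1, "photo": 2}  # toy vocabulary
embedding_table = np.random.default_rng(1).standard_normal((len(vocab), 768))

# The new concept is a painting style, so initialise from "painting":
new_vector = embedding_table[vocab["painting"]].copy()

# During textual-inversion training, only new_vector is optimised;
# the rest of the model stays frozen.
```

Picking a good initializer word is exactly the "getting the initializing text right" step: the closer the starting vector, the less distance the optimiser has to cover.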
Ironically, I haven't been able to run dreambooth yet; I switched to linux for AI... something is broken with PyTorch 2.0 and CUDA 11.7, and the only thing affected is dreambooth training. Turn on gradient checkpointing and it can't train; turn it off and I can't make it to the first epoch without running out of 24GB of VRAM? I hope this gets fixed soon.
Totally agree. Suuper crucial to be able to call multiple embeddings in a prompt!
Best video I've ever seen.
Best vibes!
Thanks so much
Wow. Thanks for this video, esp. the first part which gave just enough detail to understand the trade-offs and underlying approaches.
You are really good at explaining this stuff. Thanks!
Thank You a lot. This has been a really good explanation that I felt missing.
Great explanation! I've learned more about how AI art works from this video alone than all my previous watched videos combined. Everyone tends to say how to configure things without explaining how it works.
This is great. Thank you for sharing your knowledge, and about Excalidraw.
Awesome explanation, thank you!
You're an absolute legend. Great video
Thank you so much for this video! 🙏
Absolutely incredible video, thank you!
Loved this breakdown. You need more followers!
Incredible explanation! Thanks a lot.
this is the best video about the topic ive ever seen, thanks so much
thanks for the short explanation. Loved it!
That was an insanely good explanation. Thank you!
One interesting piece of data is Lora has quite a high faves per download rating while only being out for a short period of time
yeah, I saw that too.... good sign!
This video is AMAZING! Thank you SO MUCH.
Very nice introduction! Thank you!
Thank you so much for this video! Amazing work
Great tutorial mate, thank you!
Thanks for making such a great and informative video. Keep up the good work
Really great video, thanks for sharing all your research!
glad it helped!
Very nice summary- thank you 🙏
THANK YOU THANK YOU THANK YOU!! Great video. Great insight.
That was a really helpful video that definitely saved me a bunch of time trying to understand these differences by myself :P
saving people time makes me super happy, thanks!
brilliant video, thank you for all the efforts!
thanks for making those complex concepts easy to understand!
Awesome explanation for everyone!
Thank you, sir, for this explanation!
This is so well taught man thank you so much
This video is dope. Super clear and informative. Thank you!!!
Thank you for an informative and engaging video!
Hey Koiboi. Great video. When you made this video, as you said yourself, LoRA was still very new and the stats are probably not accurate. Now that a good amount of time has passed, I would love to watch an updated analysis video on the effectiveness of LoRA compared to Dreambooth and Textual Inversion.
Either way, this is the most informative video I've watched so far comparing these fine-tuning models. Liked and subbed 👍.
That's so much work! Thank you man
clearly explained, much appreciated!
thank you for the great explanation!
Very clear explanation, thank you!
Really appreciate this; so many videos show how to do this stuff, but not how it works, and especially not how it works dumbed down to a level I can understand. Very cool, thank you
LORA works only with an extension, and many people don't know how to use it yet, hence lower ratings. Great video btw! A visual comparison would have been great as well! As far as I can remember, there was one in the LORA blogpost, showing how textual inversion may be less flexible than dreambooth or lora, while the latter two showed comparatively similar results.
Auto added compatibility now! But it was only added recently. (I still use the extension, I find the drop-down much easier to use than how auto implemented it, plus it gives you the ability to tweak the weight of both U-Net and the Text Encoder -- super cool!)
i did everything i should do and i never get lora to run. it was no issue with the textual inversion, tho.
controlnet has almost made lora obsolete for anything other than oddities
This is awesome! thank you!
That is the type of quality content that I'm digging for.
I want to understand Stable Diffusion and everything related. But my attention span/knowledge about programming is not enough that I can just read papers about it. So I need videos, with visuals, and easy explanations. And your video was perfect. Liked + Subscribed :)
Very great job! Thank you!🥰
I've been messing with Loras and they seem to work really well. You can also do a good amount of mix and matching with loras whereas a full model checkpoint only allows you to use that one model at a time. if I had a fruits lora and a vegetables lora, then I could just turn them both on to get fruits and vegies in my random prompt that doesn't ask for fruits or vegies. If I later just want fruit then I could just remove the vegies lora.
I think loras are going to be big going forward, most people just don't know about them yet.
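The mix-and-match idea above can be sketched in numpy (the shapes, seeds and 0.8 strengths are arbitrary assumptions, just to show the mechanism):

```python
import numpy as np

# Illustrative sketch: "turning on" two loras at once just means adding
# both low-rank updates to the same frozen base weight, each with its
# own strength; removing a lora is just dropping its term.
d, r = 64, 4
rng = np.random.default_rng(2)
W = rng.standard_normal((d, d))  # a frozen base-model weight matrix

def make_lora(seed, d=d, r=r):
    # Returns a hypothetical trained (B, A) pair for one lora.
    g = np.random.default_rng(seed)
    return g.standard_normal((d, r)), g.standard_normal((r, d))

B_fruit, A_fruit = make_lora(10)  # stand-in "fruits" lora
B_veg, A_veg = make_lora(11)      # stand-in "vegetables" lora

# Both on, each at strength 0.8:
W_both = W + 0.8 * (B_fruit @ A_fruit) + 0.8 * (B_veg @ A_veg)

# Later, just want fruit? Drop the vegetables term:
W_fruit_only = W + 0.8 * (B_fruit @ A_fruit)
```

A full Dreambooth checkpoint bakes everything into `W` itself, which is why you can only load one at a time without merging.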
I get the feeling that Textual Inversion is the go-to for when you have a new idea you want to teach the model (like a specific character or subject), and Lora is great for when you have a concept you don't want to stop and explain to the model, or may have difficulty doing so. They're very similar things, but not quite the same.
For example, loras are great for mimicking a specific art style, because instead of having to describe "I want a painted animation style like this specific style, but with eyes drawn just so", you can train a lora and then just invoke its tag at the end of your prompt, and since it isn't actually part of the prompt, this clears up tokens for describing the actual thing you want depicted in that style.
This is exactly how I feel about lora. It's disappointing that people don't seem to grasp how beneficial loras can be.
@@treyslider6954 And also, as described in the Automatic1111 docs, Textual Inversion can't teach COMPLETELY new concepts.
The example they gave is that if you trained a model that only knew how to make apples on images of bananas, it wouldn't learn what a banana is, it would just make long yellow apples (in the best-case scenario). Because it's not actually changing model weights, it's better for teaching a style than a new subject, because unless the subject is very similar to something it's seen, it can't learn it.
LoRAs can teach a model something it's never seen before, because they are directly inserting weights into the model, meaning it's actually modifying the model and not the input going into it.
Basically, Textual Inversion for simple styles, LoRA for anything complicated.
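The "directly inserting weights" point corresponds to the low-rank update from the LoRA paper; here is a minimal numpy sketch (the dimensions, scale and strength are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of fine-tuning a full
# d x d weight matrix W, learn a rank-r update B @ A with r << d
# and add it to the frozen W at inference time.
d, r = 768, 4
rng = np.random.default_rng(3)

W = rng.standard_normal((d, d))         # frozen base weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-init

alpha = 1.0                              # merge strength
W_adapted = W + alpha * (B @ A)          # effective weight at inference

# Zero-initialised B makes the adapter a no-op before training,
# and it adds only 2*d*r parameters instead of d*d:
print((A.size + B.size) / W.size)        # ~0.0104, about 1% of W
```

Because the update really lands in the weights, training can move the model toward outputs the frozen base couldn't reach through prompt embeddings alone, which is the contrast with textual inversion drawn above.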
What I was looking for in this video was a comparison of usability in different scenarios: which method is good for faces, which one for style transfer, etc. I'm missing that; other than that, quite a comprehensive comparison. Good job!
Why am I only now seeing this? Great video and thanks for the feature ❤
Thanks koiboi! I absolutely love these in depth videos.
Any plans to give ControlNet this sort of treatment?
Great work!
Amazing work thank you. New sub.
Great Video! Thank you very much!
wow that was great, thank you so much!
really great video! finally understand the differences. just the conclusion is already out of date, since we're moving so incredibly fast. lora is the most popular format on civitai now. understandable, since training is the quickest, even though ti's end result is much smaller.
Great info! And coincides with what I learned on Computerphile's channel. Slowly but surely my mind is able to wrap around with what we're dealing with.
Brilliant explanation, thank you. By chance, are your diagrams available somewhere?
Great video, really helped me understand this
Great video. Big thanks
I just started with the whole txt2img models and therefore I really thank you for this video!
It was great to watch and I got a lot of information. I really appreciate it.
Also, would you mind sharing the excalidraw board? Either as an exported png or as a file; it would be a great resource for my own documentation, if it's not too much to ask.
Great video! Thanks!
Thank you for model explanation. Really loved your content so far.
At the end of the civitai comparison, I'm curious whether, if we split the data by use case, object embeddings vs style embeddings would show different performance/preference.
that's a super hard question to answer :(
All different now! LoRA is by far the best all round method now and hugely gaining popularity... Great video by the way, excellent explanations!
amazing content, insta subbed !
Great video and explanation! I really want TI to be the future but I agree, the quality of dreambooth training is usually better.
thanks for the data point!
Bro, this video is really helpful!
what a great list of checkpoints you have, a man of culture 🤣
thanks a lot for your effort, great job
Amazing educational video!
Very good video! Thanks
ok you had me @00:27 , would be cool to see a video on civitai
insane work and attention here
You explained this so well. My smooth brain couldn't understand these different methods for the longest time \uwu/
man you're a hero
Amazing, thank you
could you please do a part 2 where you
- explain aesthetic gradients for educational purposes, and maybe provide data on user feedback like you did at the end for the others.
- explain lycoris, which from what I understand is lora + 4 random good ideas, but I'd love to see someone on your level break it down a bit better.
- give us updated data on the other forms now that more feedback is available (you mentioned not having a big enough sample size to judge the newest tech).
that would be insanely helpful. thanks!
Thank you for the really nice explanation
Great breakdown of the details, and pros and cons of each technology! Which model would you use if you plan to train a few characters, then re-use them a lot together to write a visual story frame by frame (as in colored manga)? On a low to mid end laptop GPU...
Much appreciated! Having these nuggets of wisdom to operate AI models with is a huge contribution to society! Props to you.
Amazing explanation
What an insanely helpful video! I'm still holding out hope the quality of hypernetworks improves (I've had fantastic results with it, but updates often break it and nobody really knows what they're doing so guides are not great)
It shares some of the same advantages as TI (smaller file size, can be transferred between models easily) and I really hate having giant checkpoints just to add single concepts.
I was excited to learn about LoRA, but it looks like it can't be used without first adding it to a checkpoint, so it's lost some appeal for me. Can you train multiple concepts into a checkpoint with LoRA one at a time and have them all retain coherency?
Very interesting video, thanks.
Finetuning is using Dreambooth with multiple text/image pairs, right?