Hey, I'm the author of the SDA768 Embed, glad you like it! It's actually one of my earlier embeds, I should probably run a V2 on it at some point. Thanks for the shoutout! 👍🏼
AI "art" will never replace human art. What you're doing is illegal
The courts decide that -- and what's legal is different in every country. This type of battle happens EVERY SINGLE TIME there is a big new technology. Not everything created by AI uses copyrighted work, either. There are bezillions of free and public domain works (meaning never copyrighted or out of copyright) that can be used with these new technologies, even if the rest gets ruled unlawful later.
I for one am dying to see how it all pans out, or whether it just continues to go back and forth in the courts for the next several decades. In other big cases there have been varying results... settling out of court, laws changing, or a real loss - but sometimes the losing party just moved to a country where their thing was not against the law. Internet companies can do that, so something may be deemed illegal where you are, but NOT where they are.
I love your textual inversions. I haven't gotten quite to the point of training one myself, but I really want a negative text/words/letters inversion ;)
I love making embeddings; I've released a few now, and I'm working on purely negative-prompt embeddings with amazing results.
if you have any cool embeddings pls share on the discord!
Just wanted to point out that Empire also has a negative embedding file you can download. That is likely the reason for the difference; for some reason the negative embedding improves my outputs.
Dude, thank you for going over the technical details.
I was raging with confusion as to why textual inversion takes massively more time than making a new model.
It makes sense that having to train something, essentially from scratch, would take longer than to build on existing knowledge.
I also was not aware that dreambooth is a destructive process. It's very obvious to me now that you've said it, but WOW I did not make that connection before.
2.0 embeddings work great with the 2.1 model
@koiboi Please make a video about training an embedding. There are already some videos out there about TI and how to train one, but no one gives detailed info about settings, learning rate, steps, the ideal number of images (and why), or TI templates for a person's face or a style.
There is so much to cover, but everyone just sets the learning rate to 0.005 and the steps to 15,000, and the results are horrible.
Nice work as always.
You're one of the best YouTubers of all time. Don't change a thing.
Thanks for the great explanation!
Thanks for the recap!! 😇
Amazing tools and workflow ---> spend time trying to replicate a seed with copy-paste, only to think it's garbage because it's not a copy-paste result of the embedding example. Great art, you must have an amazing paintbrush.
Donates a Windows key to the man.
I was looking for exactly this kind of comment XD
Great video and explanation, though Textual Inversion actually came before Dreambooth. It was originally the only way to easily teach Stable Diffusion a new concept with a limited dataset; Dreambooth came out afterwards and was implemented for SD instead of Imagen.
Nice one.... Looking forward to a how-to for Textual Inversion 😃
thanks for reading the paper
Textual inversion - best for training one very specific object or person that you'd like to use on multiple models.
Models - best for training a larger "class" of people or objects, or a certain style.
Great explanation! I am finally understanding how it all works. I have made heaps of successful ckpt models, but embeddings have been a challenge. They only produce the images I train them on; I ask for something different and it spits out the same images. You mentioned using 3-5 images while I have been using ~20. Perhaps that is my issue? 🤪
there are a lot of ways you can do training wrong sadly, definitely have a go with 3-5 images, maybe that will help
@@lewingtonn I went with 5 and it worked much better. I also went with fewer steps. 7500 worked better than 20k. Rather counterintuitive but I'm thinking I overtrained the earlier attempts.
@@Nalestech that's siiiiiiick!!!!
thoughts about Textual Inversion vs Lora?
Thanks very much for such an excellent video as usual; a question for you: in your description of how DreamBooth works, how do the regularization images fit into that explanation? Thanks!!!
What about hypernetworks? Also, what's the difference between dreambooth and real finetuning?
Keep in mind that your model has a massive impact on the TI/embed, both in terms of its creation and its end use. You're using the base SD1.5 model, which is more or less garbage. A decent model will significantly improve your results with any embed you add on top of it.
If anything, watching the images generated as the TI is being trained is pretty funny. Especially if you're training your own face.
I really like the idea of a few KBs worth of data that's easily shareable, but the quality just isn't the same as dreambooth or general fine-tuning... So for now, I'll have to make do with 2GB models or look into merging to cut them down a bit
I read somewhere that textual inversion works much better on Stable Diffusion 2.0 and 2.1 (I haven't tried though). Maybe on future model versions its quality will improve even more?
@@juanjesusligero391 that's true, 2.1 embeddings are already powerful, but for my use case (specific art styles) it's still not as good as a 1.5 dreambooth/finetune. hopefully it gets even better though :)
I seem to be having issues creating hypernetworks in 2.1... as I monitor my textual-inversion images they are all identical, even though I have a training set and everything else set up
10:00 maybe it's the size, because it wasn't square.. I dunno
lol, maybe, there are a lot of ways to make it NOT work
@@lewingtonn hjahahah true buddy
230 would like to hear more on these
Are they trained for a specific sampler? If so, how?
How much VRAM do I need to create textual embeddings? (I've got an Nvidia with 8GB VRAM, I hope I can! ^^)
I'm doing it on an RTX 2060 (6 GB)
@@Z10T10 That's really good news for me! Thank you very much! :D Are you using the automatic1111 repo to create them? Or maybe another method?
@@juanjesusligero391 I'm using the typical A1111 method (the standard Train tab). To decrease VRAM usage, go to Settings and under Training check "Move VAE and CLIP to RAM when training if possible. Saves VRAM." and "Use cross attention optimizations while training".
When training, set "Save an image to log directory every N steps, 0 to disable" to 0; this way it won't generate any images while training, which saves some VRAM. For the other settings I wouldn't claim to be an expert, but I found 15,000 steps with an embedding learning rate of 0.00005 fine, though not the best.
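To make those knobs a bit more concrete, here is a minimal, purely illustrative sketch in plain PyTorch of what a textual-inversion training loop roughly does with those numbers (15,000 steps, embedding learning rate 5e-5, preview images disabled to save VRAM). None of this is A1111's actual code; `denoiser`, `sample_batch`, and the shapes are stand-ins.

```python
import torch

# Stand-ins for the real pipeline pieces (assumptions, NOT A1111's actual code)
denoiser = torch.nn.Linear(768, 768)   # pretend frozen U-Net
embedding = torch.nn.Parameter(torch.randn(1, 768) * 0.01)  # the new token's vector: the ONLY thing TI trains

def sample_batch():
    """Stand-in for loading a training image, encoding it, and adding noise."""
    noisy_latent = torch.randn(1, 768)
    noise = torch.randn(1, 768)
    return noisy_latent, noise

for p in denoiser.parameters():
    p.requires_grad_(False)             # model weights stay frozen during textual inversion

opt = torch.optim.AdamW([embedding], lr=5e-5)  # the "embedding learning rate" from the Train tab
max_steps = 15_000                             # step count mentioned above
save_image_every = 0                           # 0 = never render preview images (saves VRAM)

for step in range(max_steps):
    noisy_latent, noise = sample_batch()
    pred = denoiser(noisy_latent + embedding)  # toy stand-in for denoising conditioned on the embedding
    loss = torch.nn.functional.mse_loss(pred, noise)
    opt.zero_grad()
    loss.backward()
    opt.step()

    if save_image_every and step % save_image_every == 0:
        pass  # this is where a preview image would be generated, costing extra VRAM
```

The step count and learning rate trade off against each other: more steps at a given rate pushes the embedding further, which is also why overtraining shows up as the embedding reproducing the training images.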
Can you explain how we can add a different checkpoint in the Automatic1111 Google Colab?
Can it replace dreambooth?? Can you compare with a real person, not just a style?
They fundamentally work in different ways: textual inversion doesn't add any new images to the model itself. Instead, it finds a piece of text that correlates with the features of an image.
It's like -> you give a bunch of images to the TI script, TI identifies features of the images and then makes associations with the text prompt/keyword.
Whereas dreambooth burns new images into the model itself, by converting images to noise and then having the model turn that noise back into the images. The model has a new concept introduced. That's dreambooth. TI, on the other hand, is just a token; that's it.
So fundamentally, since TI doesn't introduce new images, it's pretty much not recommended for training a non-celebrity person with it. The native SD model simply doesn't have our pictures. Though if you want to introduce new art styles, sure! SD has a lot of art styles in it, and TI only makes better connections/word associations.
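A rough way to picture that difference in code: textual inversion only optimizes the new token's embedding vector while the model stays frozen, whereas DreamBooth fine-tunes (and therefore changes) the model weights themselves. The modules and numbers below are placeholders, not the real Stable Diffusion training code.

```python
import torch

# Placeholder modules, NOT the real SD training code
text_embeddings = torch.nn.Embedding(49409, 768)  # CLIP vocab grown by one for the new token
unet = torch.nn.Linear(768, 768)                  # stand-in for the U-Net
new_token_id = 49408                              # e.g. a hypothetical "<my-style>" token

# Textual Inversion: freeze everything, optimize only the new token's vector
for p in list(unet.parameters()) + list(text_embeddings.parameters()):
    p.requires_grad_(False)
new_vector = torch.nn.Parameter(text_embeddings.weight[new_token_id].clone())
ti_optimizer = torch.optim.AdamW([new_vector], lr=5e-5)
# Output: a tiny embedding file (a few KB); the base model is untouched.

# DreamBooth: unfreeze and fine-tune the model weights themselves
for p in unet.parameters():
    p.requires_grad_(True)
db_optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-6)
# Output: a whole new multi-GB checkpoint; the original weights are changed ("destructive").
```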
@@pipinstallyp 🙏 thanks for the explanation! Are you saying that TI would work better for styles than for a personal character? And that Dreambooth is better at characters than styles?
I might be dumb, but I think your first test was off because of the height x width.
7:48
I should mention that those details are user-generated during upload. CivitAI doesn't auto-generate that from any special data; it's the user's responsibility to put in the correct info when they upload.
There are a lot of users who input incorrect settings.
One glaring example is people who make NAI-based merges and recommend a clip skip of 2, but list "SD 1.5" as the base model
EDIT: Yeah, the user clearly did not put the correct checkpoint. There's no fkn way they got that with SD1.5; it's very obviously an NAI-based model they used.
I love using AI to create, by using my own material to create models. But using existing IP without permission and then selling it is no different than stealing bread. Or thinking that tickets for a concert of your favorite band should be free. These models exist because the artists who made them possible could buy and eat bread. Let's wait until AI gets hungry. I trust it will be smart enough to know whom to eat first 😎