There are so many tutorials out there on Midjourney. But I only trust ONE channel. Thanks fam.
its not midjourney
@@blushingbutterfly7742 I know. I realized as I typed it. Too lazy to fix it. Thanks anyway default!
@@theplayerformerlyknownasmo3711too lazy to fix it, but not too lazy to reply with an explanation?? FIX IT NOW!!
@@theplayerformerlyknownasmo3711FIX THE FERN BACK
@@callum6224 plz
I did this process back when it first came out and you had to rent a monster GPU online. It's cool to see it becoming a more user-friendly process now. Thanks for sharing!
Tried this when it first came out; I got better results using more varied lighting, poses, makeup, etc. Getting more variation in the dataset made the results feel more like me.
Epic video btw, glad to see more tuts popping up for this stuff 👍
Now I can finally make kinky deepfakes of CGMatter. ( ͡° ͜ʖ ͡°) Thanks bro.
Hey! Even though I typed the name in the "Create Embedding" tab, I cannot select that name in the "Train" tab, and it throws an error in the console.
Never really had any damn luck with training, hopefully I'll get something this time with your help
good stuff brother...
I'm getting "Training finished at 0 steps."
How do I fix it?
use batch size 1
I've tried, it doesn't work @@OldToby53
This, combined with Lora and controlnet👌😏
I'm not understanding how some people are getting it to spit these out so quickly. I have a 1060 and 16 GB of RAM, I'm using all of these settings with only 8 pictures, and it says it's going to take 700 hours.
hmm... 1. install Stable Diffusion, 2. generate a hot woman who doesn't exist, 3. start an OnlyFans with a virtual woman that looks real, ???, get rich (maybe)
Discovered you don't need to go into the inversions folder and copy a .pt file; the embedding was already created in the embeddings folder, so just type the name of the embedding that's already in there.
I put in 20 images of my face, and the test images came out showing a fucking TRAIN! Then they got a bit better and showed warped versions of my face, merged with a CANARY! One image was a sewing machine. What the hell? The images I put in were all headshots, very similar to the ones Thom used. How the hell did Stable Diffusion decide that my face looks like an old steam locomotive?
I have the same issue. Did you solve it?
@@hemu8452 Nope. I tried a totally different series of faces and just got really random stuff. Sofas, cats, cars...
Nothing that looked anything like what I had put in.
+
I love the tuxedo hoodie at the end
VERY good step-by-step tutorial, please do more of that!
Dude! You’re in the matrix… now you’re immortal… Woah!
What hardware are you using? Image generation looks buttery smooth. I feel like the training part might blow up my 8 GB MacBook Air M1.
You rock that fake mustache!
Are you planning on teaching us Dreambooth, master? I'd really love to make LoRAs
I always have to click "Train Embedding" again every 15 steps or so. Is there a reason it doesn't keep going on its own?
Hello Thomtutorial, great work as usual. I have a question: you mentioned that you can train on your own art style if you have one (I'm not sure if it was in this video or another, haha). Is the process roughly the same as this?
Although I have 16 GB of VRAM, I get "CUDA out of memory". Is there any way to prevent this?
EDIT: got it working with 512x512. I had to go to Settings in the SD web UI, then "Training", and check the following:
- Move VAE and CLIP to RAM when training if possible. Saves VRAM
- Use cross attention optimizations while training
Funny, I did the same thing yesterday and arrived at almost the same workflow. But I used "Preprocess images" with BLIP for captions to label my pictures, so I could tell the network what in each picture belongs to me.
What GPU are you using? Mine takes at least 1.5 minutes on an RX 580, but here you're just clicking through.
So now I have several questions.
First: can I train the model on one person, then train the next one, and then merge the models somehow, so I could generate an image of me and my wife sitting at the beach, for example?
Next: what's the difference between Dreambooth, Train, and LoRA? What is what, and when do I use which?
Training finished at 0 steps. 🙁 What am I doing wrong? I followed every step (RTX 2070 ti)
Try batch size 1
@@kotsylwester5572 and turn off "Use cross attention optimizations while training" under Settings > Training
@Riya Singh see if there are empty newline characters in your subject.txt file. When I got rid of them it fixed the problem.
Best man!
If you use restore faces, does it overwrite your face and it no longer looks like you? Or does it just fix any errors like weird eyes?
I just keep getting the same error: with a batch size of 8, it instantly spits out "Training finished at 0 steps. Embedding saved to...".
It works with a batch size of 1. I have pretty high PC specs; what could cause this problem? Thanks.
Did you find a solution?
Getting same error
@@riyasingh9280 see if there are empty new line characters in your subject.txt file. when I got rid of them it fixed the problem
Try a smaller batch size
@@jaysee6320 what do you mean by "newline characters"?
This was great, easy to follow. Thank you. If I wanted to train my drawing style, would I do everything the same but choose "style" instead of "subject"? And would the embedding go to the same folder location as your embed from the portrait training? Great tutorial, thanks again.
Can't find textual inversion? Do I need to be logged into Hugging Face or something?
GPU CUDA memory runs out. Any suggestions? I've installed and tried different versions, GitHub repos, and SD setups; nothing is working.
I just know it's veiny and thicc and curved slightly to the right
Man, I did the training up to 1850 steps and the results I got were amazing. However, when I put the embedding file I created in its folder, I get errors and it won't load. No one online seems to have this issue, so I'm kinda SOL.
You say these are 512x512 resolution images, while the tooltip says they are actually 800x800 😄
Woops
@@CGMatter What is the right one?
LoRA or embeddings?
Training finished at 0 steps.😐
Did you find a solution?
@Riya Singh no I'm afraid not
@Riya Singh see if there are empty new line characters in your subject.txt file. when I got rid of them it fixed the problem
@@jaysee6320 You're right, as simple as that.
Does anyone know if the dataset you generate yourself stays on your local machine or does it transfer to some server in the cloud as well ?
Good question. F
It depends on the method you used to generate the dataset. If you're talking about sending the dataset into A1111: no, it will not go to the cloud, as A1111 is just a web UI for the Stable Diffusion model running in Python on your own local machine (unless you're running A1111 in the cloud, in which case A: why tf would you do that, and B: get a proper computer; if you're using the cloud for computation services it's not worth it). If you're using third-party solutions for your dataset, then there's no guarantee.
@@ipodtouchiscoollol alright makes sense.
Your name is thom
thank youuuuuuuuuuuuuuuuuuuuuuuuuu
👑
I appreciate that you explain what all the numbers actually mean 👍 great video mr. matter!
Honestly, I'm really disappointed to see you get so into this AI stuff. I followed you for your genuinely good 3D tutorials, but I just cannot condone AI art as it stands right now; it's rife with moral issues.
It's a tool like any other; the morality depends on how it's used. If someone uses it to create art that rips off someone else's and passes it off as their own, it's not the AI that's at fault, it's the human using it that way. There are also some really cool uses for it, like ANIME ROCK, PAPER, SCISSORS.
Cry me a river.
sad?