Get early access to videos as a Patreon supporter www.patreon.com/sebastiankamph
Happy to be supporting your great work - Thank you!
Thank you very much! That means a lot to me 🥰🌟
Can you try the same thing for a person? What about 3 people, with 3 shadows? And what happens if you mix it with seg?
One hint I want to share: QR monster considers RGB #888888 as “neutral” or “transparent”. So if you’re using a black logo on a white background, it will often result in overexposed pictures that are too bright - maybe that’s what you want but that’s the effect of the white background. Try putting logos on a grey background for a more neutral process
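If you want to try that, here's a minimal Python/Pillow sketch (filenames and the 768x768 size are placeholders) that drops a logo onto a #888888 canvas:

    from PIL import Image

    # Neutral grey canvas; #888888 (136, 136, 136) is what QR Monster reads as "no signal"
    canvas = Image.new("RGB", (768, 768), (136, 136, 136))

    # Logo with an alpha channel (placeholder filename); paste it centred using its alpha as the mask
    logo = Image.open("logo.png").convert("RGBA")
    x = (canvas.width - logo.width) // 2
    y = (canvas.height - logo.height) // 2
    canvas.paste(logo, (x, y), logo)

    canvas.save("logo_on_grey.png")

This assumes your logo file already has a transparent background; if it's black-on-white you'd first need to knock out the white. Feed the saved image into the ControlNet unit as usual.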
You're a legend. I was having issues with text in my generations and the high-res fix was the EXACT thing I needed!! Also I was upscaling my images before as well and you've saved me an extra step. Thank you!!
Another good video, you provide the clearest instructions. One hour into using this model and I already got offered a job using it!
Wow, that's fantastic! And thank you for the kind words 😊🌟
My man spends 50% investigating new Stable Diffusion topics and the other 50% researching for more dad jokes 😂
I can assure you it's more like 25/75... 😅
The atom dad joke is the go-to dad joke for ChatGPT.... I doubt it takes him long!
Man, I didn't even know. I rarely use AI for the dadjokes! 😅
there's a new sd extension out that automatically converts dad jokes into realistic images
Please do a ComfyUI tutorial on this. Thanks!
Big thanks to you, I would really appreciate a video on how to do this with ComfyUI.
Kamph, you're my hero... I learn so much by replicating some experiments and exploring the ideas further.
Good one - for what you're doing, the T2I Canny XL model works well, with the Canny preprocessor enabled.
That dad-joke really got me. Also, everyone knows atoms got made up as well though ;)
Amazing creative exploration, thank you!
LOVE IT
Just downloaded that cn model for qrs. And 10 min later you post this 😂😂 will try it out for sure!
Thank you very much for the tutorials and styles.
My pleasure!
Great job Sebastian. Thank you so much
Would be awesome if you did an updated video of this but with ComfyUI! :)
Is there a dimension choice?
This is so awesome and well explained. Thank you!
Thank you very much, happy to help :)🌟😊
great video, quick question, why do we need the yaml files as well as the regular one?
Amazing tutorial!
Glad you think so! And thank you 😊🌟
Amazing idea love it !
Thank you!
@@sebastiankamph I would love to see a new version of this tutorial but with ComfyUI! What would you do as a workflow :O
@@TheBlackBaku14 I would think it's exactly the same, just built in comfy
SD has nice options for prompt lists like the ones you have. Can we do something like that in ComfyUI? Ready-made prompts you just pick from a list?
So far I've done similar experiments with the lineart ControlNet. In which areas do you think the QR Code Monster model creates better results?
amazing, thank you!
@sebastiankamph hey man - how do you add styles to sd? Like your cinematic style? Thanks
Hey there, have there been any updates to achieve this effect with ComfyUI? Or do we still need to use the Stable Diffusion webui for this ControlNet model?
Have you had any luck doing this with img2img? It'd be neat to be able to take an existing photo and then drop a logo into it.
Love this, thanks for sharing
Thanks man, used this tutorial to create an epic metal t-shirt. Black and white skull in ControlNet, prompt: lost city of Atlantis, plus one of your styles.
Hey man. I don't see the "controlnet" folder under the extensions folder. I only have the "put extensions here" .txt file. I was watching your video about installing it, but I didn't install any of the models from Civitai, because I only wanted to do the logo task you show here. Is it necessary to install some models from CivitAI to go forward with it? Best!!
Thank you!! One thing I didn't understand: what does the QR code model have to do with any of this?
Hi, you said in the video the prompt styles are available for free, but when I followed the link it requires you to be a paid subscriber.
....again: helpful and inspiring! Big FANX
My pleasure!
what's the best way to get a 16:9 aspect ratio? It does not work with this setup. Thanks.
I'm not getting any image on the right side, just a grey patch... any idea how to solve that?
Hey man, I have seen multiple SD webui users who used some kind of text-correction/text-suggestion when writing the prompt text. I can't find how to enable that feature anywhere. Can you help?
Can I do this in image 2 image tab instead? For example, I found a really cool scenery wallpaper and I want to use this to get an apple logo on it. Is this possible? Thanks as always
You can get a similarly themed result but not as powerful imo.
Love this! 😍
how can I turn an image to black and white? a face, for example...
Does anyone know what the purpose of the yaml files is?
Can we use this in ComfyUI, and how do we set it up? Thank you sir
Great, thank you very much for this tutorial! But what is the "cinematic" setting?
Just some of my preset styles I use
How do I open the program? I have followed all the steps, but I couldn't find anything in the extensions folder 😢 I did everything, but I can't find the same files as yours. I'm asking for help. It is two o'clock in the morning, but to no avail.
Check out my Stable diffusion for beginner's full guide 2023 to get you running
I've done this exact thing with the older controlnet models before. What is the QR model doing differently here?
I'm trying to understand that myself. The QR model was trained specifically for QR codes, but I find using LineArt (with invert) or Canny also works.
Would this be possible on a video?
I've followed your steps, but there is not a hint of my logo in any of my attempts. Even when the blurry image first appears, my logo is not there at all. Ideas? The only thing I'm missing from what you did is the "cinematic" option. I have no options when I click that box
When I make the control weight 2, and select "ControlNet is more important", I get just a hint of my logo shape coming through. Do I need to weight the logo more, somehow? Is the fact that my graphics card is only 6gb a factor?
Did you go through hires fix as well?
Hey brother, I followed your video about how to install this and now came to this video. You are putting files in the extensions folder, but in my case my extensions folder is empty, so how can I put this file into models??? Please help!!!!
You can create the folders. But if you have no extensions folder, you probably have no extensions. Go into extensions tab and install ControlNet first.
it says ModuleNotFoundError: No module named 'cldm' , what should I do?
what's the best way to make the black and white images? can we do that in sd?
Yes, grab a coloring book checkpoint or LoRA.
Thanks for the tip @@Fiqure242
just use monochrome or greyscale in positive prompt
Unless you mean for the logos, in which case just use Photopea and the paint bucket tool.
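If you'd rather script the conversion than use an editor, a minimal Pillow sketch (the filename and the 128 threshold are arbitrary choices) to turn an image into pure black and white looks roughly like this:

    from PIL import Image

    # Convert to greyscale first, then threshold to pure black/white
    img = Image.open("face.png").convert("L")
    bw = img.point(lambda p: 255 if p > 128 else 0)

    # Save as RGB so the webui and ControlNet preprocessors accept it without fuss
    bw.convert("RGB").save("face_bw.png")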
The prompt styles are not free in your Patreon account. How do we get them for free?
These are now changed to Patreon subscriber only
Love this! But in my tries the logos are very often faint, almost blurry. Any idea how to fix this? Is this more likely about the model, the VAE, CFG, steps, control weight, ...?
Raise the weight and use hires fix. I talk about this in the video.
good👍
In my extensions folder I don't have the 'sd-webui-controlnet' folder like you do. I have downloaded the two files from Hugging Face. Am I missing something? Do I just create sd-webui-controlnet and models and drag the files in there?
You need to install the extension ControlNet. This guide doesn't show that, check my ControlNet install guides.
Doesn't seem to work; my Stable Diffusion checkpoint is the control_v1p QR Code Monster one. Is this correct? If not, what other checkpoint should be there instead? It gives me an error message of RuntimeError: 'LayerNormKernelImpl' not implemented for 'Half'. What does this mean?
QR Code Monster is for ControlNet. You need a regular SD model as your base model
Use an SD 1.5 base model, and within ControlNet, select the QR Monster.
You'll need to put the ControlNet models in the ControlNet extension's models folder, not load them as a Stable Diffusion checkpoint.
Also note you can't use the 1.5 ControlNet models with SDXL. You'll need to download the T2I or other XL ControlNet models.
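For reference, the layout usually ends up looking something like this (a sketch assuming a default AUTOMATIC1111 install; the EpicRealism filename is just an example of a regular checkpoint):

    stable-diffusion-webui/
        extensions/
            sd-webui-controlnet/
                models/
                    control_v1p_sd15_qrcode_monster.safetensors
                    control_v1p_sd15_qrcode_monster.yaml
        models/
            Stable-diffusion/
                epicrealism.safetensors   <- regular SD 1.5 base checkpoint goes here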
Hey, this is great. I'm just wondering, can you link me to the Epic Realism model download? I can't seem to find a link, or maybe I'm just not seeing the right files I need.
Would be a massive help, thank you.
Civitai.com -> EpicRealism
I've been trying to make QR codes with this but I didn't manage to make even one scannable code. Do you have a tutorial on that?
Of course ruclips.net/video/HOY5J9UT_lY/видео.html
Hello Kamph, thank you for all these useful videos.
I tried your method and did everything like you, but I am getting this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
How can I fix this please?
Can't mix SDXL and 1.5. The 2048 vs 768 in that shape error is the giveaway that your base checkpoint and your ControlNet model come from different SD versions.
@@sebastiankamph Thank you, it worked perfectly! Keep the good videos flowing, we're getting addicted haha.
I suggest a topic: how to use Deforum on InvokeAI?
does the input photo require a background/have to be the same dimensions as the output generation also? (e.g. square input image if outputting 512x512)
No, you can use Pixel Perfect, but that will change where it ends up on your final image.
I don't know if you'll ever read this, but I'm from Germany and have a Samsung Galaxy S10+ (released in 2019), and I tried downloading some pictures and messing around with them. I downloaded three illusion pictures and wanted to make a slideshow with them in TikTok. But while I was creating the photo slide, my S10+ went nuts: it started flickering on and off, not responding, and green flickering lines started to appear. Luckily it stopped when I exited TikTok in the one moment the screen was on. My heart almost stopped; I thought I had just lost my phone and I wanted to cry. Please could you tell me why this happened to me?
Sorry to hear that. I have no idea. Most likely one of 2 things. You had a software error, ie an app that was broken, or your phone is starting to give up.
I don't have any models in the ControlNet tab
You need to download them
How would you go about making it 4k?
ultimate sd upscale with the cnet model?
Probably just a regular upscale. I don't think the cnet tile upscale is warranted, but it can be used for better results.
Any reason why you downloaded the V1 and not the newer V2 files? The V2 folder is kinda hidden, it's a subfolder on Hugging Face.
Oh, that was just a mistake when recording. Feel free to get the v2.
awesome, thanks!@@sebastiankamph
Man oh man, it's been a while since I touched Stable Diffusion. I think it's time to get back at it 😂😂
It's been evolving for sure! 😊🌟
Greetings Sebastian, thanks for the amazing video! Where can I get a list of styles?
See video description
@@sebastiankamph Thank you 🤗
Why did it take so long for you to make those images? For me it takes like 2 seconds to create one lol
OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 4.00 GiB total capacity; 3.02 GiB already allocated; 0 bytes free; 3.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I got this error. What should I do next?
Buy a new GPU. Or launch with the --lowvram argument (set it in your webui-user.bat file).
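For reference, a minimal sketch of what that edit can look like in webui-user.bat (assuming the AUTOMATIC1111 webui; --medvram is the lighter alternative if --lowvram turns out too slow):

    rem Reduce VRAM usage for cards with 4 GB or less
    set COMMANDLINE_ARGS=--lowvram

    rem Optional: the allocator tweak the error message suggests
    set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128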
You said "twitter"... :-) I think you meant "X"?!?!?! LOL JK! 🙂 I think we are all conditioned to Twitter still!!! Very informative video as well!! youre a goat!
quick question, what does highres steps do?
The number of steps for your highres pass, ie the 2nd/final pass of the generation.
If we leave it at 0, it matches the normal steps though @@sebastiankamph
Can this be done in SDXL?
Yes, but not with that model. Try the Canny sdxl model. It won't be exactly the same, but similar.
can we have that cinematic style preset?
Yes, it's in the description
I think the discord link is not working... =/
Yeah, lost my boosts. Check in channel profile
Sebastian, I would love to see you shift gears a little bit and start making videos about WarpFusion.
Not a bad idea
@@sebastiankamph There are so many content creators making videos on Stable Diffusion, and the technology is what it is. There is not much more you can add. But Warp has so many different possibilities in its settings, and there is no content creator making advanced tutorials on how to warp, only the basics.
First-rate as always. The video was barely out before it was already being copied by others. You can take that as a compliment, too. Thanks.
Thank you! It's hard work as a smaller creator 😊🌟
LOL! Did you get a letter from Apple for your other thumbnail? Or is MrBeast just everyone's cash cow?
Just trying to find what people will click on 😁
"Atoms make up everything" has become an AI joke. Artificial Dad Intelligence?
I've learned that now, but I didn't know that before 😅. ADI surely must be a thing, no? 😁
@@sebastiankamph I noticed this while watching way too many videos about running LLMs locally. A common task was to write a simple web app that tells jokes when you press a button. It seems like half the time the joke would be this one ("atoms make up everything"). Btw, running local LLMs is getting a lot better.
Now this dadjoke was fire (it caused mayhem in Hiroshima)
Glad you liked it! 😊🌟
Huh, what a pity man, why does QR Monster suck on my side? I just type a 13-word-long text, make it a PNG with a white background and black text, then in ControlNet I try None and then Lineart (the 2nd one you show), and I always pick QR Monster as the CN model. I tried the DreamShaper model but my results are always super crap, not even an illusion happened. Why don't you do it with text, long text, please?
The words need to be big enough. If they're small you can try to increase the weight to the max.
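If it helps, here's a minimal Pillow sketch (the font path, size, position, and word are assumptions; adjust for your system) that renders big black text on the neutral #888888 grey as a ControlNet input:

    from PIL import Image, ImageDraw, ImageFont

    # Neutral grey background so empty areas stay "transparent" to QR Monster
    img = Image.new("RGB", (768, 768), (136, 136, 136))
    draw = ImageDraw.Draw(img)

    # Point this at any TTF on your system; the bigger the text, the stronger the effect
    font = ImageFont.truetype("arial.ttf", 160)
    draw.text((40, 300), "ATLANTIS", fill=(0, 0, 0), font=font)

    img.save("text_control.png")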
@@sebastiankamph Please please please make this same tutorial, but easier, in/for Defooocus please??? hmm
I don't think you know how illusions work; you're supposed to lose the image and squint your eyes to see it. That first Twitter one was good.
Feel free to lower the controlnet weight and you will see less of the image :)
Dead birb. 😥
Sadge 😥
So, is the old Twitter logo homeless now? Is it just abandoned? (I'm guessing probably not, because lawyers.)
In the philosophical sense, I guess. From a legal standpoint, they very much still own it 😅