8:27 This step with editing image for outpainting is completely unnecessary. Automatic1111 can do all this internally.
1. Put the generated square image into img2img inpainting.
2. Mask the part of image you want to keep, leaving the edges open.
3. Check the following settings:
- Resize mode: Resize and fill
- Mask mode: inpaint not masked
- Masked content: original
- Inpaint area: whole picture
4. Denoising strength: Set to something high like 0.75 and above.
5. Set the image width (and/or height if you need) to be wider than the square. E.g. width 1024, height 512 = horizontal format.
6. Optional: enter a prompt for what you want to appear in the outpainted area. It often works well without one.
7. Generate!
NOTE: An inpainting model is not necessary for this method. I found that even regular ones do a good job.
You can keep doing this as many times as needed. There are also outpainting extensions for A1111, but the above works really well.
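For anyone who'd rather script this, here is a minimal sketch (using Pillow) of the canvas-and-mask preparation that "Resize and fill" + "inpaint not masked" effectively does internally; the 512x512 input and 1024x512 target are just the example sizes from step 5:

```python
from PIL import Image

def prepare_outpaint_canvas(square_img, target_w=1024, target_h=512):
    """Center a square image on a wider canvas and build an inpaint mask.

    White (255) in the mask marks the new side bands to be outpainted;
    black (0) marks the original pixels to keep.
    """
    canvas = Image.new("RGB", (target_w, target_h), "gray")
    x_off = (target_w - square_img.width) // 2
    y_off = (target_h - square_img.height) // 2
    canvas.paste(square_img, (x_off, y_off))

    mask = Image.new("L", (target_w, target_h), 255)  # everything is new...
    keep = Image.new("L", (square_img.width, square_img.height), 0)
    mask.paste(keep, (x_off, y_off))                  # ...except the original
    return canvas, mask

square = Image.new("RGB", (512, 512), "white")  # stand-in for a generated image
canvas, mask = prepare_outpaint_canvas(square)
print(canvas.size, mask.getpixel((0, 256)), mask.getpixel((512, 256)))
# (1024, 512) 255 0
```

The A1111 UI does all of this for you, but the same canvas + mask pair could also be fed to any img2img inpainting pipeline directly.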
Awesome! I'll have to try this ASAP. I had tried the poor man's outpainting script and one extension before, but I wasn't very happy with the results, so I kept using that manual method even though it was getting annoying to do every time. I knew there had to be a better way within A1111, so thank you very much for sharing this!
thank yoouuu
Thank you!
It was not so simple, and I went through more than 300 images, but I finally solved it.
The magic key is the Control weight. In my case it is 1.8.
Any lower than that and I got only beautiful images, but no working code.
Also: Starting Control Step: 0
Ending Control Step: 0.79
And sampling steps: 30
So if your image is not working, play with the Control weight first!
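For reference, those same knobs map to fields in the ControlNet extension's AUTOMATIC1111 API payload. A sketch of that mapping is below; the model name and prompt are placeholders, and the field names are as commonly used by the extension's `/sdapi/v1/txt2img` schema, so double-check against your installed version:

```python
# Hypothetical payload showing where the settings from the comment above live.
payload = {
    "prompt": "a plate of salad, food photography",  # placeholder prompt
    "steps": 30,  # sampling steps
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "model": "control_v1p_sd15_qrcode",  # placeholder model name
                "weight": 1.8,           # the "magic key" control weight
                "guidance_start": 0.0,   # starting control step
                "guidance_end": 0.79,    # ending control step
            }]
        }
    },
}

args = payload["alwayson_scripts"]["controlnet"]["args"][0]
print(args["weight"], args["guidance_start"], args["guidance_end"], payload["steps"])
# 1.8 0.0 0.79 30
```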
Awesome! Congrats and thanks for sharing that!
Amazing content bro. Keep these QR videos coming. The best on RUclips!
Thanks bro! Glad you like them
are you sure the models don't go in the extensions folder?
For scannability, you can superimpose the QR code on top of the generated image and keep the transparency at about 9% to make the produced image scannable.
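Reading that as a roughly 9%-opacity overlay of the original QR code, a Pillow sketch of the trick might look like this (the sizes and colors are just illustrative stand-ins):

```python
from PIL import Image

def overlay_qr(generated, qr, alpha=0.09):
    """Blend the original QR code over the generated art at ~9% opacity,
    nudging scannability without visibly spoiling the artwork."""
    qr = qr.convert("RGB").resize(generated.size)
    return Image.blend(generated.convert("RGB"), qr, alpha)

art = Image.new("RGB", (512, 512), (200, 50, 50))  # stand-in for generated art
qr = Image.new("RGB", (512, 512), (0, 0, 0))       # stand-in for a real QR code
result = overlay_qr(art, qr)
print(result.size, result.getpixel((0, 0)))
```

Each output pixel is roughly 91% art and 9% QR, so dark modules get slightly darker and light modules slightly lighter, which is often enough for a scanner to lock on.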
That's great advice! Thank you
I guess the advice you've given is nothing more than a step back from truly aesthetically beautiful AI art to simple, mediocre cheating that distorts its true meaning.
You and Olivio are the best SD teachers out there
Thank you very much for the compliment!
If the UI I got doesn't show "Karras" just like that, but I can choose it from a "Schedule type" list... would it work?? :) I'm new to this
Please help
They grow up so fast. If you compare those to those of the first generation and this was just a couple of weeks ago
Indeed. Things are moving crazy fast. Wonder what we'll be able to do in a few weeks from now
Can we do this in thinkdiffusion ?
Thanks !
It should work if you are able to add the QR ControlNet models. I know ThinkDiffusion supports ControlNet but I'm not sure if you can add custom models.
Excellent video tutorial. I was following along as closely as I could and finally found my issue: The QR code I was using was too complicated with smaller block sizes (I was trying to make a VCODE QR code). I switched to URL QR code for testing and I immediately had scannable results.
Thanks for sharing this!
Thanks for the compliment and for sharing your experience! Glad that you figured it out!
I love it, just in case people would just look at it rather than scaning it not knowing it was a rickroll
Thank you for confirming that the codes work
@@AiVOICETUTOR some of the QRs link to some of your videos
On iOS, the dedicated, built-in QR Code scanner that can be added to Control Center (via Settings > Control Center) performs much better at detecting QR codes than the native Camera app. I haven't compared its performance against 3rd-party apps, but it's worth giving it a try.
Wow that's a great advice! Never knew that iOS had a built-in QR scanner. Thank you very much for sharing this!
Unbelievably beautiful.
Thank you 🙏
Excellent video tutorial and artworks indeed. It would be really great if you could make a video on comfyui version of this setup and how to make it in Colab.
Thanks and that's a great suggestion! I'm currently trying to figure out Comfy and will definitely look into making AI QR codes with comfy and SDXL. Biggest obstacle right now is that we don't have the required controlnet models for SDXL yet but that's only a matter of time
QR codes work inverted too. If you want to change the tone, you can invert or change your code's colors, but keep in mind to maintain good brightness contrast between the two colors.
Inverted codes might turn out much better on some pictures.
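Inverting the control image is a one-liner with Pillow; the tiny all-dark patch below is just a stand-in for a real QR code image:

```python
from PIL import Image, ImageOps

def invert_qr(qr_img):
    """Swap dark and light modules. Many scanners accept light-on-dark
    codes, and an inverted code can blend better with dark artwork."""
    return ImageOps.invert(qr_img.convert("L"))

qr = Image.new("L", (4, 4), 0)  # stand-in: an all-dark patch
inv = invert_qr(qr)
print(inv.getpixel((0, 0)))  # 255: dark became light
```

As the comment notes, whichever polarity you use, keep the brightness contrast between the two tones high or scanners will struggle.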
Good advice! Thanks for mentioning that
is there any way for me to run this online? I can't install SD and ControlNet on my laptop
Yeah, you can run this online on several sites and services. Judging from the comments on my videos, thinkdiffusion.com seems to be a popular service, but I haven't tried it myself.
My Stable Diffusion outputs do not seem to be reading ControlNet at all; they look nothing like the image I have placed into ControlNet. Is there something I am missing?
A good indicator to make sure that ControlNet is working is to check if there are QR code images next to the generated image output (This is the "raw" result of Controlnet before it gets turned into the image). Otherwise make sure you have Hires fix selected.
Excellent video, brother, but can I have access to your drive where all your prompts are written?
Thanks, try downloading it again. It should work now
Thanks for the tutorial. Let me try that 👍
Hope you enjoy it and everything works fine 👍
@@AiVOICETUTOR I'm using a 6GB GPU, not enough memory to use Hires.fix. Had to adjust the ControlNet parameters to get it scannable.
Great video im having a lot of fun with these. is 512x512 sent to High res fix the same as just generating 1024x1024 images or does high res fix have an added benefit?
Glad you like the video. That's a great question. What I know is that most checkpoints are trained on 512x512 (or 768x768), so in theory you should be getting better results than with 1024x1024. I guess the same goes for the QR ControlNet model.
Everything's working, except when it gets to 100% generated it doesn't show anything. It just goes back to how it looked before I pressed 'generate'.
Two things you could try: 1. Check for potential error messages in the command window. 2. Check the outputs folder and see if any images are stored there.
I make all the settings the same, I enter the prompt, and I add the QR code to ControlNet, but only the image is formed; it does not combine with the QR code. I'm going crazy, please help. At the end of the process there is no QR code there. Why is that? Anyone with an idea, please help.
Do you see an image of a normal QR code next to the generated image? This would indicate that ControlNet is working properly. Also check if you have hires fix enabled. I couldn’t get the qr code to merge with the image when I forgot to enable it
@@AiVOICETUTOR Only the image is created for the prompt; there is no QR code. I enable ControlNet. My version is ControlNet v1.1.231.
I don't understand where the problem is. I am using a Mac M1.
I was finally able to activate ControlNet by following the steps in your other video, but unfortunately I couldn't create a readable QR code. What you did in this video is like a dream to me, because no matter what I did with these models, I couldn't even create a QR code. :(
So it won't even scan when setting the control weight to 2? With Hires Fix enabled? I'll have to check out the Mac implementation someday
I don't have ControlNet folder can I just create one?
You need to install Controlnet which should add the folder. See here: ruclips.net/video/Y1rYTnupZk4/видео.html
This is amazing, but I don't have a discrete GPU. Is there any way to install ControlNet on any of the web-based versions of Stable Diffusion, like Google Colab? Idk what I'm doing tbh.
I am not sure if you can install ControlNet to a Colab manually but there are definitely Colabs that have it installed by default.
@@AiVOICETUTOR Could you link to one or point me in the right direction for how to find them? Not entirely sure how Google Colab works.
Sure, check out github.com/camenduru/controlnet-colab. If you want an even easier solution, there are services like www.thinkdiffusion.com. Hope this helps!
Cool tutorial! Thanks! But for some reason I'm not able to get the same beautiful images as yours. I'm trying your prompts and similar settings and also the same models, but I still get ugly images which do not look like salad or pizza, and are also blurry.
Any suggestions on what could be the reason? Thanks!
Sorry you couldn't get it to work yet. Do you have Hires fix enabled?
This is great! Quick question, what would you recommend for the hardware requirements for attempting this?
Thanks! You only need Win 10 and a dedicated GPU with 4 GB of VRAM: stable-diffusion-art.com/install-windows/#Systems_requirements
Hi, nice videos. I am using Run Diffusion with your same prompts and settings and I cannot get the same results you are getting. I don't know why; it's not even close.
Try using the same website for the QR code; the QR code needs white space all around it.
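That white margin is the QR "quiet zone", which the spec requires to be at least 4 modules wide on every side. If your source code lacks one, a sketch like this (Pillow, with assumed module sizes) can pad it before feeding it to ControlNet:

```python
from PIL import Image, ImageOps

def add_quiet_zone(qr_img, modules=4, module_px=16):
    """Pad a QR code with a white border. The QR spec calls for a quiet
    zone of at least 4 modules on every side; module_px is the pixel
    size of one module in your particular image (an assumption here)."""
    return ImageOps.expand(qr_img, border=modules * module_px, fill="white")

qr = Image.new("RGB", (448, 448), "black")  # stand-in for a real QR code
padded = add_quiet_zone(qr)
print(padded.size)  # (576, 576): 448 + 2 * 64
```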
Hi, thanks. From what I can tell about Run Diffusion it’s a fully managed instance of automatic1111. So maybe not everything functions the same way as the normal one. Do you see the command window? And if so does it show any errors when running the prompts?
@@AiVOICETUTOR Could you please make a video comparing Run Diffusion vs Stable Diffusion with the same prompts, settings, ControlNet, and checkpoint? I am pretty sure Run Diffusion is not working the same as Stable Diffusion, and we are paying for it. I am telling you, I am using the same everything you are doing and I am getting shitty images, but I can't compare them because my Mac won't run Stable Diffusion. Brother, if you do that it will help a lot to know if we are paying for something that is not really working 100%.
Hello... I'm really enjoying the video you uploaded. It has been very helpful. Thank you so much. I have a question... How did you create the QR Code inside the car at the end of the video you just mentioned? Thank you so much for your response. Have a great day!🤩
Hello, I'm very glad you liked the video! The QR code in the car was created with the last prompt saying "sports car" in the prompts list (pastebin.com/nqxCh4hP). I made it in 512x512 first and then outpainted it to widescreen. Hope this helps, and have a nice weekend!
@@AiVOICETUTOR Thank you so much!!! Have a Great Day!!!
@@AiVOICETUTOR Oops, I have a question again... How did you do the moving heart in the QR code?
Motionleap. You can see some of it here: ruclips.net/video/faETMOvFUq8/видео.html
@@AiVOICETUTOR Thank you...!!! 😍
Hi, I face a problem: I can scan the code using the camera, but when I save the photo I can't scan it anymore 😭
Hey, I have an error "ModuleNotFoundError: No module named 'cldm'". I can use any other models without problems, but this one with the QR code doesn't work. Do you have any idea?
Hey did you download the .yaml file of the model? This seems to suggest it's related to the .yaml: github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/7784
@@AiVOICETUTOR Thx, just found out I put it in the wrong folder... I'm new to AI in general, so basically a noob error =)
Happens to all of us :) Glad you managed to figure it out!
I'm getting an error that says "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)".
Do you know what I can do?
Make sure you use the same resolution as in the video
Still having the same problem @@AiVOICETUTOR
Sorry for the late reply. Maybe you are using a different model then?
@@AiVOICETUTOR I tried using sd 1.5 and sdxl, with no luck
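For what it's worth, that particular shape pair usually hints at a base-model mismatch rather than a resolution problem: 768 is the text-embedding width used by SD 1.5 components, while 2048 is SDXL's, so pairing an SD 1.5 QR ControlNet with an SDXL checkpoint (or vice versa) can produce exactly this matmul error. The interpretation is an educated guess, but the arithmetic behind the error is simple:

```python
def can_matmul(shape_a, shape_b):
    """Matrix multiplication A @ B requires A's column count to equal
    B's row count; the RuntimeError fires when they differ."""
    return shape_a[1] == shape_b[0]

# The shapes from the error message: 154x2048 looks like SDXL text
# embeddings (2 * 77 tokens, 2048-dim), while 768x320 is an
# SD 1.5-sized weight matrix.
print(can_matmul((154, 2048), (768, 320)))  # False: 2048 != 768
print(can_matmul((154, 768), (768, 320)))   # True: SD 1.5 embeddings fit
```

So the practical check is that the checkpoint and the QR ControlNet model were trained for the same base version.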
The best, thanks 🔥✨
Thank you for watching!
Hi, could you please make a video comparing Run Diffusion vs Stable Diffusion with the same prompts, settings, ControlNet, and checkpoint? I am pretty sure Run Diffusion is not working the same as Stable Diffusion, and we are paying for it. I am telling you, I am using the same everything you are doing and I am getting shitty images, but I can't compare them because my Mac won't run Stable Diffusion. Brother, if you do that it will help a lot to know if we are paying for something that is not really working 100%.
Hi, I can't promise anything but I'll look into Run Diffusion
You mean that instead of QR codes you can use other black-and-white images?
For example a text can be made of pizza. Or a company logo!
Try it! :)
Thanks for sharing! Love that there's so many cool things to try out and figure out with these tools :D
Good
Thanks
Not a single one of the batch of generated QR codes would scan. Not from any of the models, with different links, following the instructions step by step. My phone reads any other QR code instantly, just not the generated ones. Something smells fishy here.
WHERE IS THE AI VOICE INFORMATION THAT YOU'RE USING?
It's in this video: ruclips.net/video/5i_Pyw0gH-M/видео.html&lc=UgwxRUz7ooeVnaWuvqd4AaABAg
wow
None of these scan, so don't actually do this. You need to make the QR code signal a lot clearer if you want these to scan on a normal Android phone.
Thanks for your feedback and sorry they aren't scanning for you. Best advice I can give is to reduce the size of the codes before trying to scan. And use a third party scanning app.
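The "reduce the size" tip works because downscaling averages the stylized detail back toward the underlying dark/light modules. A minimal Pillow sketch of it (the 256px target is an assumption, not a magic number):

```python
from PIL import Image

def shrink_for_scanning(img, max_side=256):
    """Downscale a stylized QR image; averaging neighboring pixels pulls
    each region back toward its dominant dark or light module value,
    which often makes a borderline AI QR code readable again."""
    img = img.copy()
    img.thumbnail((max_side, max_side))  # in-place, preserves aspect ratio
    return img

art_qr = Image.new("RGB", (1024, 1024), "gray")  # stand-in for a generated code
small = shrink_for_scanning(art_qr)
print(small.size)  # (256, 256)
```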