EASILY Create Renders From A Sketch With AI - Stable Diffusion and Controlnet Tutorial
- Published: 26 Jun 2024
- Turn rough sketches into realistic renders using AI and Stable Diffusion. In this video, we'll guide you step by step through the process, saving you time and effort. We will use Stable Diffusion, Automatic1111, ControlNet, and Realistic Vision. Don't miss out on this opportunity to enhance your interior and exterior designs. Watch now and start creating stunning visuals!
🛑 STOP Stable Diffusion is OVERLY complicated...
Stable Diffusion Cheat Sheet 👉 / altarch
This simple yet powerful tool is guaranteed to elevate your Stable Diffusion AI experience and help you produce IMPRESSIVE architectural imagery!
Prompt 👉 INSERT STYLE HERE, architecture, 8k uhd, dslr, soft lighting, high quality
00:00 Turn a Sketch to Render With AI Introduction
00:23 Stable Diffusion Cheat Sheet
00:35 Downloads Shortcut
01:02 Hugging Face Registration
01:12 Github Registration
01:20 Install Git for Windows
01:50 Create AI Folder
02:19 Install Automatic1111
03:00 Download Stable Diffusion
03:33 Download and Install Python
04:25 Relocate Stable Diffusion Model
04:46 Download and Install Realistic Vision
05:19 Download Controlnet Scribble Model
05:34 Install and Start Stable Diffusion
06:48 Install Controlnet Extension
07:28 Install Controlnet Scribble Model
08:00 How to Use Stable Diffusion (Create Renders)
10:00 BONUS TIP!!!
10:16 Important Closing Remarks
Common Errors:
1. Webui-user.bat freezing during installation? You may have accidentally clicked inside the black window and paused it; simply click inside the black window again to unpause it. It does take a long time to load, so give it at least an hour before troubleshooting.
2. Error message "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". Open Notepad, click File, click Open, change the file type to All Files, navigate to the AI folder where webui-user.bat is located, and open it. Edit the line that says COMMANDLINE_ARGS= so it reads "COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test". Save the file, then try to reopen webui-user.bat.
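For reference, here is roughly what webui-user.bat looks like with the arguments from fix #2 applied. The surrounding lines mirror the stock Automatic1111 file, but exact contents vary between versions, so treat this as a sketch rather than an exact copy:

```shell
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test

call webui.bat
```

Only the COMMANDLINE_ARGS line needs to change; leave the other lines as they are in your copy of the file.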
___
∴ S U P P O R T
1️⃣ Subscribe 🎁
2️⃣ Like 👍
3️⃣ Support me on Patreon 👉 / altarch
∴ L I N K S
Stable Diffusion Cheat Sheet / altarch
ALL DOWNLOADS ZIP FILE / altarch
Hugging Face huggingface.co/
Github github.com/
Git for Windows gitforwindows.org/
Automatic 1111 github.com/AUTOMATIC1111/stab...
Stable Diffusion v1-5 huggingface.co/runwayml/stabl...
Python www.python.org/
Realistic Vision (3.0 Update) civitai.com/models/4201/reali...
ControlNet Scribble Model huggingface.co/lllyasviel/Con...
ControlNet Extension github.com/Mikubill/sd-webui-...
As someone who understands very little about this topic, this video was an amazing tutorial. Thank you! It should be said, though, that if your computer doesn't have an Nvidia GPU, it will not generate images from sketches in a reasonable amount of time. After following this tutorial and spending about 10 solid hours playing with Stable Diffusion and researching how to get SD to generate one image in less than 45 minutes, the only solutions I can find are to pay for Google Colab or upgrade the computer's GPU. If anyone can provide information proving this research wrong, I'd love that and would be sincerely grateful.
As of now I believe: if you're like me and wanted to use Stable Diffusion to create architectural renders for free on a basic Intel-GPU laptop, it won't happen. More likely you'll have to spend your money on Midjourney.
Best tutorial I've watched in a long time, and I watch a lot of them. Thanks a lot. Great work!
Super well explained and organized, thank you so much. More architecture-related vids would be great!
Thank you! 😄
Man this was one of the best I have seen!
Thank you for your time and effort.
Exactly what I was looking for, thanks so much!
Glad I could help!
@@altArchitecture I was stuck on Realistic Vision; there are not too many tutorials that go all the way through the process like this.
@@JS-zd4yp Thank you. Yeah, I noticed they all assumed you know Stable Diffusion already. I mean, 90% of people don't have the time to learn it…I hope I made it easy and clear.
Wow! really cool. Thanks!
Thanks for the tutorial sir 🙏
Of course! Thanks for watching
This is a really cool video. I want to try it on bathroom designs. Thanks!
love ur intro and branding
Thank you very much!
Amazing Video, thanks man
I'm happy you liked it!
THIS... is EXCELLENT !!!! And I mean it 👍👍👍🙂🙂🙂
Thank you!!! 👍👍
It helped!
Thank you
You're welcome! 👍👍
Hello, thanks for the great tutorial, I was wondering if I could download it on my MacBook?
Great tutorial, can't wait to use it! Having trouble getting past the Stable Diffusion "Install from URL" step at 7:08. You said it might take a lot of time; I've been processing for 8 hours or so. Should I keep waiting?
Were you able to resolve your issue?
May I ask why you chose 1.5 over 2 and XL?
Super awesome tutorial. Easy to follow. I pasted the Scribble Model into the model's folder, but I don't seem to have any files in there whereas you have a ton of files in there. Also, have it processing an image but it's taking about 20 minutes for one image. Is this correct?
It's okay. I have more models to use in SD. Remember when you downloaded the scribble model? On that same page is a long list of other models you can try out. There are a lot of tutorials online for them, but in my opinion the scribble model is the most useful for architecture at the moment. My computer is decently fast, so that could be why it's processing quickly.
Do you have a workflow tutorial, or are you interested in making one, that also generates orthogonal views / model sheets from the initial sketch? I know there are things like CharTurner, but so far it always works based on text input only.
@altarch when I click batch, I get a runtime error not able to use GPU...thoughts?
can you also make a lumion + mid journey/stable diffusion workflow video?
This is a great idea. If you render out a Lumion scene in all white/studio then import into stable diffusion you can practically create a render from your own 3D model.
Hello. Thanks for the video. Which models would be good for image to image, e.g. input=empty room -> output=same room with furniture and decoration or input=photo of my living room -> output=redesign of my living room with new furniture and decoration?
I’d have to go with empty room > room with furniture. The input with just bare walls, floors, windows, etc will make a better canvas to be populated.
Is it possible to do this with midjourney? thanks!
Hello, thanks for the video. I downloaded the sd-webui-controlnet extension but it won't show up in my Stable Diffusion. Is there anything I could have missed? Thanks.
Does stable diffusion with control net only work with nvidia?
hello, can you please help me?
i get this message at the end..
Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.
what should i do?
At the end of running webui-user.bat for the first time, when it takes a long time to install? If not, give me a timestamp from the video so I know what step you are on or where you are getting the error.
Hi, thanks for the video, I watched this and your Midjourney one. So is the essential difference between Stable Diffusion and Midjourney that, when starting from a 3D sketch, Stable Diffusion will retain the original geometry, spatial parameters, and contents (e.g. furniture) and render it, whereas if you upload the same 3D sketch to Midjourney it will use the image as a guide and reimagine the spatial parameters and contents? Or is there a Midjourney prompt that will ask it to retain the spatial parameters of the sketch? Thanks.
Hi Helena, that's a really good analysis, and you're pretty much correct. At the moment Midjourney isn't as precise as Stable Diffusion at getting exactly what you want, but in my opinion it can produce very inspiring images quickly and easily.
Ok thanks. Guess I need to dive into stable diffusion then! Midjourney looked easier so I started there 😂
Awesome - I tried it and it works . Now what's the easiest way to go back to it after I logged off ? Thank you
Check out the bonus tip at the end! @10:00
Could I do the same test for a building's facade instead of its interior? Or is Midjourney better for that?
I am wondering the same thing; can you guide us on that?
I got through all the steps, but unfortunately when I went to generate some images I got this error message.
FileNotFoundError: [WinError 3] The system cannot find the path specified: ''
Error when trying to run webui-user. Getting this message:
Couldn't launch python
exit code: 9009
stderr: 'python' is not recognized as an internal or external command,
operable program or batch file.
Anyone know what to do?
Hello, thank you first of all for sharing this. I'm having trouble in Stable Diffusion: next to Preprocessor, the Model area doesn't have any options; it just says None. Can you help me resolve this?
Try the section titled "Download Realistic Vision" @4:46 again and let me know if you missed a step!
Thank you!
Are you still using it? I successfully downloaded it but am not able to generate any image yet.
What GPU do you have?
Correction: I fixed the first error. Now I'm receiving the following: 'NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."
I changed the setting to "upcast cross attention layer to float32" but no luck
I am having the same problem. let me know if you have any luck please!
Hey! I followed all the steps but there's no scribble model option for the model when I upload the sketch. It just says none. Any idea what the issue could be? Thanks for the video
I hit the refresh button next to it and it worked👍
Great I’m happy it worked out!
Not working.
NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Unfortunately there is a problem, when I run webui-user it gives an error: Couldn't launch python. exit code: 9009. Could you please help with it?
Every single one of these "simple" methods is like this: You want to know if it's going to rain? Here's what you need to do: take a shovel, a raincoat, and a fishing pole! Place an alarm clock in your fridge! Now take the next bus to the second city nearby and wait in the woods till Friday...
BRUH, I JUST WANT TO KNOW IF IT'S GONNA RAIN OR NOT!!!
Hah! Right there with ya
RealisticVisionV20 doesn't show up as available at the checkpoint (timestamp 8:10). How do I fix this?
It's possible you may have skipped this step: 4:46 Download Realistic Vision (and move it into the checkpoints folder).
It says "RuntimeError: Not enough memory, use lower resolution (max approx. 640x640). Need: 0.4GB free, Have: 0.1GB free" when I tried to generate from the prompt. Could you tell me what I should do to increase memory?
I found a possible fix online. Please report back if it doesn't fix your problem. It's a lesser-known bug with SD. Right-click the webui-user file and select Edit. Change the line from "set COMMANDLINE_ARGS=" to "set COMMANDLINE_ARGS=--medvram".
Thank you for your attention; when I open the file with WordPad, there is no line exactly like the one you mentioned. Should "export COMMANDLINE_ARGS="--medvram"" be changed in this situation, or do I need to add the exact line to the file? @@altArchitecture
Hey, we're loving what you do here!! we'd like to collaborate with you. Please let us know how we can get in touch.
Can you do the opposite, turn image into outline sketch?
Great question! I’ll look into that.
🌹
Why thank you very much 🕺🏻 🌹
possible with MJ?
Midjourney isn’t able to do it as well as stable diffusion
I was able to do the downloading steps but got this Error. Can anyone help?
OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 4.00 GiB total capacity; 2.85 GiB already allocated; 0 bytes free; 2.89 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Time taken: 9.34s | Torch active/reserved: 2935/2968 MiB, Sys VRAM: 4096/4096 MiB (100.0%)
Try decreasing your batch size to around 15. It's an option in stable diffusion.
@@altArchitecture Thank you. I tried it already and it didn't work.
@@stafescritorio394 What happened?
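The OutOfMemoryError quoted above also suggests tuning the PyTorch allocator. One thing worth trying, based on the error text itself rather than on a step from the video, is setting PYTORCH_CUDA_ALLOC_CONF in webui-user.bat before the launch line (the value 128 here is an illustrative starting point, not a tested recommendation):

```shell
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
set COMMANDLINE_ARGS=--medvram

call webui.bat
```

On a 4 GB card, combining this with --medvram (or --lowvram) and a smaller batch size gives the best chance of getting under the memory limit.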
Good video, but this grid background is absolutely terrible to look at.
Thanks for your feedback! Do you think making it larger would be better?
Paid?
Hi there! I need some help here. Around minute 6, I did the whole Windows batch file process; it downloaded everything, then I clicked the space bar, and now I get this (and cannot find the "running on local URL" address): venv "C:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Traceback (most recent call last):
File "C:\AI\stable-diffusion-webui\launch.py", line 48, in
main()
File "C:\AI\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "C:\AI\stable-diffusion-webui\modules\launch_utils.py", line 356, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
can you please help? i'm almost there!
I am facing an issue; please tell me what to do.
"venv "C:\Ai\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.4.0
Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Traceback (most recent call last):
File "C:\Ai\stable-diffusion-webui\launch.py", line 38, in
main()
File "C:\Ai\stable-diffusion-webui\launch.py", line 29, in main
prepare_environment()
File "C:\Ai\stable-diffusion-webui\modules\launch_utils.py", line 268, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . ."
This is an error I received while trying to use SD on my 10-year-old laptop. This fix did not help me but it helped others. Are you running this on a slow/old computer? Try this fix listed in the video description:
Error message "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". Open Notepad, click File, click Open, change the file type to All Files, navigate to the AI folder where webui-user.bat is located, and open it. Edit the line that says COMMANDLINE_ARGS= so it reads "COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test". Save the file, then try to reopen webui-user.bat.
@@altArchitecture The "--skip-torch-cuda-test" command worked. Thank you, I was not expecting a reply. Thank you very much!
@@user-ye1qx8zx5l That's great! I'm happy it worked out. Thanks for letting me know.
@@altArchitecture Now I am facing this issue:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
@@user-ye1qx8zx5l Try this command instead "--skip-torch-cuda-test --precision full --no-half"
Hello there, I tried all the steps one by one, and when I launch webui-user.bat I get the following message:
Couldn't launch python
exit code: 9009
stderr:
Python could not be found. Run the shortcut without arguments to install it via the Microsoft Store, or disable this shortcut under
Launch unsuccessful. Exiting.
I installed Python 3.10.6 as recommended.
Any recommendations?
Thanks in advance!
When installing Python, you were asked if you wanted to add it to your system PATH (the first question when installing Python on your computer). Try uninstalling and reinstalling with that option checked and see if that helps.
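A quick way to confirm whether the installer actually put Python on your PATH (the missing-PATH entry is what produces the "exit code: 9009" / "'python' is not recognized" errors above) is the small check below. It only looks up the command; it doesn't install anything:

```python
import shutil

def on_path(cmd: str) -> bool:
    """Return True if `cmd` resolves to an executable on the system PATH."""
    return shutil.which(cmd) is not None

# webui-user.bat fails with exit code 9009 when this is False:
print(on_path("python"))
```

If this prints False in a fresh terminal, re-run the Python 3.10.6 installer and tick "Add Python to PATH" on the first screen before clicking Install.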