EASILY Create Renders From A Sketch With AI - Stable Diffusion and Controlnet Tutorial

  • Published: 26 Jun 2024
  • Turn rough sketches into realistic renders using AI with Stable Diffusion. In this video, we'll guide you step by step through the process, saving you time and effort. We will use Stable Diffusion, Automatic1111, ControlNet, and Realistic Vision. Don't miss this opportunity to enhance your interior and exterior designs. Watch now and start creating stunning visuals!
    🛑 STOP Stable Diffusion is OVERLY complicated...
    Stable Diffusion Cheat Sheet 👉 / altarch
    This simple yet powerful tool is guaranteed to elevate your Stable Diffusion AI experience and help you produce IMPRESSIVE architectural imagery!
    Prompt 👉 INSERT STYLE HERE, architecture, 8k uhd, dslr, soft lighting, high quality
    00:00 Turn a Sketch to Render With AI Introduction
    00:23 Stable Diffusion Cheat Sheet
    00:35 Downloads Shortcut
    01:02 Hugging Face Registration
    01:12 Github Registration
    01:20 Install Git for Windows
    01:50 Create AI Folder
    02:19 Install Automatic1111
    03:00 Download Stable Diffusion
    03:33 Download and Install Python
    04:25 Relocate Stable Diffusion Model
    04:46 Download and Install Realistic Vision
    05:19 Download Controlnet Scribble Model
    05:34 Install and Start Stable Diffusion
    06:48 Install Controlnet Extension
    07:28 Install Controlnet Scribble Model
    08:00 How to Use Stable Diffusion (Create Renders)
    10:00 BONUS TIP!!!
    10:16 Important Closing Remarks
    Common Errors:
    1. Webui-user.bat freezing during installation? You may have accidentally clicked inside the black window, which pauses it; simply click inside the black window again to unpause it. It does take a long time to load, so give it at least an hour before troubleshooting.
    2. Error message "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check": Open Notepad, click File > Open, change the file type to All Files, navigate to the AI folder where webui-user.bat is located, and open it. Edit the line that starts with COMMANDLINE_ARGS= so it reads "COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test". Save the file, then run webui-user.bat again.
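
    After applying fix 2, webui-user.bat should look roughly like this (a sketch based on the stock Automatic1111 launcher; your file may contain extra lines, and the flags shown are the low-VRAM set from fix 2):

    ```shell
    @echo off

    rem Leave these empty to use the Python and Git found on your PATH
    set PYTHON=
    set GIT=
    set VENV_DIR=

    rem Low-VRAM fallback flags from fix 2 above
    set COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test

    call webui.bat
    ```

    Only the COMMANDLINE_ARGS line needs editing; the rest is the file's defaults.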
    ___
    ∴ S U P P O R T
    1️⃣ Subscribe 🎁
    2️⃣ Like 👍
    3️⃣ Support me on Patreon 👉 / altarch
    ∴ L I N K S
    Stable Diffusion Cheat Sheet / altarch
    ALL DOWNLOADS ZIP FILE / altarch
    Hugging Face huggingface.co/
    Github github.com/
    Git for Windows gitforwindows.org/
    Automatic 1111 github.com/AUTOMATIC1111/stab...
    Stable Diffusion v1-5 huggingface.co/runwayml/stabl...
    Python www.python.org/
    Realistic Vision (3.0 Update) civitai.com/models/4201/reali...
    ControlNet Scribble Model huggingface.co/lllyasviel/Con...
    ControlNet Extension github.com/Mikubill/sd-webui-...

Comments • 92

  • @whynopotatos
    @whynopotatos 6 months ago +3

    As someone who understands very little about this topic, this video was an amazing tutorial. Thank you! It should be said, though, that if your computer doesn't have an Nvidia GPU, it will not generate images from sketches in a reasonable amount of time. After following this tutorial and spending about 10 solid hours playing with Stable Diffusion and researching how to get SD to generate one image in less than 45 minutes, the only conclusion I can reach is that you either have to pay for Google Colab or upgrade the computer's GPU. If anyone can provide information proving this research wrong, I'd love that and be sincerely grateful.
    As of now I believe: if you're like me and wanted to use Stable Diffusion to create architectural renders for free on a basic Intel-GPU laptop, it won't happen. More likely you'll have to spend your money on Midjourney.

  • @kaiserchopp8892
    @kaiserchopp8892 4 months ago +1

    Best tutorial I've watched in a long time, and I watch a lot of them. Thanks a lot. Great work!

  • @jsrender1
    @jsrender1 11 months ago +3

    Super well explained and organized, thank you so much. More architecture-related videos would be great!

  • @Davesivak
    @Davesivak 3 months ago

    Man this was one of the best I have seen!

  • @darkB2266
    @darkB2266 4 months ago

    Thank you for your time and effort.

  • @JS-zd4yp
    @JS-zd4yp 1 year ago +1

    Exactly what I was looking for, thanks so much!

    • @altArchitecture
      @altArchitecture  1 year ago

      Glad I could help!

    • @JS-zd4yp
      @JS-zd4yp 1 year ago +1

      @@altArchitecture I was stuck on Realistic Vision; there are not many tutorials that go all the way through the process like this.

    • @altArchitecture
      @altArchitecture  1 year ago

      @@JS-zd4yp Thank you. Yeah, I noticed they all assumed you already know Stable Diffusion. I mean, 90% of people don't have the time to learn it… I hope I made it easy and clear.

  • @rodrigoArch
    @rodrigoArch 3 months ago

    Wow! Really cool. Thanks!

  • @aryandhani
    @aryandhani 1 year ago +1

    Thanks for the tutorial sir 🙏

  • @RBDesignWorks
    @RBDesignWorks 3 months ago

    This is a really cool video. I want to try it on bathroom designs. Thanks!

  • @homagebackup6404
    @homagebackup6404 10 months ago +2

    love ur intro and branding

  • @liamryan2870
    @liamryan2870 10 months ago +1

    Amazing video, thanks man!

  • @barbadaalpha
    @barbadaalpha 10 months ago +1

    THIS... is EXCELLENT !!!! And I mean it 👍👍👍🙂🙂🙂

  • @a.miroshin
    @a.miroshin 3 months ago

    It helped!

  • @ziyi1420
    @ziyi1420 6 months ago +1

    Thank you

  • @dalalalobaidan3461
    @dalalalobaidan3461 9 months ago

    Hello, thanks for the great tutorial, I was wondering if I could download it on my MacBook?

  • @kstrix
    @kstrix 11 months ago +1

    Great tutorial, can't wait to use it! I'm having trouble getting past the Stable Diffusion "install from URL" step at 7:08. You said it might take a lot of time; I've been processing for 8 hours or so. Should I keep waiting?

  • @NYCgoblue
    @NYCgoblue 8 months ago

    May I ask why you chose 1.5 over 2 and XL?

  • @johnbell1809
    @johnbell1809 9 months ago +1

    Super awesome tutorial, easy to follow. I pasted the Scribble model into the models folder, but I don't seem to have any other files in there, whereas you have a ton of files in there. Also, I have it processing an image, but it's taking about 20 minutes for one image. Is this correct?

    • @altArchitecture
      @altArchitecture  8 months ago

      It's okay; I have more models to use in SD. Remember when you downloaded the Scribble model? On that same page is a long list of other models you can try out. There are a lot of tutorials online for them, but in my opinion the Scribble model is the most useful for architecture at the moment. My computer is decently fast, so that could be why mine processes quickly.

  • @DanielThiele
    @DanielThiele 2 months ago

    Do you have a workflow tutorial, or are you interested in making one, that also generates orthogonal views / model sheets from the initial sketch? I know there are things like CharTurner, but so far it always works based on text input only.

  • @Range_Development_Services
    @Range_Development_Services 9 months ago

    @altarch When I click Batch, I get a runtime error: not able to use GPU... Thoughts?

  • @kaanbekaroglu3225
    @kaanbekaroglu3225 11 months ago +1

    Can you also make a Lumion + Midjourney/Stable Diffusion workflow video?

    • @altArchitecture
      @altArchitecture  10 months ago +2

      This is a great idea. If you render out a Lumion scene in all white/studio and then import it into Stable Diffusion, you can practically create a render from your own 3D model.

  • @rafael_tg
    @rafael_tg 8 months ago

    Hello. Thanks for the video. Which models would be good for image to image, e.g. input=empty room -> output=same room with furniture and decoration or input=photo of my living room -> output=redesign of my living room with new furniture and decoration?

    • @altArchitecture
      @altArchitecture  7 months ago

      I'd have to go with empty room > room with furniture. An input with just bare walls, floors, windows, etc. will make a better canvas to be populated.

  • @SantiagoPeraza-nu6gm
    @SantiagoPeraza-nu6gm 3 months ago

    Is it possible to do this with Midjourney? Thanks!

  • @Anna-LenaBuhler
    @Anna-LenaBuhler 3 days ago

    Hello, thanks for the video. I downloaded the sd-webui-controlnet extension but it won't show up in my Stable Diffusion. Is there anything I could have missed? Thanks.

  • @baljitkaur2973
    @baljitkaur2973 9 months ago

    Does Stable Diffusion with ControlNet only work with Nvidia?

  • @el_me3margy
    @el_me3margy 11 months ago

    Hello, can you please help me?
    I get this message at the end:
    Stable diffusion model failed to load
    Applying attention optimization: Doggettx... done.
    What should I do?

    • @altArchitecture
      @altArchitecture  11 months ago

      At the end of running webui-user.bat for the first time, when it takes a long time to install? If not, give me a timestamp from the video so I know what step you are on or where you are getting the error.

  • @Helena-vv6rz
    @Helena-vv6rz 9 months ago +1

    Hi, thanks for the video; I watched this and your Midjourney one. So is the essential difference between Stable Diffusion and Midjourney that, when starting from a 3D sketch, Stable Diffusion will retain the original geometry, spatial parameters, and contents (e.g. furniture) and render it, whereas if you upload the same 3D sketch to Midjourney it will use the image as a guide and reimagine the spatial parameters and contents? Or is there a Midjourney prompt that will ask it to retain the spatial parameters of the sketch? Thanks!

    • @altArchitecture
      @altArchitecture  9 months ago

      Hi Helena, that's a really good analysis, and you're pretty much correct. At the moment Midjourney isn't as precise as Stable Diffusion at getting exactly what you want, but in my opinion it can produce very inspiring images quickly and easily.

    • @Helena21165
      @Helena21165 9 months ago +1

      Ok thanks. Guess I need to dive into stable diffusion then! Midjourney looked easier so I started there 😂

  • @fotoGR3at
    @fotoGR3at 10 months ago +1

    Awesome! I tried it and it works. Now what's the easiest way to get back to it after I log off? Thank you!

    • @altArchitecture
      @altArchitecture  10 months ago

      Check out the bonus tip at the end! @10:00

  • @raquelgregorio8310
    @raquelgregorio8310 5 months ago +1

    Could I do the same test but for a building's facade instead of its interior? Or is Midjourney better for that?

    • @ma__ku
      @ma__ku 5 months ago

      I am wondering the same thing; can you guide us on that?

  • @baljitkaur2973
    @baljitkaur2973 9 months ago

    I got through all the steps, but unfortunately when I went to generate some images I got this error message:
    FileNotFoundError: [WinError 3] The system cannot find the path specified: ''

  • @gunnersen1337
    @gunnersen1337 7 months ago

    Error when trying to run webui-user. Getting this message:
    Couldn't launch python
    exit code: 9009
    stderr: 'python' is not recognized as an internal or external command,
    operable program or batch file.
    Anyone know what to do?

  • @jacobmekari2932
    @jacobmekari2932 11 months ago

    Hello, thank you first of all for sharing this. I'm having trouble in Stable Diffusion: next to the preprocessor, the (model) area doesn't have any options; it just says "None". Can you help me resolve this?

    • @altArchitecture
      @altArchitecture  11 months ago

      Try the section titled "Download Realistic Vision" @4:46 again and let me know if you missed a step!

    • @jacobmekari2932
      @jacobmekari2932 11 months ago

      Thank you!

  • @mustafakuvel7674
    @mustafakuvel7674 7 months ago

    Are you still using it? I successfully downloaded everything but am not able to generate any images yet.

  • @user-lj4jk7ef5i
    @user-lj4jk7ef5i 1 day ago

    What GPU do you have?

  • @tylerhowell3111
    @tylerhowell3111 7 months ago

    Correction: I fixed the first error. Now I'm receiving the following: "NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."
    I changed the setting to "Upcast cross attention layer to float32" but no luck.

    • @hussainziyath1799
      @hussainziyath1799 6 months ago

      I am having the same problem. Let me know if you have any luck, please!

  • @user-ed9zy8wu2q
    @user-ed9zy8wu2q 11 months ago +1

    Hey! I followed all the steps but there's no scribble model option for the model when I upload the sketch. It just says none. Any idea what the issue could be? Thanks for the video

    • @user-ed9zy8wu2q
      @user-ed9zy8wu2q 11 months ago +1

      I hit the refresh button next to it and it worked👍

    • @altArchitecture
      @altArchitecture  10 months ago

      Great, I'm happy it worked out!

  • @AleQcMx-LD
    @AleQcMx-LD 6 months ago

    Not working.
    NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

  • @alexandravoitenko232
    @alexandravoitenko232 4 months ago

    Unfortunately there is a problem: when I run webui-user it gives an error: "Couldn't launch python. exit code: 9009." Could you please help with it?

  • @id104335409
    @id104335409 5 months ago +1

    Every single one of these "simple" methods is like this: You want to know if it's going to rain? Here's what you need to do: take a shovel, a raincoat, and a fishing pole! Place an alarm clock in your fridge! Now take the next bus to the second city nearby and wait in the woods till Friday...
    BRUH, I JUST WANT TO KNOW IF IT'S GONNA RAIN OR NOT!!!

  • @user-tv1ui4lt5j
    @user-tv1ui4lt5j 11 months ago

    RealisticVisionV20 doesn't show as available at the checkpoint (timestamp 8:10). How do I fix this?

    • @altArchitecture
      @altArchitecture  11 months ago

      It's possible you may have skipped this step at 4:46: download Realistic Vision (and move it into the checkpoints folder).

  • @ykaanozyurt
    @ykaanozyurt 9 months ago +2

    It says "RuntimeError: Not enough memory, use lower resolution (max approx. 640x640). Need: 0.4GB free, Have: 0.1GB free" when I try to generate from the prompt. Could you tell me what I should do to increase memory?

    • @altArchitecture
      @altArchitecture  8 months ago

      I found a possible fix online. Please report back if it doesn't fix your problem; it's a lesser-known bug with SD. Right-click the webui-user.bat file and select Edit. Change the line from "set COMMANDLINE_ARGS=" to "set COMMANDLINE_ARGS=--medvram".

    • @ykaanozyurt
      @ykaanozyurt 7 months ago

      Thank you for your attention; when I open the file with WordPad, there is no exact line like the one you mentioned. Should "export COMMANDLINE_ARGS="--medvram"" be changed in this situation? Or do I need to add the exact line to the file? @@altArchitecture
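
      A likely source of the confusion here: Automatic1111 ships two launch scripts. webui-user.bat is the Windows one and uses set; webui-user.sh (which contains the export line) is the Linux/macOS one. A sketch of the relevant line in each, assuming a default install:

      ```shell
      rem webui-user.bat (Windows): edit or add this line
      set COMMANDLINE_ARGS=--medvram

      # webui-user.sh (Linux/macOS): the equivalent line, commented out by default
      export COMMANDLINE_ARGS="--medvram"
      ```

      If WordPad shows export rather than set, you have opened webui-user.sh; on Windows the file to edit is webui-user.bat.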

  • @Dehancer
    @Dehancer 11 months ago

    Hey, we're loving what you do here!! we'd like to collaborate with you. Please let us know how we can get in touch.

  • @faisalabdi6350
    @faisalabdi6350 6 months ago

    Can you do the opposite: turn an image into an outline sketch?

    • @altArchitecture
      @altArchitecture  6 months ago

      Great question! I’ll look into that.

  • @thelittletitanx3608
    @thelittletitanx3608 10 months ago +1

    🌹

  • @alwayspositivealways
    @alwayspositivealways 1 year ago

    Possible with MJ?

    • @altArchitecture
      @altArchitecture  1 year ago

      Midjourney isn't able to do it as well as Stable Diffusion.

  • @user-yb5td8sn7p
    @user-yb5td8sn7p 11 months ago

    I was able to do the downloading steps but got this error. Can anyone help?
    OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 4.00 GiB total capacity; 2.85 GiB already allocated; 0 bytes free; 2.89 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    Time taken: 9.34s. Torch active/reserved: 2935/2968 MiB, Sys VRAM: 4096/4096 MiB (100.0%)

    • @altArchitecture
      @altArchitecture  11 months ago

      Try decreasing your batch size to around 15. It's an option in stable diffusion.

    • @stafescritorio394
      @stafescritorio394 11 months ago

      @@altArchitecture Thank you. I tried it already and it didn't work.

    • @altArchitecture
      @altArchitecture  11 months ago

      @@stafescritorio394 What happened?
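
      For the CUDA out-of-memory error in this thread, the error text itself points at two levers: reducing VRAM use and setting max_split_size_mb. A sketch of the relevant webui-user.bat lines for a 4 GB card (--medvram, --lowvram, and PYTORCH_CUDA_ALLOC_CONF are real Automatic1111/PyTorch options; the value 128 is an illustrative starting point, not a tested setting):

      ```shell
      rem Trade speed for memory; try --lowvram if --medvram still runs out
      set COMMANDLINE_ARGS=--medvram

      rem Cap PyTorch's allocation block size to reduce VRAM fragmentation
      set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
      ```

      Generating at a lower resolution and with a smaller batch size also reduces the allocation the error reports.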

  • @rogercls
    @rogercls 8 months ago

    Good video, but this grid background is absolutely terrible to look at.

    • @altArchitecture
      @altArchitecture  8 months ago

      Thanks for your feedback! Do you think making it larger would be better?

  • @williasafitri5905
    @williasafitri5905 1 month ago

    Paid?

  • @ilankava
    @ilankava 7 months ago

    Hi there! I need some help here. Around minute 6, I went through the whole Windows batch file process and it downloaded everything; then I pressed the space bar, and now I get this (and cannot find the "Running on local URL" address): venv "C:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
    Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
    Version: v1.6.0-2-g4afaaf8a
    Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
    Traceback (most recent call last):
    File "C:\AI\stable-diffusion-webui\launch.py", line 48, in
    main()
    File "C:\AI\stable-diffusion-webui\launch.py", line 39, in main
    prepare_environment()
    File "C:\AI\stable-diffusion-webui\modules\launch_utils.py", line 356, in prepare_environment
    raise RuntimeError(
    RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
    Press any key to continue . . .
    Can you please help? I'm almost there!

  • @user-ye1qx8zx5l
    @user-ye1qx8zx5l 11 months ago +1

    I am facing an issue; please tell me what to do:
    "venv "C:\Ai\stable-diffusion-webui\venv\Scripts\Python.exe"
    Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
    Version: v1.4.0
    Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
    Traceback (most recent call last):
    File "C:\Ai\stable-diffusion-webui\launch.py", line 38, in
    main()
    File "C:\Ai\stable-diffusion-webui\launch.py", line 29, in main
    prepare_environment()
    File "C:\Ai\stable-diffusion-webui\modules\launch_utils.py", line 268, in prepare_environment
    raise RuntimeError(
    RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
    Press any key to continue . . ."

    • @altArchitecture
      @altArchitecture  11 months ago

      This is an error I received while trying to use SD on my 10-year-old laptop. This fix did not help me but it helped others. Are you running this on a slow/old computer? Try this fix listed in the video description:
      Error message "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check": Open Notepad, click File > Open, change the file type to All Files, navigate to the AI folder where webui-user.bat is located, and open it. Edit the line that starts with COMMANDLINE_ARGS= so it reads "COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test". Save the file, then run webui-user.bat again.

    • @user-ye1qx8zx5l
      @user-ye1qx8zx5l 11 months ago +1

      @@altArchitecture The command "--skip-torch-cuda-test" worked. Thank you; I was not expecting a reply. Thank you very much!

    • @altArchitecture
      @altArchitecture  11 months ago

      @@user-ye1qx8zx5l That's great! I'm happy it worked out. Thanks for letting me know.

    • @user-ye1qx8zx5l
      @user-ye1qx8zx5l 11 months ago

      @@altArchitecture Now I am facing this issue:
      RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

    • @altArchitecture
      @altArchitecture  11 months ago

      @@user-ye1qx8zx5l Try this command instead: "--skip-torch-cuda-test --precision full --no-half"
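
      For reference, those flags go on the COMMANDLINE_ARGS line of webui-user.bat; --precision full and --no-half force full float32 precision, which avoids the half-precision (float16) code path that the "not implemented for 'Half'" error comes from. A sketch of the edited line:

      ```shell
      rem Skip the CUDA check and run everything in full float32 precision
      set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
      ```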

  • @merk1love
    @merk1love 11 months ago

    Hello there, I tried all the steps one by one, and when I launch webui-user.bat I get the following message:
    Couldn't launch python
    exit code: 9009
    stderr:
    Python was not found. Run the shortcut without arguments to install it from the Microsoft Store, or disable this shortcut under
    Launch unsuccessful. Exiting.
    I installed Python 3.10.6 as recommended.
    Any recommendations?
    Thanks in advance!

    • @altArchitecture
      @altArchitecture  11 months ago +1

      When installing Python, you were asked whether you wanted to add it to your system PATH (the first question in the Python installer). Try uninstalling and reinstalling with that option checked, and see if that helps.