Stable Diffusion + ControlNET in Architecture

  • Published: 28 Dec 2024

Comments • 32

  • @ArchViz007  28 days ago

    Share your thoughts below, and if you’d like to support my work, buy me a coffee: buymeacoffee.com/archviz007. Thank you!

  • @KADstudioArchitect  16 days ago +1

WOW, amazing tutorial, which I had been looking for for a long time

    • @ArchViz007  16 days ago

      Thanks @KADstudioArchitect! Take care

  • @cri.aitive  1 month ago +1

    Thank you for your hard work in creating this video and for sharing your valuable experiences. Although I’m not involved in architecture, I feel it has greatly helped me expand my knowledge of Stable Diffusion.

    • @ArchViz007  1 month ago

      gr8 @cri.aitive :) Take care!

  • @SivaMaharana  1 month ago +2

Thank you for your videos... this is very useful for our architects

    • @ArchViz007  1 month ago

      Gr8! :) Are you US based?

  • @brettwessels1283  1 month ago +2

Nice video. We've been using this kind of workflow in our office since the start of the year, but with PromeAI instead. However, I'll be implementing the Stable Diffusion methods going forward, as the control you have is just so much better...

    • @ArchViz007  1 month ago

Thanks, exactly! I've been looking into PromeAI for some time now, and the owners have been pushing it to me for promotion. It's just not up to par with SD + ControlNet, at least not yet. Are you working in the US? Anyway, take care, @brettwessels1283.

    • @ArchViz007  1 month ago

      Gr8!

  • @zainfadhil2588  26 days ago +1

Very soon, most rendering engines will adopt AI in their workflow. tyDiffusion is an AI plug-in designed to work with 3ds Max, and it gives very nice results. Thank you for the video.

    • @ArchViz007  25 days ago +1

Yup, I agree! It's the only way to gain total control. The ultimate destination for useful AI in architecture and design is a 'render engine'; then we're back to something like V-Ray or Corona 2.0. That way, we can return to being designers and humans piloting a tool again. Full circle!

  • @rakeshyadav-mz6kk  1 month ago +1

Thank you for this great video

    • @ArchViz007  1 month ago

Thanks @rakeshyadav-mz6kk! Take care

  • @vinbarg  7 days ago +1

Hello, if I like the image from the prompt at 700x500 pixels, how can I get a higher resolution of the same image with more details? If I use Extras, it only upscales all the imperfections and it doesn't look good. Thanks.

    • @ArchViz007  6 days ago

Hi! To get a higher resolution with more details, enable 'Hires. fix' in the txt2img tab (just below the sampling method). Once enabled, you can choose an upscaler and adjust the upscaling parameters to refine the result. That way you avoid simply upscaling the imperfections. Hope this helps!
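
      For anyone who prefers to script this step, here is a minimal sketch against the AUTOMATIC1111 web UI API, assuming the UI was launched with the --api flag; the prompt, upscaler choice, and sizes below are illustrative, not from the video. The enable_hr, hr_scale, hr_upscaler, and denoising_strength fields map to the 'Hires. fix' controls:

          import base64
          import requests

          # Minimal txt2img call with Hires. fix enabled; assumes the
          # AUTOMATIC1111 web UI is running locally with the --api flag.
          payload = {
              "prompt": "modern villa exterior, golden hour, photorealistic",  # illustrative
              "width": 704,   # dimensions should be multiples of 8, so 704x512
              "height": 512,  # stands in for the 700x500 base image
              "steps": 25,
              "enable_hr": True,              # the 'Hires. fix' toggle
              "hr_scale": 2,                  # 704x512 -> 1408x1024
              "hr_upscaler": "R-ESRGAN 4x+",  # any upscaler listed in the UI
              "denoising_strength": 0.5,      # how much detail is re-synthesized
          }
          r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
          r.raise_for_status()
          with open("upscaled.png", "wb") as f:
              f.write(base64.b64decode(r.json()["images"][0]))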

    • @vinbarg  6 days ago

Hi, thanks for the fast reply. Found some videos on it on YouTube, will try it. Thanks so much!

  • @LukasRichter-p7n  1 month ago +1

    Hi! I'd love to see how you can change the perspective of a model while keeping all other parameters the same. It would be great to create multiple images like this for our clients.

    • @ArchViz007  1 month ago

Yes, I'll look into that! Take care @LukasRichter

  • @Aristocle  20 days ago +1

You need to improve the prompt quality. You should start by writing it in a global sense and then go into detail. Furthermore, you never specified any sky info, e.g. a light blue sky on top.
    It's better to start with a simple rendering in the viewport (I use Blender, and it can be done quickly with Eevee) to help SD and give it direction.
    The goal in using this gen AI is to have consistency in the results, also through greater context in the prompt: for example, having the same structure rendered from multiple views.
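
    For illustration, this is roughly what that global-to-detail ordering looks like as a prompt; the wording below is invented, not taken from the video:

        # Global scene first, then sky/lighting, then fine detail (illustrative only).
        prompt = (
            "wide-angle exterior view of a two-storey concrete house, "  # global scene
            "light blue sky on top, soft morning light, "                # sky / lighting
            "floor-to-ceiling glazing, timber louvres, gravel path"      # details
        )
        negative_prompt = "blurry, distorted geometry, extra windows"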

  • @samuelbonilla1110  1 month ago +1

    wow!!!

  • @erlinghagendesign  1 month ago +2

IDEA: how about you use lineart in ControlNet to create sketches from images and use them further on with the same processes shown?

    • @ArchViz007  1 month ago

      I’ve already tried it and found that the 'lineart' control type more or less doesn’t affect the generation (txt2img) or re-generation (img2img) process. Interestingly, Canny seems to do what one might expect lineart to handle. Please post again if you discover anything different. Thanks, @erlinghagendesign!
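
      As an aside, the same Canny-conditioned generation can be reproduced outside the web UI. Here is a minimal sketch with the diffusers library, assuming the public lllyasviel/control_v11p_sd15_canny and runwayml/stable-diffusion-v1-5 checkpoints; the input file name and prompt are illustrative:

          import cv2
          import numpy as np
          import torch
          from PIL import Image
          from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

          # Extract Canny edges from a reference image to condition the generation.
          image = np.array(Image.open("sketch.png").convert("RGB"))  # illustrative file
          edges = cv2.Canny(image, 100, 200)  # low/high hysteresis thresholds
          control = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 1 -> 3 channels

          controlnet = ControlNetModel.from_pretrained(
              "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
          )
          pipe = StableDiffusionControlNetPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              controlnet=controlnet,
              torch_dtype=torch.float16,
          ).to("cuda")

          result = pipe(
              "modern house facade, photorealistic, overcast sky",  # illustrative prompt
              image=control,
              num_inference_steps=25,
          ).images[0]
          result.save("render.png")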

  • @cecofuli  1 month ago +2

    IDEA: try to use Nvidia Canvas as a base concept, then use the Nvidia image in SD ;-)

    • @ArchViz007  1 month ago +1

Good idea @cecofuli! Think I'll explore that in the next video :)

  • @letspretend_22  1 month ago +1

    I have set up everything exactly the same with the same UI, ControlNet, Model, settings, etc., but it's like it is totally ignoring the ControlNet image. So weird.

    • @letspretend_22  1 month ago

Found the error: I added the ControlNet extension, which already shows a "canny" option, but you actually need to download the .pth model files and put them in the correct folder. It doesn't show an error or anything in the SD UI.
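
      For reference, one way to fetch those weights is via huggingface_hub; the repo and file name below are the public ControlNet 1.1 release by lllyasviel, while the destination path assumes a default web UI install and may differ on your machine:

          from huggingface_hub import hf_hub_download

          # Download the Canny ControlNet model (.pth) into the folder where
          # the sd-webui-controlnet extension looks for models.
          hf_hub_download(
              repo_id="lllyasviel/ControlNet-v1-1",
              filename="control_v11p_sd15_canny.pth",
              local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
          )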

    • @ArchViz007  1 month ago

Have you enabled it? Same resolution? Pixel Perfect...? Stable Diffusion can be a bit fiddly

    • @letspretend_22  1 month ago

@ArchViz007 After downloading Canny and placing it in the correct folder, it worked. Yes, I noticed the fiddliness, but got some good results in the end. It seems the ControlNet image has to be quite specific as well. I noticed the architecture model you use doesn't really handle perspectives other than from eye height very well. When you give it a sketch from an aerial view, it usually tries to interpret it as being from eye level, leading to some great Escher-style images :)

    • @ArchViz007  1 month ago

@letspretend_22 Yeah, I've discovered the same. Maybe we can use civitai.com/models/115392/a-birds-eye-view-of-architecture for those pictures. I think I'll look into it! By the way, love Escher, what a master!

  • @metternich05  15 days ago

Very tedious. I had to fast-forward a lot. Anyway, you are still better off doing your whole scene in a 3D app: much more control and more predictable output. I don't get this AI rage in archviz. It's hot garbage. Maybe it needs another 5-10 years to become sort of useful.