Stable Diffusion: Will It Revolutionize Architecture?

  • Published: 31 Jul 2024
  • In this tutorial, learn how to download and run Stable Diffusion, an innovative AI tool that generates images from text descriptions. Tailored for architects and designers, this video shows how Stable Diffusion integrates into the architectural workflow for visualization and concept development. Explore its powerful image-to-image capabilities and see how this technology can transform your projects. Additionally, join a broader discussion on the role of AI in the future of architecture.
    00:00 Intro - Why Stable Diffusion is interesting in architecture
    01:40 Stable Diffusion web browser demo
    02:14 System requirements
    03:10 Install Git
    04:05 Install Python
    05:13 Install Stable Diffusion - WebUI
    07:06 Install model/checkpoint
    10:17 Set up webui-user.bat
    11:00 Launch Stable Diffusion
    12:02 How SD works & how to operate it - PART 1
    25:11 Controversial discussion about AI image generators like Midjourney
    27:56 How SD works & how to operate it - PART 2
    30:24 A different approach to concept development in architecture
    33:49 AI specialists in architecture
    35:10 A "ping pong" between specialists as always
    37:12 How SD works & how to operate it - PART 3
    51:11 Landscape architects and SD
    51:57 How SD works & how to operate it - PART 4
    57:39 Upscale & manipulate an image in SD
    1:02:25 Showcases
    1:05:55 Conclusion
    Resources:
    Try Stable Diffusion in a web browser:
    huggingface.co/spaces/stabili...
    Install Git:
    git-scm.com/download/win
    Install Python:
    www.python.org/downloads/rele...
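    After both installers have run, you can confirm from a command prompt that Git and Python are on PATH, which the WebUI relies on by default (the WebUI project recommends a Python 3.10.x release):
      git --version
      python --version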
    Stable Diffusion WebUI GitHub link:
    github.com/AUTOMATIC1111/stab...
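    As a rough command-line sketch of the install and launch steps covered at 05:13 and 11:00 (folder name and URL are the repository defaults):
      rem clone the WebUI repository (the first launch creates a venv and installs dependencies)
      git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
      cd stable-diffusion-webui
      rem start the UI; by default it serves at http://127.0.0.1:7860
      webui-user.bat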
    Download model / checkpoint:
    huggingface.co/runwayml/stabl...
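    Once downloaded, the checkpoint file (.ckpt or .safetensors) goes into the WebUI's model folder; the path below assumes the default folder layout, with the SD 1.5 filename shown only as an example:
      stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors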
    Edit webui-user.bat: paste the following line at the top and save:
    git pull
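    For reference, a sketch of what the edited file can look like, assuming the stock webui-user.bat that ships with the WebUI; the only change is the added git pull line, which updates the WebUI code on every launch:
      git pull

      @echo off

      set PYTHON=
      set GIT=
      set VENV_DIR=
      set COMMANDLINE_ARGS=

      call webui.bat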
    Models/checkpoints:
    civitai.com/
    or
    stable-diffusion-art.com/models
    Related videos:
    • DALL-E vs Midjourney: ...

Comments • 4

  • @AdnanDarwish3
    @AdnanDarwish3 A month ago +1

    Thank you for a great video, keep up the good work.

  • @KILABANANA
    @KILABANANA A month ago +1

    Just name the stuff in the negative prompt; you don't have to write "no human", that would mean it has to have a human (see the example after this thread). Also try using a Blender model for shape and vertex colors for masks... you can keep the shape that way and just change the texture. I'm rambling, but you can do a lot with 3D and ControlNets etc...

    • @ArchViz007
      @ArchViz007 A month ago +1

      :) Yeah, I noticed that too after upload! lol. I know you can do a lot with a 3D model as well. Thx @KILABANANA
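    To illustrate the negative-prompt tip above: list only the unwanted elements in the negative prompt field, without negation words. A made-up example:
      Prompt: modern timber pavilion in a park, daylight, photorealistic
      Negative prompt: people, cars, text, watermark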