Design Computation Human
  • 66 videos
  • 108,194 views
The Lecture on What Generative AI Can/Not Do in Design, Arts and Architecture | CDFAM 2024 Keynote
I promise you will learn something new from this lecture.
In this lecture, I diligently trace the meanings created through architecture, design, and the arts, and evaluate the potential and pitfalls of AI by showing actual designs and artwork.
The historical context presented here should empower you as you learn more about emerging AI technologies and chart your own path in design and art practices.
0:40 The definition of Computational design
2:38 Towards AI-generated imagery
3:13 AI-generated shoe was done a long time ago
4:35 Proving the value of Computational Design in Industry Giants
5:11 What value are you creating and why?
6:25 Why is AI captivating for designers?
7:15 Should you be excit...
435 views

Videos

AI + Design Webinar Tomorrow | Join me on Medium Day | Free Registration Link
120 views · 1 year ago
Register for Free: hopin.com/events/medium-day-2023/registration Details here: medium.com/@OnurGun/join-me-on-medium-day-design-computation-human-in-30-minutes-a5aa51663838?sk=4ffca880c5a8031af2e9d5978af10438 CONTEXT: Design, computation, and human are interwoven concepts essential to understanding each other. Pursuing design, a human endeavor, centers on creating meaning and value. This endeav...
Riding in the Candy Land - RunwayML GEN-1
131 views · 1 year ago
FOLLOW TO SUPPORT & Learn more about Computational Design Thinking: LinkedIn: www.linkedin.com/in/onuryucegun/ Instagram: onur.yuce.gun Medium: medium.com/@OnurGun Academia: mit.academia.edu/onurgun Website: onuryucegun.com/ COME JOIN to the fastest growing Computational Design Group on LinkedIn: www.linkedin.com/groups/9221510/
ControlNET Workshop for Beginners - Stable Diffusion - Architectural Design Exploration with AI
1.7K views · 1 year ago
Workshop on ControlNet. See Stable Diffusion installation guide here: ruclips.net/video/hnJh1tk1DQM/видео.html FOLLOW TO SUPPORT & Learn more about Computational Design Thinking: LinkedIn: www.linkedin.com/in/onuryucegun/ Instagram: onur.yuce.gun Medium: medium.com/@OnurGun Academia: mit.academia.edu/onurgun Website: onuryucegun.com/ COME JOIN to the fastest growing Computational...
Neo Nexus - Act 01 Scene 01
214 views · 1 year ago
The movie begins with a voice-over of a computerized female voice that introduces the year 2150, where technology has surpassed human imagination. Used Midjourney @RunwayML ChatGPT together. FOLLOW TO SUPPORT & Learn more about Computational Design Thinking: LinkedIn: www.linkedin.com/in/onuryucegun/ Instagram: onur.yuce.gun Medium: medium.com/@OnurGun Academia: mit.academia.edu/...
ControlNet and Stable Diffusion Local Step by Step Installation Guide
11K views · 1 year ago
A super clear guide showing how you can install AUTOMATIC1111 stable diffusion webUI and ControlNet locally on your computer. FOLLOW TO SUPPORT & Learn more about Computational Design Thinking: LinkedIn: www.linkedin.com/in/onuryucegun/ Instagram: onur.yuce.gun Medium: medium.com/@OnurGun Academia: mit.academia.edu/onurgun Website: onuryucegun.com/ COME JOIN to the fastest growin...
GEN-1 by RunwayML | Generative AI Video Tool
1.1K views · 1 year ago
A powerful AI video filtering tool is emerging. Gen-1 by Runway was introduced recently. I was intrigued to give it a try. It definitely starts offering more as you dig into the parameters. It looks like GEN-1 is being crafted to become a powerful storytelling tool. But how would you use it for DESIGN? I want to quote Bill Mitchell here: "availability brings use" Certainly, the col...
ChatGPT and Midjourney: How to Use Uncommon Words to Push Your Images to the Next Level
810 views · 1 year ago
In this video I show how you can create more authentic images in Midjourney using ChatGPT. Midjourney is a great diffusion tool but it comes with limitations and biases, such as default color schemes and detailing styles. Using ChatGPT, I show ways to research for uncommon words, adjectives, nouns and verbs to push your creations to the next level. 1:15 Limitations of Midjourney 4:26 How to com...
Lesson in Turkish: ARTIFICIAL INTELLIGENCE and COMPUTATIONAL DESIGN | Midjourney, Stable Diffusion, Runway
750 views · 1 year ago
3:31 Alternative career paths: Education, Architecture, Product design, Management 17:25 Computers and creativity 40:46 Artificial intelligence and its future impact on design processes 1:10:06 Questions and answers. For Computational Design, its applications, theory, and examples, visit my pages: LinkedIn: www.linkedin.com/in/onuryucegun/ Instagram: onur.yuce.gun Website: onuryucegun.com/...
Stable Diffusion & Rhino Grasshopper Workshop | Text-to-Image to 3D | Image Processing | Data Viz
3.6K views · 1 year ago
Image to Image to Image: Bitmap Processing & Data Visualization Workshop This workshop concentrates on processes concerned with data and its representation in the form of images. We teach methods with which colored gradients and heatmaps can be converted into 3D representations. We show how the 3D forms we generate can, in turn, be rendered to create images. We experiment with eme...
Everything You Need to Know about Computational Design | My Course @ School of Disruptive Innovation
2.3K views · 1 year ago
Everything You Need to Know about Computational Design | My Course @ School of Disruptive Innovation
MidJourney to Reach 30,000,000 Users?! Web UI, Larger Grid, Composition, New Robust & Faster Algo
2.6K views · 1 year ago
MidJourney to Reach 30,000,000 Users?! Web UI, Larger Grid, Composition, New Robust & Faster Algo
How to use the --CHAOS parameter in MidJourney: Learn from a Visual Experiment and Accompanying PDF
5K views · 2 years ago
How to use the CHAOS parameter in MidJourney: Learn from a Visual Experiment and Accompanying PDF
Neural Filters in Photoshop: Enhance Midjourney, Dall.E2, Stable Diffusion Images
1.6K views · 2 years ago
Neural Filters in Photoshop: Enhance Midjourney, Dall.E2, Stable Diffusion Images
Photoshop AI Upscale: Enhance Midjourney, Dall.E, Stable Diffusion Images with Automated Tools
4.8K views · 2 years ago
Photoshop AI Upscale: Enhance Midjourney, Dall.E, Stable Diffusion Images with Automated Tools
How to Fuel Your Creativity with AI Tools: Diffusion Models for Divergence and Convergence in Design
1.8K views · 2 years ago
How to Fuel Your Creativity with AI Tools: Diffusion Models for Divergence and Convergence in Design
MIT Superseding Parts Computing Wholes | Final : Lavender Tessmer, Computational Design
627 views · 2 years ago
MIT Superseding Parts Computing Wholes | Final : Lavender Tessmer, Computational Design
How to Get the Image You Want in Midjourney: Stylize, Quality, Weight Parameters Explained Visually
28K views · 2 years ago
How to Get the Image You Want in Midjourney: Stylize, Quality, Weight Parameters Explained Visually
Midjourney Deciphered: Workflow Diagram, Tips, AI in Creative Processes
11K views · 2 years ago
Midjourney Deciphered: Workflow Diagram, Tips, AI in Creative Processes
Synthetic Natures | Teaching Complexity using Generative Design, Digital Fabrication and Electronics
316 views · 2 years ago
Synthetic Natures | Teaching Complexity using Generative Design, Digital Fabrication and Electronics
Lebbeus Woods - ML SGAN Latent Space Walk
309 views · 2 years ago
Lebbeus Woods - ML SGAN Latent Space Walk
Prof. Şebnem Yalınay-Çinici: Personal and Professional Development, Architectural Education and Digital Technologies
1.3K views · 3 years ago
Prof. Şebnem Yalınay-Çinici: Personal and Professional Development, Architectural Education and Digital Technologies
Silver Wing | Groundless ML | Cloud Tectonics
171 views · 3 years ago
Silver Wing | Groundless ML | Cloud Tectonics
How to Become a Critical Thinker: Hard vs. Soft Skills
374 views · 3 years ago
How to Become a Critical Thinker: Hard vs. Soft Skills
Parts of the Landscape | Sorting Grains Computationally | Robert Pirsig's View on Scientific Method
197 views · 3 years ago
Parts of the Landscape | Sorting Grains Computationally | Robert Pirsig's View on Scientific Method
Professional Development | Architect, Computational Designer in New York City : Charles Portelli
641 views · 3 years ago
Professional Development | Architect, Computational Designer in New York City : Charles Portelli
BREATHE - Cloud Tectonics || Oil Painting + Machine Learning + DFAM + Motion Graphics
431 views · 3 years ago
BREATHE - Cloud Tectonics || Oil Painting + Machine Learning + DFAM + Motion Graphics
Tips for PhD Admissions: Differences between PhD and Masters Applications
460 views · 3 years ago
Tips for PhD Admissions: Differences between PhD and Masters Applications
How to Apply to a Masters Program? Killer Hints and Best Practices for Graduate Degree Applications
585 views · 3 years ago
How to Apply to a Masters Program? Killer Hints and Best Practices for Graduate Degree Applications

Comments

  • @jonegana
    @jonegana 1 month ago

    where can I download the PDF??

  • @jayarikishii
    @jayarikishii 3 months ago

    Key insights
    🎨 The combination of image source, sketch creation, and prompt integration in ControlNet creates a new image, making the process more dynamic and creative.
    🖥 The local web server runs on your own computer, making the installation process more accessible and convenient.
    💡 To make ControlNet work locally, you need to download the pre-trained models and paste them onto your computer.
    🚀 The installation process for ControlNet and Stable Diffusion is straightforward and can easily be restarted if needed.
    🔍 Choosing the right model and prompt can make a significant difference in the results of the segmentation process, highlighting the importance of careful selection.

    TLDR: The video provides a step-by-step guide for installing ControlNet and Stable Diffusion to create sketches from images and merge them with prompts, plus tools for segmentation and depth extraction.

    1. 00:00 📹 ControlNet and Stable Diffusion: a step-by-step installation guide for creating sketches from images and merging them with prompts, as well as tools for segmentation and depth extraction.
    1.1 The video provides a step-by-step installation guide for Stable Diffusion and ControlNet, with a follow-up workshop recording for the Penn State architecture department.
    1.2 ControlNet enables users to create sketches from images and merge them with prompts to create new images, and also offers tools for segmentation and depth extraction to make the story more interesting.
    2. 01:51 📝 Install Python and the Stable Diffusion application by Googling and following the links to download and install the necessary files, then upgrade as needed.
    3. 03:09 📦 Install the opencv-python library and upgrade the package manager using CMD in the same folder.
    4. 04:36 🔧 Install ControlNet on the local Stable Diffusion by navigating to the folder, running the installation, and accessing the web server, then adding ControlNet from GitHub to the local Stable Diffusion web UI.
    4.1 Navigate to the folder, run the installation, then copy and paste the address into your browser to access the local web server for Stable Diffusion.
    4.2 Install ControlNet on top of local Stable Diffusion by going to the Extensions tab, searching for ControlNet on Google, going to GitHub, copying the URL, and installing it from the URL in the local Stable Diffusion web UI.
    5. 06:37 📦 Download the fully trained models from the provided link and paste them into the models folder in the extracted folder.
    6. 07:37 🔧 Install Stable Diffusion, enable the ControlNet interface, and restart the web UI if issues arise.
    6.1 Install Stable Diffusion, apply and restart the web UI, and if there are any issues, shut everything down and restart the web UI and Stable Diffusion.
    6.2 Enable the ControlNet interface in order to use it with Stable Diffusion, as it is hidden and needs to be expanded in the tab.
    7. 09:58 🔧 Installing ControlNet and Stable Diffusion locally is quick and easy with a powerful Nvidia card, making running segmentation models straightforward.
    7.1 Select the preprocessor and model tab, choose the depth model, select the pre-trained model file with the same name, enable ControlNet, and generate.
    7.2 Installing ControlNet and Stable Diffusion locally is fast with a powerful Nvidia card, and running segmentation models is straightforward.
    8. 11:45 📹 Install Stable Diffusion and ControlNet with critical thinking, learn about AI, and support the channel for more content.
    8.1 This video provides a step-by-step guide for beginners on how to install Stable Diffusion and ControlNet, with the suggestion to be a critical thinker while using these tools.
    8.2 Learn about AI and how it can change your life, check out the speaker's articles on medium.com, and support the channel for more videos.
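
    A minimal command-line sketch of the same workflow, assuming Windows with Python 3.10 and Git already installed and on PATH; the Mikubill/sd-webui-controlnet repository named below is an assumption (the video only says to copy the extension's GitHub URL), though it is the commonly used ControlNet extension for this web UI:

        :: clone the AUTOMATIC1111 web UI and launch it once so it sets up its own environment
        git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
        cd stable-diffusion-webui
        webui-user.bat
        :: in the web UI: Extensions > Install from URL > https://github.com/Mikubill/sd-webui-controlnet
        :: then place the downloaded pre-trained ControlNet models into the extension's models folder, e.g.:
        copy control_sd15_depth.pth extensions\sd-webui-controlnet\models\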

  • @fieryarrows8091
    @fieryarrows8091 5 months ago

    Hi, I really enjoyed watching this lesson and I would be really happy if you gave us access to the linked resources so we can replicate your exercises. Thank you for your great job

  • @MuslimVlog123
    @MuslimVlog123 6 months ago

    Has anybody mentioned the minimum GPU required?

  • @abiodunshonibare832
    @abiodunshonibare832 7 months ago

    This video was absolutely helpful, I’m surprised I’m just coming across your channel now, but better late than never. Thank you for sharing your knowledge and inspiring us

  • @youtubemzx97
    @youtubemzx97 8 months ago

    Hello, can you please provide the workshop files for us to download

  • @beckybanist
    @beckybanist 8 months ago

    Thank you very much for the content. I had a question: when I try to draw in the canvas area of the ControlNet I installed on my computer, the lines don't show up. How can I fix this?

  • @dianaallaham2801
    @dianaallaham2801 8 months ago

    You're amazing, thank you!

  • @kanagasundram78
    @kanagasundram78 9 months ago

    Great conversation

  • @valorantacemiyimben
    @valorantacemiyimben 11 months ago

    Garbage

  • @manojdaniel9079
    @manojdaniel9079 1 year ago

    Great Explanation. Thank you so much !!

  • @marchermitte
    @marchermitte 1 year ago

    '.pip' is not recognized as an internal or external command, operable program or batch file. That's what I get when executing the command
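
    An aside for readers who hit the same message: calling pip through the interpreter usually sidesteps a pip that is not on PATH, assuming Python itself is installed and on PATH (a hedged suggestion, not the author's instruction):

        python -m pip install --upgrade pip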

  • @futurearchitecture2247
    @futurearchitecture2247 1 year ago

    Hi Mr. Gun, I have registered, hope to catch your session on time. Have a great day!

  • @jsrender1
    @jsrender1 1 year ago

    What folder are you putting controlnet etc into? You say 'this folder' and it's really hard to follow

  • @darkrider897
    @darkrider897 1 year ago

    Hi Sir, I followed the video to the final step but I can't generate an image. A blurry image appears while it is generating, but when it completes, this pops up: OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.97 GiB already allocated; 0 bytes free; 3.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF. I don't know what it means, can anyone help me?
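
    A hedged sketch of the mitigation the error message itself points to, combined with the web UI's standard low-VRAM launch options (--medvram / --lowvram); whether this frees enough memory on a 4 GB card is not guaranteed. In webui-user.bat:

        set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
        set COMMANDLINE_ARGS=--medvram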

  • @yu-liliao4671
    @yu-liliao4671 1 year ago

    Many thanks for your great video! Unfortunately, I am the 1%. I received the warning RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. Would be grateful if you could help me fix it.
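
    Following the hint in the error message itself, a minimal sketch of the edit in webui-user.bat; note this only skips the CUDA check, so generation falls back to the CPU and will be slow, and it does not fix the underlying GPU/driver problem:

        set COMMANDLINE_ARGS=--skip-torch-cuda-test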

  • @Xandr_Nekomata
    @Xandr_Nekomata 1 year ago

    Thanks for the guide! I was using it for my second installation, because the first one failed =). And do you know if it is possible to use Stable Diffusion and ControlNet to generate animations from videos? This is my goal for now, and I hope I'll get there soon.

  • @crimsomwolf
    @crimsomwolf 1 year ago

    the extension step was a gem! thanks!!

  • @norakatharinafankhauserzed2023

    amazing step by step guide, very helpful, thank you!

  • @sherifamr4160
    @sherifamr4160 1 year ago

    when I reopen the webui file it tells me this { ERROR: Error [WinError 2] The system cannot find the file specified while executing command git version ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? [notice] A new release of pip available: 22.2.2 -> 23.1.2 [notice] To update, run: D:\AI rendering + rhino\automatic1111\stable-diffusion-webui-master\venv\Scripts\python.exe -m pip install --upgrade pip} .... I wonder if you can help me. Thank you again for your time and efforts

    • @baangcao5471
      @baangcao5471 1 year ago

      Installing GIT will help you get rid of that warning
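
      For reference, a minimal sketch of that fix on Windows (winget is one option; downloading the installer from git-scm.com works too), then re-run webui-user.bat:

          winget install --id Git.Git -e
          git --version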

  • @mscxnulleins9135
    @mscxnulleins9135 1 year ago

    thx for the infos

  • @sirousghaffari9556
    @sirousghaffari9556 1 year ago

    At minute 9:20: for me, the ControlNet version is 1.1.147. Is this wrong?

  • @sirousghaffari9556
    @sirousghaffari9556 1 year ago

    Thanks to you, I was finally able to install and use it👍

  • @TheMeganebou
    @TheMeganebou 1 year ago

    hey this is amazing, thanks! could I please have access to the files, would like to try some of the grasshopper scripts for a university course I have. h4m0k4n

  • @sirousghaffari9556
    @sirousghaffari9556 1 year ago

    At 5:00, I follow the instructions and run the batch file. This is the error message: Exit code: 1

    • @computationaldesign
      @computationaldesign 1 year ago

      See Montaser's response above: "I had to install the latest stable version of Python, not the latest update, to get it to work; and while installing it, make sure to enable the "Add Python to PATH" checkbox. Otherwise, the batch file may not find the Python installation."
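
      A quick, hedged way to confirm the batch file will find the interpreter once Python is on PATH (the exact paths and version shown will differ per machine):

          where python
          python --version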

    • @sirousghaffari9556
      @sirousghaffari9556 1 year ago

      @@computationaldesign Thanks for the help, but now it gives me error 2, what should I do? I would be grateful if you could help me

  • @abdullahsy
    @abdullahsy 1 year ago

    Movie-ready materials! It's impressive, looking forward to seeing what's next :)

  • @michelearchitecturestudent1938

    Great video content! My only issue is related to the last steps shown. When Stable Diffusion is started, I can only select the preprocessor and not the model in the ControlNet tab. In the video you have multiple options for this one (for example control_sd15_depth), but my only one is "none". Do you know how to fix it?

    • @computationaldesign
      @computationaldesign 1 year ago

      You have to download the models separately and put them into the folder: huggingface.co/lllyasviel/ControlNet/tree/main/models
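
      A hedged sketch of where the downloaded .pth files typically go when using the ControlNet extension with the AUTOMATIC1111 web UI (the exact extension folder name depends on which ControlNet extension was installed):

          copy control_sd15_depth.pth stable-diffusion-webui\extensions\sd-webui-controlnet\models\
          copy control_sd15_canny.pth stable-diffusion-webui\extensions\sd-webui-controlnet\models\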

    • @michelearchitecturestudent1938
      @michelearchitecturestudent1938 1 year ago

      @@computationaldesign Now it all works. Thanks ❤️

  • @montasermisk3014
    @montasermisk3014 1 year ago

    Thank you so much, I was struggling with the pip update notice, and finally it's working now ;) However, I had to install the latest stable version of Python, not the latest update, to get it to work; and while installing it, make sure to enable the "Add Python to PATH" checkbox. Otherwise, the batch file may not find the Python installation.

    • @computationaldesign
      @computationaldesign 1 year ago

      Correct, thanks for the reminder!

    • @montasermisk3014
      @montasermisk3014 1 year ago

      Thank you so much, hocam 😊 I couldn't figure it out until I watched your tutorial.

    • @fireconejo2382
      @fireconejo2382 1 year ago

      Excuse me, but I get the same error even after doing both things (installing Python 3.10.0 and checking the PATH box). Any idea what I am doing wrong, please?

  • @daniellanges8430
    @daniellanges8430 1 year ago

    For some reason, when I open up the ControlNet interface in the WebUI I don't see anything under models (just "none"). The preprocessors are there but not the models.

  • @JMcGrath
    @JMcGrath 1 year ago

    Great video, is there a link to your architectural workshop?

    • @computationaldesign
      @computationaldesign 1 year ago

      Not yet! The video needs quite a bit of editing; I am hoping to dig up some time for that.

  • @abdullahsy
    @abdullahsy 1 year ago

    “It's about quality, not quantity” The guide is perfect and needed! Keep it up 🔥 always great to see a new video coming out!

    • @computationaldesign
      @computationaldesign 1 year ago

      Thank you Abdullah, give it a shot if you can, it is fun to play with it locally.

  • @muhb6620
    @muhb6620 1 year ago

    Thank you, greetings from Egypt❤️

  • @mukbangreversed
    @mukbangreversed 1 year ago

    Does requesting access actually work?

  • @AhmadYouness
    @AhmadYouness 1 year ago

    Thank you very much, very very informative workshop✌

  • @urniurl
    @urniurl 1 year ago

    I applied for access, no luck yet.

  • @berniejankowski8676
    @berniejankowski8676 1 year ago

    Onur is an amazing teacher. I appreciate his ability to express these explorations so quickly.

  • @cosmin9982
    @cosmin9982 1 year ago

    Pardon my intrusiveness, I did not know where to reach you, here or on LinkedIn. So, in short, ACE - the Architects' Council of Europe - has a call for researchers in AI/architecture to submit some research directions on the current scene in both domains, and I really feel this is right up your alley. Just wanted to bring this to your attention; maybe it is of interest to you. I have been keeping up loosely with your articles on Medium and Bootcamp and I think your expertise would be a good match. Just as a disclaimer: I am not affiliated with or endorsed by ACE in any way. Have a great day!

  • @abdullahmallah8173
    @abdullahmallah8173 1 year ago

    Watching this recording in 2023 gave me two feelings/thoughts by the end of it:
    - How quickly the tools we are using today have improved; we keep jumping to the next best/new invention/trend... From Grasshopper to AI makes me wonder what will come next.
    - Nowadays, we are seeing the effects of AI on the field of Architecture; what you said regarding that in your last part, "Value or Replace", is happening exactly the way you said it. Although not many designers have lost their jobs to AI yet, those who got a job as a prompt engineer now find themselves out of a job thanks to tools like GPT-3, and concept artists/designers added more value to themselves using tools like MJ and Dall-E.
    New questions came to mind with these rapid innovations of tools like Midjourney and/or GPT-3. Some people, including me, are feeling the need to "rush to learn"; sometimes I think this feeling comes from the fear of replacement or being left behind, and/or excitement. Are those feelings normal, or should we really have those fears? Or are they just a temporary reaction to what's coming?
    Sometimes we forget how important it is to develop soft skills like critical thinking, emotional intelligence, judgement and decision-making (quoting your post on medium.com, "Complete List of Skills You Need for Enduring Success"). When watching the people around me, especially the students and instructors at my school, they mostly seem more worried about how they could improve themselves using AI than how they could improve their awareness and criticality.
    Just writing my thoughts here. I would like to hear your thoughts regarding what I've said.

    • @computationaldesign
      @computationaldesign 1 year ago

      Great summary of what is happening and insightful questions here. I will touch on and answer the core questions after quoting you: "...including me, are feeling the need to "Rush to learn"; sometimes, I think this feeling comes from the fear of replacement or being left behind and/or excitement... Are those feelings normal, or should we really have those fears? Or are they just a temporary reaction to what's coming?"
      This is FOMO, fear of missing out, and it is a very natural feeling to have. The majority of people, including the skilled ones, develop these kinds of uneasy feelings when they face rapid changes, or tectonic shifts, in a field or subject. The actions taken under the influence of FOMO usually mislead people, as they usually arise instinctually, without a comprehensive re-evaluation of the situation. The way to fight this is pitting what comes as INSTINCTUAL against what is INSIGHTFUL. Note that the instinctual is momentary, whereas the insightful is timeless. In other, simpler words, know that this is a MARATHON, not a SPRINT.
      Invest in yourself and your understanding for the long term -- on the flip side, realize that this long-term investment may include short-term actions. So play with AI tech, but don't drown in the hype wave. Hype waves are for the ones who want to make it "there" using a shortcut. But that does not work, either. Here is why: time ALWAYS reveals that which is truly valuable. If you are persistent in value-building, not with one tool, not with one technique, not with hype waves only, things should work out well. As long as you know you will NEVER make it "there", because there is NO ceiling, you will build the best version of yourself. And this is a good, if not a great, effort.

  • @abdullahmallah8173
    @abdullahmallah8173 1 year ago

    As always, you succeed in surprising me with your critical, constructive thinking and work! Keep it up! I love the style of your videos and what you have done with ChatGPT+MJ. P.S.: This could be just me, but for some reason your voice isn't in sync with your webcam. I could be wrong and just tired. =]

    • @computationaldesign
      @computationaldesign 1 year ago

      Thanks for the feedback Abdullah, you are right about the voice sync problem, which was not there when I last edited the video.

  • @amr95ahmed
    @amr95ahmed 1 year ago

    Hey, great video!! I'm having a little trouble trying to run the google colab. Instead of running it gives me this output: sed: can't read /usr/local/lib/python3.7/dist-packages/gradio/blocks.py: No such file or directory sed: can't read /usr/local/lib/python3.7/dist-packages/gradio/blocks.py: No such file or directory sed: can't read /usr/local/lib/python3.7/dist-packages/gradio/blocks.py: No such file or directory sed: can't read /usr/local/lib/python3.7/dist-packages/gradio/strings.py: No such file or directory Traceback (most recent call last): File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 7, in <module> from fastapi import FastAPI ModuleNotFoundError: No module named 'fastapi' I don't have any knowledge in coding so can you help please?

    • @computationaldesign
      @computationaldesign 1 year ago

      There is a super lightweight version SD v2.1 WebUI now, check it here: colab.research.google.com/github/qunash/stable-diffusion-2-gui/blob/main/stable_diffusion_2_0.ipynb
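
      For readers who prefer to keep the original notebook, a hedged first step is installing the missing module in a Colab cell before launching the web UI; this only addresses the ModuleNotFoundError, not the earlier sed warnings:

          !pip install fastapi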

  • @abdullahsy
    @abdullahsy 1 year ago

    The portrait’s fusion with the solo…👌❤️‍🔥

  • @beetlejuss
    @beetlejuss 1 year ago

    Do they know there are apps like MS Paint that allow you basic drawing and painting, AND that they didn't inevitably become Photoshop?

    • @computationaldesign
      @computationaldesign 1 year ago

      Probably yes... Just responded to another comment this way: From the community calls I decipher that the developers prefer to keep things open ended instead of making this tool a direct design / graphics tool.

  • @justresist
    @justresist 1 year ago

    Yes please!

  • @justresist
    @justresist 1 year ago

    Brilliant presentation, thank you

  • @tomiomio
    @tomiomio 1 year ago

    yesss, tutorial please 🔥🔥🔥🔥🔥

  • @cosmin9982
    @cosmin9982 1 year ago

    Sure, a tutorial sounds great

  • @Aitimoney
    @Aitimoney 1 year ago

    Me, most definitely. Just got in on your ride.

  • @troncalnegrete479
    @troncalnegrete479 1 year ago

    Thanks!!

  • @Juan2142
    @Juan2142 1 year ago

    Not sure if anyone else uses this feature, but it really works great. When you've received your downloaded image from MJ, and it's in PNG format, open it up in Photoshop and save it as a JPEG in whichever location on your desktop suits you best. Next, open up Adobe Bridge. Once Bridge has opened, set the view to "Filmstrip". Now either drag the JPG file you have just created into Bridge or locate it via Bridge through the left-hand panel. Once it appears as a thumbnail in your filmstrip it will automatically show as a bigger version in the image pane above. For Mac users: control-click on the image in the image pane (not the thumbnail) - it will bring up a palette box; scroll down to "Open in Camera Raw", which is all the way at the bottom of the palette box.
    Now what you will see looks very similar to what you may have seen in the video, except there is a little hidden easter egg that Camera Raw has for Bridge that it does not have for Photoshop, and this is due to the handling of RAW files. All you now have to do is control-click on the image and a palette box will pop up; scroll down to where it says "Enhance", and click "Enhance". This will bring up another dialogue box showing a scaled-up cropped section of your image and an "Enhance" button; click it. Now if you look at the bottom-left thumbnail panel you will see that there are 2 images: the first is the original image, and the second is the enhanced image (x2, with an algorithm applied). You can now tweak the enhanced image with the sliders on the right-hand side accordingly. Click "Open" at the bottom right-hand side of the Bridge application - this automatically opens it in Photoshop. The image will now basically be double the size of the original file and will be in a DNG (RAW) file format.
    Now here is a little hack that I played around with: the AI in Bridge will not let you just continue to enhance the image once you have enhanced it once. So if you have enhanced it and opened it up in Photoshop, save that open DNG file as a PSD and then save a copy as a JPEG. Now open that JPEG you have just saved in Adobe Bridge again and follow the process described above; you can enlarge it again and get a fantastic result, because the AI in Adobe Bridge does a really good job. One thing to note: on your first enhancement do not sharpen or add any clarity or texture; if you do, the second enhancement will inherit those properties, most likely double them, and make your image look over-sharpened or a bit too gritty.
    On some images I have been able to push out an x8 upscale that looks amazing! However, the most regular ones have been x4 upscales. It is best to play around, but also to understand your image and know that no one thing fits all images. You can save most of your image settings in Bridge too, so you can start building different templates for the various types of images you enhance. Hope this makes sense and it helps.

  • @ChrisBamborough
    @ChrisBamborough 1 year ago

    Great video, thanks for making it. I'm fascinated by this topic, but I'm unsure what the value of the output is. What do you use the generative image for: is it an end product or is it inspiration for other art forms?

    • @computationaldesign
      @computationaldesign 1 year ago

      If we take it as an end product, then we end up with too many end products! I constantly work on developing design workflows that help filter out the clutter, bring forth the good stuff, and use those for getting tangible results.