No, they don't. Assuming you're an actual architect and not just someone who does visualisations. AI has an understanding of architecture, and it looks very good, BUT it doesn't know what makes the most sense physically, what the regulations are, or what does and doesn't work. You can still lead projects and oversee the building process in general. I also specialised in visualisations but am shifting to a more technical level at the moment. Keep up with technology or it owns you.
In third world countries, this is already affecting so many job markets. People who don't have even a nuance of the abilities needed to produce a design can call themselves designers. They said, 'We don't need these people to do images of our product now; we can do that with AI.' I'm waiting for the day when people left and right start suing over common design interest.
This channel will grow so fast if you can show, either through Stable Diffusion or Midjourney 5.1, how to render a SketchUp file or a 3ds Max export (JPEG) of a building's exterior into the render we want, without a lot of distortions, using prompts. There is no such video online. And I am positive that if people are not searching for it now, they will very soon!
Hey, thanks a lot for your nice comment! I totally agree; that's where we are headed. It is not quite possible yet to have a simple one-click render solution without many settings and a trial-and-error prompting process. That said, I am working on a video on Midjourney and how to use it to render from a sketch or a simple base image. I will share it as soon as I figure out a nice, straightforward workflow.
Hey, I am an architect from Switzerland and it really amazes me how far we have come. I already gave a presentation in my architectural office and I am about to implement this in our design workflow... After using Midjourney a lot, I came across the problem of not having the control to change just one specific thing... I am now trying a combination of Stable Diffusion and MJ. Thank you for your informative video!
One question I do have: what computer do you use (graphics card and memory), and how long does it take you to create a picture (the AI render process)? I am working with a late-model MacBook Pro and it takes me up to 10 minutes to get a picture.
Hey, thank you for your comment :) That's great to hear, because I think our industry is usually not the fastest at adapting to new technologies :| Thanks for your kind words, I really appreciate it ❤
@@Ramb0li I am using a laptop with an RTX 3060 and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process, the most important component is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average. I usually test my prompt and settings with a low resolution and fewer sampling steps to make the process faster. Once I find a nice prompt combination and the correct settings, I render a final version at a higher resolution. Maybe that can help to speed up the process.
Thank you for taking the time to show us this fantastic tool, and the very inspiring ideas. I believe that AI resources are here to stay. All we have to do is figure out the best way to work with them. We are just starting to work with this, and we still have a lot to learn, including improving our writing skills to make better prompts.
Hi, thanks for your comment and lovely feedback. Totally agree; soon, we will have more ideas on how to use it in a more user-friendly way. Regarding prompting, I believe it will have less impact on the overall result in the future. We will be able to explain what we want in plain text without needing any special keywords or phrases.
@@fc5130 Hey, in this video I used V1.4, because at that time Realistic Vision V2.0 wasn't available yet. I am using V2.0 at the moment. You are very welcome :)
I wish some of the prompting could be replaced by inputting additional images and tagging or labeling through sketching, perhaps like in DALL-E. For example, instead of describing the modern-style green sofa with geometric patterns that I want, I should be able to drop a reference photo of such a sofa, or any other object, into my project. I am sure these kinds of features will come sooner rather than later, but what makes Stable Diffusion amazing is that it's also free and open source.
Thank you so much for sharing this. I am trying to figure out how to do something similar with portraits: keeping the original face and changing the clothes, background, focal length, etc. This is a great starting point.
Hey, thanks for your lovely feedback and comment! Hmm, interesting idea. I will definitely try it out. Please share your result and experience with us!
@@designinput consider making a video in which you show us how to create a 3D render from a SketchUp JPEG without any changes to the composition and the placement of the objects. Would be really helpful.
So this needs to be developed with an interactive user interface. The word prompts need to become labels. Architects want to be able to draw lines from objects and label them, feeding specific information into the AI generation. The architect does not care about multiple options as much as he cares about creating the specific option he desires. He must be enabled through the interface to engage in an interactive back and forth: erasing parts and redrawing them, developing parts of the drawing, adding more specific labels... all in an endeavour to produce a vision as close as possible to what he sees in his mind's eye. This is of utmost importance. All said and done, on a positive note, of all the AI-related attempts I have seen thus far, this is the only sphere that architects may actually find useful and be willing to pay for. It would be idiotic not to take it forward to fruition.
Hey, you are right, and we will soon see more user-friendly interfaces integrated with other software for sure. I totally agree; in the case of architecture, accuracy and quality are way more important than the number of alternatives you have. But even a couple of months ago, having this much control over the whole generation process was impossible. And it is getting better every day. I am sure you will be able to fine-tune your final result very soon. Thank you very much for your comment!
I agree with you. Some of the drawing/erasing features of DALL-E would be amazing! You can already use DALL-E to replace parts of an image, but you can't use it for the entire image2image process.
Nice work. This is clearly the direction concept generation is heading. Probably within another 4 weeks this capability will be available on numerous web apps for free.
WOW, it worked!!! THANKS A LOT!!! I had to download some important stuff like the .pth files and then drag them to the right place, just to find them afterwards under ControlNet / Model like in your example. YOU ARE AMAZING WITH THESE TUTORIALS!!! THANKS
not sure, but I believe you don't need to choose anything from the preprocessor menu; just leave it at none, because otherwise you let SD create a sketch from your sketch as input
wow, this looks amazing! Here is what I am thinking: is it possible to turn an image into a sketch with AI, then use AI on that sketch to produce designs that actually fit the real-life object?
Well done, the videos are very nice and informative. (I think you're Turkish; I was thinking 'what nice English, I understood it so well' and then I realized.) I guess from now on there won't be any Turkish content :)
I actually don't want random results in my designs... and it's really not that hard to texture or model a 3D scene... but it is useful for finding ideas, maybe.
But with the technology available, people will start using it, and it may become the industry norm to have this quality of rendering early in the design stage. It might become less about what we want and more about what the client / market expects. We are already facing similar things with clients expecting renders early on so they can visualise the thinking. They don't understand sketches and drawings like we do. The majority don't actually understand the work we do beyond what colour the kitchen bench should be, which is often how they want to express some control / knowledge in the design process. I also would never be able to produce so many variations at this level of detail in the time it would take to sketch five solid ideas, model them in SketchUp or Rhino, then render them while dealing with V-Ray crashing all the time or too many trees and details slowing things down. I also think this will change architecture schools dramatically in terms of pin-ups. Students who don't have that critical and analytical depth to their thinking will flock to this aesthetics-driven approach to ideation.
your image is already a scribble; you don't need to set the preprocessor to scribble. It can be left at none. Use the preprocessor if you want to change your image into a scribble
Love that you put it as something that's only there to help you come up with more ideas. It is only a tool; we are still the masters and still need to match the image to what the client needs... yes, make more detailed videos
I would like to see a video that uses both a floor plan and 2D designs, for example a kitchen seen from the front. It would be interesting to see if people like me, with limited drawing and no 3D skills, could use tools like Figma to create 2D arrangements of cabinets and floor plans and turn them into effective renderings of the environment.
Hi, thanks for your comment :) There is no such tool that allows us to use both floor plans and side views as input to create 3D models or renders. But the whole industry is moving and improving incredibly fast, and I am pretty sure someone is working on this right now :) When I see something related, I will definitely share it!
@@amagro9495 Thanks for your comment! Hmm, good question. Changing perspective for the same space can be challenging if you are only using text-to-image or image-to-image modes. But if you have a basic 3D model that you can work on, you can manage to do it. I just uploaded a video about creating renders from the 3D model; feel free to check that out. But I will definitely test and experiment with the perspective change!
Hey, thanks for your nice comment! It is an additional LoRA model to improve the overall quality of the image, but it is not necessary to use it. You can learn more about it here: civitai.com/models/13941/epinoiseoffset
@@designinput Thank you! Increasing the image quality is an important task for me. Could you be so kind as to explain what "dslr" in your keywords means?
@@7ckngsane354 hey, dslr refers to DSLR cameras. It is a common keyword in Stable Diffusion prompting, but it is hard to judge the effect of a keyword like this on the overall image quality. Even though it can sometimes help, I don't think it has a huge impact. Feel free to experiment with and without it to see the difference and share the results with us :) You are very welcome ❤
Until we're able to change specific materials on specific objects, I don't see a huge point in this. The sketch would be enough to let your agency or even the client imagine the result, and the AI render could be very misleading compared to a handmade render of the sketch. Just a couple of papers down the line, though, this will be the new process for how it's done.
Thanks for these great instructions! Couldn't figure out how to add "models" in the ControlNet tab; now I have only "none" in the "Model" dropdown, but you have some options with names like "control_sd15_canny/normal/seg", etc. Thanks!
Hi Anton, thanks for your great feedback! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: ruclips.net/video/Uq9N0nqUYqc/видео.html
Hello, my problem with this is that I can't find scribble when I open the preprocessor menu, and my generated images are very different from the sketch I upload; can you help me with that, please? Appreciate your work
i also have that problem! The images it generates are very different (different shapes, window sizes, roof angles, etc.). I also have Realistic Vision V1.4 and ControlNet with MLSD on... But the results are far from what is shown in the video.
There’s nothing better than to show a client a completely finished project right at the beginning so that you have zero wiggle room to change anything and their expectations are super high. haha amazing way to f yourself from the beginning.
Hey, I am using a laptop with RTX3060 and 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process, the most important one is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average. I usually test my prompt and settings with a low resolution and fewer sampling steps to make the process faster. And once I find a nice prompt combination and correct settings, I render a final version with a higher resolution.
I can't figure out how to install it. When I open the webui-user batch file, the console tells me to press any key to continue, and when I do, it just closes the window. I have restarted the PC; it's still not working properly.
Hey, thanks for your comment! You can download Realistic Vision V2.0 here: civitai.com/models/4201/realistic-vision-v20 And you should place it in the Stable Diffusion folder under the models folder. Thanks for your support
can i upload a floor plan to create scenery for every angle of visualisation that is needed? the views have to match in look from angle to angle and should be consistent with the reality around them. give it some years and you'll just place points on a 3d model to do so: keywords for every surface and a hierarchy for the post-production look. from 3d to prompts, to avoid fine-tuning in specific programs you may not understand.
thanks for sharing.. i have a problem getting my designs to look as realistic as possible because i don't have the budget to buy a good-performance PC (i can't even open D5 Render, and i get 0 to 5 fps when using Lumion). if only i can master this and somehow make it render my design images, it will be really helpful for my future!
Hey, you are very welcome; thanks for your comment! Ah, I feel your pain... Well, then, local Stable Diffusion is not a good option in this case, but you can try cloud-based platforms to use Stable Diffusion; for just a couple of bucks, you can use it without any issues. I plan to make a video to share some options for these platforms.
@@designinput ah.. thanks for your insight, i'm gonna look into that! But this video gives me a glimpse of hope that maybe free AI can just render our designs into realistic images and let us adjust the materials/colors too!! I think it will hit a lot of high-budget rendering software, and the very high-spec PCs they need, hard too! 🤣
I followed your steps, but for some reason, it won't use the image/sketch and makes a completely new image instead. How do you get Stable Diffusion to use the sketch as the base to create the CGI on?
@@designinput Thank you for your reply. Yes, after reading through the comments I saw someone mention turning it on, and I did. It still didn't solve the issue. I'm following your new video now to see if that works.
I can't find the scribble preprocessor even though I downloaded the scribble model; other scribble preprocessors like scribble_hed and pidinet are available, so what is the problem?
Hey, if you upload your drawing to ControlNet, you don't need to use a preprocessor. Just choose "none" for the preprocessor and the "scribble" model. Thanks for your comment!
Great video, and I'd like to repeat the steps you demonstrate. The link to "Realistic Vision V1.4" appears broken, but I did find a similar download on Hugging Face. However, I do not have the ControlNet option visible when I go to Stable Diffusion after following all of the steps. What am I missing?
Hey, thanks for letting me know; I replaced it with the updated Realistic Vision V2.0. At the moment, ControlNet doesn't come directly with Stable Diffusion; you need to download it separately and then put it in the ControlNet folder inside the Stable Diffusion folder on your computer. You can download the ControlNet models here: huggingface.co/lllyasviel/ControlNet/tree/main/models And then, you should move them here: C:\SD\stable-diffusion-webui\models\ControlNet After you place the files, restart Stable Diffusion, and you should see the ControlNet section. I will upload a detailed step-by-step tutorial about this in the following days.
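The folder layout described in the reply above can be sketched as a few shell commands (Linux/macOS style; the folder names mirror the Windows path in the reply, and `SD_ROOT` is an assumed example variable pointing at your own webui install):

```shell
# Sketch: placing a downloaded ControlNet model where the webui can find it.
# SD_ROOT is an example path -- point it at your stable-diffusion-webui folder.
SD_ROOT="${SD_ROOT:-$HOME/stable-diffusion-webui}"
mkdir -p "$SD_ROOT/models/ControlNet"

# After downloading e.g. control_sd15_scribble.pth from the Hugging Face page:
# mv ~/Downloads/control_sd15_scribble.pth "$SD_ROOT/models/ControlNet/"

# Then restart the webui so the ControlNet section appears.
ls "$SD_ROOT/models/ControlNet"
```

The same idea applies on Windows; only the path separator and root folder change.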
@@robwest1830 Hey, no, we don't need all of them. If you want to use only your sketches as input, you can download only the scribble model (which is the best for sketches). Or you can try depth mode if you want to use views from your 3D model or photos.
@@designinput I'm very impressed and I would like to try it out myself, but I ran into the same problem: the missing ControlNet option in Stable Diffusion. I created the folder ControlNet in stable-diffusion-webui\models\ and then restarted webui-user.bat, but Stable Diffusion doesn't show ControlNet at all. Am I missing something? I downloaded the scribble model and put it in the ControlNet folder.
I tried it. Constructing / modeling takes up most of the time. Assigning the materials in the render program is quick. In the AI you have to try a lot of prompts and generate a lot of images. That takes longer. And it's inferior in quality.
Hey, if the goal is to create a final-quality render, you are absolutely right. It can easily become more time-consuming than actually modeling everything and creating renders. But if the goal is to create something more conceptual for the early phases of the design process, it can be really beneficial and time-saving.
Hey bro, I did follow along and reached a mad level, crazy stuff, thanks! But for your process I couldn't figure out the ControlNet 1.111 preprocessor; I only got plain ControlNet running! If you can help, that would be great!!
thanks for the tutorial, bro. can you add the link for the Realistic Vision 1.4 .ckpt which you used in the video, please? and one more thing: i can't find ControlNet to add a picture; what's the issue i have?
Hi there! Amazing info. Been trying this for the past few days. At first I had a problem with CUDA and VRAM. I thought it was because of my GPU (I have an Nvidia GTX 1050 with 4GB), so I made a few adjustments following another video I've seen about this (adding medvram or xformers), but they usually change the results from the AI a bit. Did you have any problems with CUDA when you try to generate images? Is there a way to solve this without changing too many parameters? Thx a lot for the info!
Hey, thanks for the comment! I have a GPU with 6GB of VRAM, so I had issues with that too. As far as I know, xformers can change the result slightly, but I had better results with only medvram or lowvram. They use less VRAM but increase the generation time.
@@designinput That's right, I did that too. Just testing some results, the best ones came from using only medvram. Also, I've seen another thing called "Token Merging", but that's in case those other options don't work (xformers, med- or lowvram). Thx a lot again!
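For reference, the flags discussed above go on the COMMANDLINE_ARGS line of webui-user.bat. A sketch of that file (these are the webui's standard low-VRAM options, not settings from the video; pick whichever suit your GPU):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --xformers
rem use --lowvram instead of --medvram on GPUs with very little VRAM

call webui.bat
```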
Hey, thank you! I have used Realistic Vision V2.0 model together with epi_noiseoffset. You can find their links here: civitai.com/models/4201/realistic-vision-v20 civitai.com/models/13941/epinoiseoffset
Hi, you need to download the ControlNet models separately and then put them in the ControlNet folder: C:\stable-diffusion-webui\models\ControlNet You can find all the models here: huggingface.co/lllyasviel/ControlNet/tree/main/models You don't need all of them; if you want to follow this video, you can download only the Scribble model, but feel free to experiment with all of them :) Thanks for your comments!
Hi Michele, thanks for your comment. You need to download the ControlNet models additionally; you can find them here: huggingface.co/lllyasviel/ControlNet/tree/main/models I will upload a step-by-step tutorial about the whole process soon, hope that will be helpful for you.
Hi, is the Stable Diffusion checkpoint important for getting that result? I tried to use the same settings with the same sketch (your sketch) but can't get the same result.
Hey, yes, which model you use has a significant impact on the final image. My current favorite model is Realistic Vision V2.0. You can download it from the link in the video description. Thanks for your comment!
It is kind of terrifying. Many skills will become obsolete with time. It's a new revolution in my eyes, just like the arrival of software when the work was done with hand drafting. It is not like everything is going to become absolutely obsolete. We can't tell clients to trust the AI. Human expertise and experience will still be needed (forever, I guess). But the rendering and modeling field is in great danger.
Hey, thanks for your comment! Of course, I don't think it will (or can) replace people entirely, but it will allow them to speed up the overall process. Because of this, you will need fewer people to work with...
It's GAME OVER. Even for architects... AI will kill that too. You need certain skills to become "this and that"; AI will make everybody everything... so the competition will kill the edge certain people had, and thereby the income, which will drastically be reduced to dust...
Hey, there are many web applications that do that right now. You can get the API directly from Stability AI, or just install it on a cloud computing service (like AWS) and run it there.
Hey, I believe you can. It mostly depends on your GPU and the amount of VRAM it has. I am using RTX 3060 6GB VRAM. So feel free to test it out. If you can't, you can check out this video to use it on Google Colab: ruclips.net/video/Uq9N0nqUYqc/видео.html&lc=Ugxw1pFnOcldtEnPEAt4AaABAg
@@designinput thanks for your answer.. I got the info I needed. These AI tools are developing fast; I do believe better ones, more accurate for the architecture branch, will be developed soon.😊👍
Hi, hmm, good question. I haven't tested much that option yet, but I definitely will now. What kind of drawing do you mean? Like sketch or more technical CAD-style drawings?
Hey, thank you soooo much for this video! Your results are amazing, but mine, well... they s*ck haha... I think the problem is that control_sd15_scribble does not load for me. Can you give links to all of the files (models) we need to download? I am using RunPod; maybe you could help me with that?
hey, so I see I have a problem in the "preprocessor x model" section, since I don't see "...Scribble", but this: "control_v11p_sd15_canny [d14c016b]". I have uploaded it to workspace/stable-diffusion-webui/models/Stable-diffusion/control_sd15_scribble.pth, or should I put it somewhere else? Thank you
Hey, sorry for the late response :( workspace/stable-diffusion-webui/models/Stable-diffusion/control_sd15_scribble.pth this path is totally correct. Let me know if it still doesn't work and we can take a look together
Hey, unfortunately, not really :/ You can primarily describe it with text; additionally, you can add similar textures to your sketch to mimic a similar material. Thanks for your comment!
Hey, unfortunately, I don't have much experience with how to use it on a Mac but you can follow this tutorial to install it. Hopefully, it will help, thanks :) ruclips.net/video/Jh-clc4jEvk/видео.html
Hey, there is no special formula for the text input. I mostly try to follow the structure from the checkpoint I am using. But you can just freely describe the scene you would like to create in your prompt.
Hey, sure, after you download the Realistic Vision model, all you need to do is drop that file to the "C:\stable-diffusion-webui\models\Stable-diffusion" folder. After that, if you start Stable Diffusion again, you can find it in the available model's menu. Let me know if you need any help. Thank you :)
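The checkpoint step in the reply above can be sketched the same way in shell form (the Windows path in the reply is the real target; `SD_ROOT` and the checkpoint filename below are placeholders for illustration):

```shell
# Sketch: placing a downloaded checkpoint where Stable Diffusion will list it.
SD_ROOT="${SD_ROOT:-$HOME/stable-diffusion-webui}"
mkdir -p "$SD_ROOT/models/Stable-diffusion"

# After downloading the Realistic Vision checkpoint (filename is an example):
# mv ~/Downloads/realisticVisionV20.ckpt "$SD_ROOT/models/Stable-diffusion/"

# Restart Stable Diffusion and pick the model from the checkpoint dropdown.
ls "$SD_ROOT/models/Stable-diffusion"
```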
Hi @@robwest1830, you can find all the necessary resources in the link in the video description. For installation, I will share a quick tutorial, but until then, feel free to follow this one: ruclips.net/video/hnJh1tk1DQM/видео.html He clearly explains everything you need to install to start using it.
Hey, thanks for your comment! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: ruclips.net/video/Uq9N0nqUYqc/видео.html
I found out how to install ControlNet, but I can only select the preprocessor and not the model in the tab. In the video you have multiple options... my only one is "none". Do you know how to fix it?
how did you create the sketch, with what, and how long did it take? it cannot be done in 5 min 😂 there are many elements in that picture: slab, wall, roof, beam, column, windows, kitchen accessories, objects 😂😂 etc.
Hey, I created the sketch on my iPad in around 10 minutes, I guess. But the main point of the video is not about the sketch. I focused more on the sketch to render workflow. That's why I didn't want to include the sketching part :) You can add any details you wish to your drawing or leave it for AI's imagination!
the problem soon will be: "why pay an architect, an interior designer, a 3D designer, when I can ask an AI to do everything in just a few seconds?" it's funny to see people so excited about AI, not understanding that the AI is going to replace them and leave them without their jobs
ermm, i mean, this tool seems to work great for concepts. AI is expert at creating a photo which, at a glance, is extremely beautiful. but we are talking about constructing a real building. if you look at the images it produced, they all depict spaces which can't actually exist. right away i noticed the glass panels and how they don't actually make sense (AI hasn't mastered object permanence). to me, it's more likely that while some jobs will be lost/condensed, it's more going to be a skill that people must learn. if architects and designers learn how to use AI well, they'll have a powerful tool for presenting concepts to clients which can then be refined into real designs. AI is powerful, but it needs someone to steer it.
AI can't be held liable for a design that is actually to be erected; it can't sign and seal the documents needed for your dream building. Yes, surely it can design buildings, but are the designs it produces sustainable or efficient? Do they meet the client's needs and building laws? I believe AI is still a tool that will help architects a lot, from conceptualization to presentations, but AI can't supplant an architect's creativity and liabilities.
The same thing happened when the first calculators were invented: "these are gonna replace mathematicians." But 😂 calculators and computers didn't replace anyone. So yeah, my point is that it's just a tool and depends on the user. Maybe we won't require as much engineering skill, but it doesn't mean we don't require engineering skills at all. In corporate settings, maybe what you're thinking is true to a small extent. But only a very small amount.
You are obviously not a practicing architect and do not understand what it is architects actually do. These are just renderings. The production of inspirational imagery is a tiny part of an architect’s job and is not the same thing as designing and constructing a building.
Well, you have cooking RUclips videos and ingredients sent to your front door, but I don't think McD's or any restaurants suffered. Sometimes people can't, or don't want to, do things themselves. But I do think architects are like software designers. Sometimes it's as simple as copy and paste lol, and they make the software for the architects :) so it's funny. Yes, I'm one, so I know.
Hey, what do you mean exactly by different images? It is possible to have a certain level of control over the process with ControlNet but up to some level. Even if you keep the seed number the same, the final images probably will be very different from each other. I am sure we will have more control over it soon with all the new developments, but it is not quite possible to generate exactly the same image multiple times. Thanks for your comment!
My younger clients will be able to do all my previous work in arch visualisation within a year. GAME OVER!!!
All the fulfilling human jobs are on the way out! Really depressing.
@@Tepalus AI has, in a way, just been born, and will already be on a totally new level within a year!!! Beast mode in two years... What about 10 years?
If this is enough to replace you, I question what you do as an architect. This just helps you make renders more easily.
how do you know that? maybe every architect and other creative person has heard about AI by now and is following the topic / using it?
@@adoyer04 maybe... but maybe I am a wizard 🤷🏻♂️
@@petera4813 I'm an architect at a firm and we want it but can't find it.
@@michaelbooth90 if you find it... let me know...!!!! ;)
Hey, thanks a lot for your nice comment! I totally agree; that's where we are headed to. It is not quite possible to have a simple one-click render solution yet without many settings and prompting the "try error and experimenting" process. Although, I am working on a video for Midjourney and how to use it to render from a sketch or base simple image. I will share it as soon as I figure out a nice, straightforward workflow.
Hey I am a architect from Switzerland and it really amazes me how far we came. I already did a presentation in my architectural office and I am about to implement it in our design workflow... After using a lot of midjourney I came across the problem not having the control to just change a specific thing... I am trying now a combination of Stable Diffusion and MJ. Thank you for your informative video!
One question do I have: What computer do you use, graphic card and memory and how long does it take to for you to create a picture (AI render process)? I am working with a late MacBookPro and it takes me up to 10min to have a picture.
Hey, thank you for your comment :) That's great to hear because I think, usually, our industry is not the fastest in case of adaptations to new technologies :|
Thanks for your kind words, I really appreciated it ❤
@@Ramb0li I am using a laptop with RTX3060 and 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process, the most important one is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average.
I usually test my prompt and settings with a low resolution and fewer sampling steps to make the process faster. And once I find a nice prompt combination and correct settings, I render a final version with a higher resolution. Maybe that can help to speed up the process.
Thank you for taking the time to show us this fantastic tool and these very inspiring ideas. I believe that AI tools are here to stay. All we have to do is figure out the best way to work with them. We are just starting to work with this, and we still have a lot to learn, including improving our writing skills to make better prompts.
Hi, thanks for your comment and lovely feedback. Totally agree; soon, we will have more ideas on how to use it in a more user-friendly way.
Regarding prompting, I believe it will have less impact on the overall result in the future. We will be able to describe what we want in plain text without needing any special keywords or phrases.
🧡
You can find all the resources here: designinputstudio.com/create-realistic-render-from-sketch-using-ai-you-should-know-this/
ControlNet Paper: arxiv.org/pdf/2302.05543.pdf
ControlNet Models: huggingface.co/lllyasviel/ControlNet/tree/main/models
Realistic Vision V2.0: civitai.com/models/4201/reali...
Install Stable Diffusion Locally (Quick Setup Guide): ruclips.net/video/Po-ykkCLE6M/видео.html
Instagram: instagram.com/design.input/
Do you use Realistic Vision V2.0, or V1.4 like all the tutorials? Thank you!
@@fc5130 Hey, in this video I used V1.4, because at that time Realistic Vision V2.0 wasn't available yet. I am using V2.0 at the moment.
You are very welcome :)
@@designinput Thank u :)
I wish some of the prompting could be replaced by inputting additional images and tagging or labeling through sketching, perhaps like in DALL-E. For example, instead of describing the modern-styled green sofa with geometric patterns that I want, I should be able to drop in a reference photo of such a sofa, or of any other object, inside my project. I am sure these kinds of features will come sooner rather than later, but what makes Stable Diffusion amazing is that it's also free and open source.
Just give it time and everything you described will be possible.
Thank you so much for sharing this. I am trying to figure out how to do something similar with portraits: keeping the original face while changing the clothes, background, focal length, etc. This is a great starting point.
Hey, thanks for your lovely feedback and comment! Hmm, interesting idea. I will definitely try it out. Please share your result and experience with us!
@@designinput Consider making a video in which you show us how to create a 3D render from a SketchUp JPEG without any changes to the composition or the placement of the objects. It would be really helpful.
PERFECT!!!! That's all I can say about it. Nice work, bro 👍
Hey, thanks a lot for your comment
I didn't know such a thing was possible, from napkin sketch to render. Thanks!
Hi, thanks for your comment. You are very welcome, happy to hear it was helpful!
So this needs to be developed with an interactive user interface.
Word prompts need to become labels. Architects want to be able to draw lines from objects and label them, feeding specific information into the AI generation.
The architect does not care about multiple options as much as he cares about creating the specific option he desires.
He must be enabled, through the interface, to engage in an interactive back and forth: erasing parts and redrawing them, developing parts of the drawing, adding more specific labels... all in an endeavour to produce a vision as close as possible to what he sees in his mind's eye.
This is of utmost importance.
All said and done, on a positive note: of all the AI-related attempts I have seen thus far, this is the only sphere that architects may actually find useful and be willing to pay for.
It would be idiotic not to take it forward to fruition.
Hey, you are right, and we will soon see more user-friendly interfaces integrated with other software for sure.
I totally agree; in the case of architecture, accuracy and quality are way more important than the number of alternatives you have. But even a couple of months ago, having this much control over the whole generation process was impossible. And it is getting better every day. I am sure you will be able to fine-tune your final result very soon.
Thank you very much for your comment!
I agree with you. Some of the drawing/erasing features of DALL-E would be amazing! You can already use DALL-E to replace parts of an image, but you can't use it for the entire image2image process.
I certainly agree with you that text prompts need to become labels. Great idea!
Nice work. This is clearly the direction of how concepts are generated. Probably in another 4 weeks this capability will be available on numerous web apps for free.
Hey Tom, thanks for your comment! Totally agree! We will start to see this workflow integrated into many different applications soon.
I had never heard of Stable Diffusion before and it looks really helpful!! Please make a tutorial on how to install it!!
Thank you for your comment! Definitely, I will make one soon.
I agree, this would be helpful!
@H M 😂😂
Not only is it helpful, it'll also save us lots of money and time.
@@knight32d haha :) Totally agree! Thanks for your comment!
WOW, it worked!!! THANKS A LOT!!! I had to download some important files (the .pth files) and then drag them to the right place,
just to find them afterwards under ControlNet / Model, like in your example. YOU ARE AMAZING WITH THESE TUTORIALS!!! THANKS
Hi, you are right, it's a bit of a detailed and long process for an architect, but I'm super happy to hear it worked 🧡 Thanks for the lovely comment!
Exactly what I was looking for, thank you!
Great to hear! You are very welcome :)
It is almost what I was searching for, thank you for your help!
You are very welcome, thanks for your comment!
Not sure, but I believe you don't need to choose anything from the preprocessor menu; just leave it at "none", because otherwise you let SD create a sketch from a sketch as input.
Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!
Brilliant tutorial. Many thanks for this.
THIS is a game changer.
Hi, it really is... Thanks for the comment!
I am an architectural designer. I've been doing this for 15 years. I don't think I need it anymore.
Wow, this looks amazing! Here is what I am thinking: is it possible to turn an image into a sketch with AI, then use AI on that sketch to produce designs that actually fit the real-life object?
There are a lot of programs that can help turn it into a sketch, but the result will not be as clear as rendering it.
Directly from SketchUp to AI, for testing different looks.
Well done, the videos are very nice and informative. (I think you are Turkish; I thought "what nice English, I understood it so well," and then I realized.) I guess there won't be any Turkish content from now on :)
🧡🧡
Very clear explanation, greetings!
Thanks a lot, but it also would have helped to show how to install ControlNet.
I actually don't want random results in my designs... and it's really not that hard to texture or model a 3D scene... but it is useful for finding ideas, maybe.
But with the technology, people will start using it, and it may become the industry norm to have this quality of rendering early in the design stage. It might become less about what we want and more about what the client/market expects. We are already facing similar things, with clients expecting renders early on so they can visualise the thinking. They don't understand sketches and drawings like we do. The majority don't actually understand the work we do beyond what colour the kitchen bench should be, which is often how they want to express some control/knowledge in the design process. I also would never be able to produce so many variations at this level of detail in the time it would take to sketch five solid ideas, model them in SketchUp or Rhino, and then render them, dealing with V-Ray crashing all the time or too many trees and details slowing things down. I also think this will change architecture schools dramatically in terms of pin-ups. Students who don't have critical and analytical depth to their thinking will flock to this aesthetic-driven approach to ideation.
Your image is already a scribble; you don't need to set the preprocessor to scribble. It can be left at "none". Use the preprocessor only if you want to turn your image into a scribble.
Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!
Love that you frame it as something that only helps you come up with more ideas. It is only a tool; we are still the masters and still need to match the image to what the client needs. And yes, make more detailed videos!
Exactly! Thank you very much for your comment!
I will share a detailed step-by-step tutorial about it very soon.
I would like to see a video that uses both a floor plan and 2D designs, for example a kitchen seen from the front. It would be interesting to see if people like me, with limited drawing skills and no 3D skills, could use tools like Figma to create 2D arrangements of cabinets and floor plans to produce effective renderings of the environment.
Hi, thanks for your comment :) There is no tool yet that allows us to use both floor plans and side views as input to create 3D models or renders. But the whole industry is moving and improving incredibly fast, and I am pretty sure someone is working on this right now :)
When I see something related, I will definitely share it!
@@designinput Congrats on the video. Do you know if it is possible to generate, from a single image/design, several more with different perspectives?
@@amagro9495 Thanks for your comment! Hmm, good question. Changing the perspective of the same space can be challenging if you are only using text-to-image or image-to-image modes. But if you have a basic 3D model that you can work with, you can manage it. I just uploaded a video about creating renders from a 3D model; feel free to check that out.
But I will definitely test and experiment with the perspective change!
@@designinput I think he means using a floorplan in SD to generate a "3D rendering"
This is amazing. I have a question: What does the mean in your key words? What does dslr mean as well? Much appreciated!
Hey, thanks for your nice comment! is an additional Lora model to improve overall quality of the image, but it is not necessary to use it. You can learn more about it here: civitai.com/models/13941/epinoiseoffset
@@designinput Thank you! Increasing image quality is an important task for me. Could you be so kind as to explain what "dslr" in your keywords means?
@@7ckngsane354 hey, dslr refers to the DSLR cameras. It is a common keyword for stable diffusion prompting, but it is hard to judge the effect of a keyword like this on the overall image quality. Even though sometimes it can help, I don't think it has a huge impact. Feel free to experiment with/without it to see the difference between them and share the results with us :)
You are very welcome ❤
@@designinput 👍
it is fantastic!!! thank you so much for sharing
Hi, thanks a lot for your feedback
Until we're able to change specific materials on specific objects, I don't see a huge point in this.
The sketch would be enough to let your agency or even the client imagine the result, and the AI render could be very misleading compared to a handmade render of the sketch.
Just a couple of papers down the line, though, this will be the new way it's done.
Thanks for these great instructions! I couldn't figure out how to add "models" in the ControlNet tab; now I have only "none" in the "Model" dropdown, but you have some options with names like "control_sd15_canny/normal/seg", etc. Thanks!
Hi Anton, thanks for your great feedback! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: ruclips.net/video/Uq9N0nqUYqc/видео.html
looks amazing!
Thank you!
Thank you!
:)
Thank you for your time
Hey, you are very welcome! Thanks a lot for your comment, happy to hear that!
Thank you for the tips! ;-)
Great video. You should do one with the same sketches but using Midjourney as a comparison please.
Hi, thanks for your lovely comment and suggestion! I am currently working on that, I will upload a video about it soon!
Hello, my problem with this is that I can't find scribble when I open the preprocessor menu, and my generated images are very different from the sketch I upload; can you help me with that, please? I appreciate your work.
I also have that problem! The images it generates are very different (different shapes, window sizes, roof angles, etc.). I also have Realistic Vision V1.4 and ControlNet with MLSD on... but the results are far from what is shown in the video.
Great. Thank you very much.
There's nothing better than showing a client a completely finished project right at the beginning, so that you have zero wiggle room to change anything and their expectations are super high. Haha, an amazing way to f yourself from the start.
The renderings are niceeeeeeeeeee!
Hi, thanks a lot for your lovely feedback
That is really impressive
Hey, thank you!
Could you please tell me about your computer's specs? What graphics card are you using, and does it take a long time to generate each image?
Hey, I am using a laptop with RTX3060 and 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process, the most important one is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average.
I usually test my prompt and settings with a low resolution and fewer sampling steps to make the process faster. And once I find a nice prompt combination and correct settings, I render a final version with a higher resolution.
I can't figure out how to install it. When I open the webui-user batch file, the console tells me to press any key to continue, and when I do, it just closes the window. I have restarted the PC; it's still not working properly.
Very inspirational
Thank you! Now you are my teacher!
Hey, glad to hear you liked it :)
Haha, thanks a lot for your lovely comment!
Hi! Thanks for the video, very interesting. How did you convert the safetensors file for Realistic Vision V2.0 to ckpt?
Thanks, and keep up the good work!
Hey, thanks for your comment! You can download Realistic Vision V2.0 here: civitai.com/models/4201/realistic-vision-v20
And you should place it in the Stable-diffusion folder under the models folder.
Thanks for your support
Wow! Fascinating, thank you for making this video.
Hey, thanks a lot for your lovely feedback and comment
Can I upload a floor plan to create scenery for every angle of visualisation that is needed? The views have to match from angle to angle and should be consistent with the reality around them. Give it some years, and you'll just place points on a 3D model to do so: keywords for every surface and a hierarchy for the post-production look. From 3D to prompts, to avoid fine-tuning in specific programs you may not understand.
Thanks for sharing!
I have a problem getting my designs to look as realistic as possible because I don't have the budget for a high-performance PC (I can't even open D5 Render, and I get 0 to 5 fps when using Lumion). If only I could master this and somehow use it to render my design images, it would be really helpful for my future!
Hey, you are very welcome; thanks for your comment! Ah, I feel your pain... Well, then, local Stable Diffusion is not a good option in this case, but you can try cloud-based platforms to run Stable Diffusion; for just a couple of bucks, you can use it without any issues. I plan to make a video to share some options for these platforms.
@@designinput Ah, thanks for your insight, I'm going to look into that! This video gives me a glimpse of hope that maybe free AI can just render our designs into realistic images and let us adjust the materials/colors too!! I think it will also hit a lot of high-budget rendering software, and the very-high-spec PCs they need, hard! 🤣
I followed your steps, but for some reason it won't use the image/sketch and makes a completely new image. How do you get Stable Diffusion to use the sketch as the base to create the CGI?
Hey Daniel, thanks for your comment. It is probably related to ControlNet. Did you enable it before you generated the new image?
@@designinput Thank you for your reply. Yes, after reading through the comments I saw someone mention turning it on, and I did. It still didn't solve the issue. I'm following your new video now to see if that works.
Is it possible to get an interior using a 3D model of a lamp?
Ammmmmaaaaaaaazzzzing
Hey, thank you!
I can't find the scribble preprocessor even though I downloaded the scribble model; other scribble preprocessors like scribble_hed and pidinet are available, so what is the problem?
Hey, if you upload your drawing to ControlNet, you don't need to use a preprocessor. Just choose "none" for the preprocessor and the "scribble" model. Thanks for your comment!
@@designinput Ok 👍 thanks for your help
Thanks so much, this was helpful for me!
Hey, thank you for your comment. So happy to hear that, you are very welcome ❤️
Great video, and I'd like to repeat the steps you demonstrate. The link to "Realistic Vision V1.4" appears broken, but I did find a similar download on Hugging Face. However, I do not have the ControlNet option visible when I go to Stable Diffusion after following all of the steps. What am I missing?
Hey, thanks for letting me know; I replaced it with the updated Realistic Vision V2.0. At the moment, ControlNet doesn't come with Stable Diffusion directly; you need to download it separately and then put it in the ControlNet folder inside the Stable Diffusion folder on your computer.
You can download the ControlNet models here: huggingface.co/lllyasviel/ControlNet/tree/main/models
And then, you should move them here: C:\SD\stable-diffusion-webui\models\ControlNet
After you place the files, restart Stable Diffusion, and you should see the ControlNet section. I will upload a detailed step-by-step tutorial about this in the following days.
@@designinput Do we need all of the ControlNet files? There are 8 files of 4.71 GB each.
@@robwest1830 Hey, no, we don't need all of them. If you want to use only your sketches as input, you can download just the scribble model (which is best for sketches).
Or you can try the depth model if you want to use views from your 3D model or photos.
@@designinput I'm very impressed, and I would like to try it out myself, but I ran into the same problem: the ControlNet option is missing in Stable Diffusion.
I created the ControlNet folder in stable-diffusion-webui\models\,
then restarted webui-user.bat, but Stable Diffusion doesn't show ControlNet at all. Am I missing something? I downloaded the scribble model and put it in the ControlNet folder.
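To summarize the folder discussion in this thread, the setup amounts to two steps. This is only a sketch of a shell session, not part of the video; the install path `C:\SD\stable-diffusion-webui` comes from a comment above, so adjust it to your own setup:

```shell
# 1) Download just the scribble model from the ControlNet repo
curl -LO "https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_scribble.pth"

# 2) Place it in the webui's ControlNet model folder, then restart webui-user.bat
mv control_sd15_scribble.pth /c/SD/stable-diffusion-webui/models/ControlNet/
```

Note the destination is models\ControlNet, not models\Stable-diffusion; the latter is for checkpoints like Realistic Vision.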
I still think it's hard to control and fine-tune the AI image; it's still better to handle it with 3D software.
Is there a way to incorporate specific furniture we might see in an online store?
I tried it. Constructing/modeling takes up most of the time; assigning the materials in the render program is quick. With the AI you have to try a lot of prompts and generate a lot of images. That takes longer, and it's inferior in quality.
Hey, if the goal is to create a final-quality render, you are absolutely right. It can easily become more time-consuming than actually modeling everything and creating renders. But if the goal is creating something more conceptual for the early phases of the design process, it can be really beneficial and time-saving.
Thank you!
Super
Is this AI paid or free? Can I use it online, or do I need to download a program?
Thanks 😊
Hey bro, I followed along and reached a crazy level, mad stuff, thanks! But for this process I couldn't figure out the ControlNet 1.1 preprocessor; I only got ControlNet running! If you can help, that would be great!!
I’m so happy. Omg
Nice room decor video
Hey, thanks for your comment :) Happy to hear that you liked it!
Thanks for the tutorial, bro!
Can you add the link for the Realistic Vision 1.4 .ckpt you used in the video, please? And one more thing: I can't find ControlNet to add a picture; what's the issue I have?
same issue
Who needs architecture school anymore?
Hi there! Amazing info. I've been trying this for the past few days. At first I had a problem with CUDA and VRAM. I thought it was because of my GPU (I have an Nvidia GTX 1050 with 4GB), so I made a few adjustments based on another video I've seen about this (adding medvram or xformers), but they usually change the results from the AI a bit.
Did you have any problems with CUDA when you tried to generate images? Is there a way to solve this without changing too many parameters?
Thanks a lot for the info!
Hey, thanks for the comment! I have a GPU with 6GB VRAM, so I had issues with that too. As far as I know, xformers can change the result slightly, but I had better results only with medvram or lowvram. They use less vram but increase the generation time.
@@designinput That's right, I did that too. Testing some results, the best ones came from using only medvram. Also, I've seen another option called "Token Merging", in case the others (xformers, medvram, lowvram) don't work.
Thanks a lot again!
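For anyone else hitting CUDA out-of-memory errors, the flags discussed in this thread go into `webui-user.bat` via `COMMANDLINE_ARGS` (these are standard AUTOMATIC1111 webui options; pick `--medvram` or `--lowvram`, not both — this is a config sketch, adjust to your setup):

```shell
rem webui-user.bat -- trade generation speed for lower VRAM use
set COMMANDLINE_ARGS=--medvram --xformers
call webui.bat
```

`--lowvram` saves even more memory than `--medvram` but slows generation further, so try `--medvram` first.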
Great tool for visualization, but architecture is not just visual. Anyway, cool stuff!!!
Hi, totally agree! Thanks for the comment :)
No need for a preprocessor in the ControlNet tab when you already have an image made for the ControlNet model.
Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!
impressive!
Thank you!
Hi dude, thanks for sharing ❤
Hey, thank you for the feedback ❤ Happy to hear that you liked it!
@@designinput Where r u from?
Superb.
Hey Berk, thanks a lot for the feedback! ❤
Hi, thanks for your video! Quick question: which LoRA model did you use, and where can I download it?
Hey, thank you! I used the Realistic Vision V2.0 model together with epi_noiseoffset. You can find the links here:
civitai.com/models/4201/realistic-vision-v20
civitai.com/models/13941/epinoiseoffset
@@designinput Thank you so much! Really appreciate it!
The scribble model isn't in my dropdown.
thanks a lot❤
Thanks for your kind comment! Glad to hear that you liked it :)
Hi, everything is fine but under the controlnet dropdown under models it says that I have none. Where do I get the ones you have?
Hi, you need to download the ControlNet models separately and then put them in the ControlNet folder: C:\stable-diffusion-webui\models\ControlNet
You can find all the models here:
huggingface.co/lllyasviel/ControlNet/tree/main/models
You don't need all of them; if you want to follow this video, you can download only the Scribble model, but feel free to experiment with all of them :)
Thanks for your comments!
Great video! I have a question: how do I activate ControlNet in the text-to-image prompt? I don't see this option with my realistic_vision_1.4.
Hi Michele, thanks for your comment. You need to download the ControlNet models additionally; you can find them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
I will upload a step-by-step tutorial about the whole process soon, hope that will be helpful for you.
@@designinput Thanks for the reply. I found the video... but I still have problems.
Hey, I need your help. Could you please render one image for my college project? Because I don't have a laptop.
Hey, thanks for the comments! How can I help? Let me know please, thanks :)
I can send you one sketch; could you please convert it into a colour image?
Please reply as soon as possible.
Hi, is the Stable Diffusion checkpoint important for getting that result? I tried using the same settings with the same sketch (your sketch) but couldn't get the same result.
Hey, yes, which model you use has a significant impact on the final image. My current favorite model is Realistic Vision V2.0. You can download it from the link in the video description.
Thanks for your comment!
@@designinput I tried; however, the result still didn't follow your sketch. I'm using the Google Colab one, though.
How do I load control_sd15_scribble into the Model dropdown? Thank you.
It is kind of terrifying. Many skills will become obsolete with time. It's a new revolution in my eyes, just like the arrival of software when the work was done by hand drafting. It's not like everything is going to become absolutely obsolete; we can't tell clients to trust the AI. Human expertise and experience will still be needed (forever, I guess). But this rendering and modeling field is in great danger.
Hey, thanks for your comment! Of course, I don't think it will (or can) replace people entirely, but it will allow them to speed up the overall process. Because of this, you will need fewer people to work with...
It's GAME OVER. Even for architects… AI will kill that too. You used to need certain skills to become "this and that"; AI will make everybody everything… so the competition will kill the edge certain people had, and thereby the income, which will drastically be reduced to dust…
Is there any way we can use the API to create our own app that does this in a more "one click" kind of way with the correct prompts?
Hey, there are many web applications that do that right now. You can get the API directly from Stability AI, or just install it on a cloud computing service (like AWS) and run it there.
Can I install Stable Diffusion on my home PC? It has an RTX 2060 graphics card and a 10th-gen i7 with 16GB RAM; will it work?
Hey, I believe you can. It mostly depends on your GPU and the amount of VRAM it has. I am using an RTX 3060 with 6GB of VRAM, so feel free to test it out.
If you can't, you can check out this video to use it on Google Colab: ruclips.net/video/Uq9N0nqUYqc/видео.html&lc=Ugxw1pFnOcldtEnPEAt4AaABAg
I have not seen any mention anywhere of the resolution of the rendered images. How big, or what size, can you get from this? Thanks 😊
Hi, by default it generates 512x512, but you can enter custom values up to 2048x2048. I think that's the limit.
Thanks for your comment :)
@@designinput Thanks for your answer, I got the info I needed. These AI tools are developing fast; I believe better tools, more tailored to the architecture branch, will be developed soon. 😊👍
@@dalegas76 you are very welcome, that's great! Totally agree, I believe it will be very soon :)
Is there a method to transform photos into drawings?
Hi, hmm, good question. I haven't tested that option much yet, but I definitely will now. What kind of drawing do you mean? A sketch, or more technical CAD-style drawings?
@@designinput like sketch
@@ABDUCTOLOGY Photoshop has had an effect that does that for a while. It might be called "cartoonize" or something like that.
Hey, thank you soooo much for this video! Your results are amazing, but mine, well, they s*ck haha...
I think the problem is that control_sd15_scribble does not load for me. Can you give links to all of the files (models) we need to download? I am using RunPod; maybe you could help me with that?
Hey, so I see I have a problem with the "preprocessor x model", since I don't see '....Scribble', only this: 'control_v11p_sd15_canny [d14c016b]'.
I have uploaded it to workspace/stable-diffusion-webui/models/Stable-diffusion/control_sd15_scribble.pth
Or should I put it somewhere else?
Thank you
Hey, sorry for the late response :( ControlNet models should go under workspace/stable-diffusion-webui/models/ControlNet/ rather than the Stable-diffusion folder (that one is for checkpoints). Let me know if it still doesn't work, and we can take a look together.
How can I convert a 2D DWG file into a 3D render using AI?
thank you 🙏🙏🙏🙏🙏🙏
Thank you, glad that you liked it! ❤
Is there a way for it to reference real-world materials? For example, if I put in a link for a backsplash, can it use that?
Hey, unfortunately, not really :/ You can primarily describe it with text; additionally, you can add similar textures to your sketch to mimic a similar material.
Thanks for your comment!
Hi there! I have a Mac. How can I install stable diffusion?
Hey, unfortunately, I don't have much experience with using it on a Mac, but you can follow this tutorial to install it. Hopefully it will help, thanks :)
ruclips.net/video/Jh-clc4jEvk/видео.html
Can you give examples of input text that works?
Hey, there is no special formula for the text input. I mostly try to follow the structure from the checkpoint I am using, but you can just freely describe the scene you would like to create in your prompt.
Hi there, thanks for inspiring tutorial; what does " " mean?
Hey, thanks a lot for your comment, much appreciated
Can you tell me how to install the "Realistic Vision V1.4" or "Realistic Vision V2.0" model after downloading it? Thank you ^^
Hey, sure; after you download the Realistic Vision model, all you need to do is drop the file into the "C:\stable-diffusion-webui\models\Stable-diffusion" folder. After that, if you start Stable Diffusion again, you will find it in the available models menu.
Let me know if you need any help. Thank you :)
@@designinput Yes, I did that, thank you so muchhhhh!
@@nevergiveuptrader great, happy to hear it worked. You are very welcome :)
I don't even get how to download it :D Please tell me!
Hi @@robwest1830, you can find all the necessary resources in the link in the video description.
For installation, I will share a quick tutorial, but until then, feel free to follow this one:
ruclips.net/video/hnJh1tk1DQM/видео.html
He clearly explains everything you need to install to start using it.
Forgive my ignorance: how do you install Control Net?
Hey, thanks for your comment! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: ruclips.net/video/Uq9N0nqUYqc/видео.html
I found out how to install ControlNet, but I can only select the preprocessor, not the model, in the tab. In the video you have multiple options... my only one is "none".
Do you know how to fix it?
Hey, did you download the ControlNet models and place them in the ControlNet folder under the models folder?
@@designinput Thanks for the reply again. Now it works ❤️
@@michelearchitecturestudent1938 you are very welcome ❤
So does this mean AI has officially replaced 3D architecture artists??!!
How did you create the sketch, with what, and how long did it take? It cannot be done in 5 min 😂 There are many elements in the image: slab, wall, roof, beam, column, windows, kitchen accessories, objects 😂😂 etc.
Hey, I created the sketch on my iPad in around 10 minutes, I guess. But the main point of the video is not the sketch; I focused more on the sketch-to-render workflow, which is why I didn't include the sketching part :) You can add any details you wish to your drawing, or leave them to the AI's imagination!
The problem soon will be: "Why pay an architect, an interior designer, or a 3D designer, when I can ask an AI to do everything in just a few seconds?" It's funny to see people so excited about AI without understanding that the AI is going to replace them and leave them without their jobs.
Ermm, I mean, this tool seems to work great for concepts. AI is expert at creating a photo which, at a glance, is extremely beautiful.
But we are talking about constructing a real building. If you look at the images it produced, they all depict spaces which can't actually exist. Right away I noticed the glass panels and how they don't actually make sense (AI hasn't mastered object permanence).
To me, it's more likely that while some jobs will be lost or condensed, this is mostly going to be a skill that people must learn. If architects and designers learn how to use AI well, they'll have a powerful tool for presenting concepts to clients, which can then be refined into real designs. AI is powerful, but it needs someone to steer it.
AI can't be held liable for a design that is actually to be erected; it can't sign and seal the documents needed for your dream building. Yes, surely it can design buildings, but are the designs it produces sustainable or efficient? Do they meet the client's needs and building laws? I believe AI is still a tool that will help architects a lot, from conceptualization to presentations, but AI still can't supplant an architect's creativity and liabilities.
Remember when the first calculators were invented, people said:
"These are going to replace mathematicians."
But 😂 calculators and computers didn't replace anyone. So yeah.
My point is that it's just a tool, and it depends on the user. Maybe we won't require as much engineering skill, but that doesn't mean we won't require engineering skills at all.
In corporate settings, maybe what you're thinking is true to a small degree. But only a very small one.
You are obviously not a practicing architect and do not understand what architects actually do. These are just renderings. The production of inspirational imagery is a tiny part of an architect's job and is not the same thing as designing and constructing a building.
Well, you have cooking RUclips videos and ingredients sent to your front door, but I don't think McD's or any restaurant has suffered. Sometimes people can't, or don't want to, do things themselves. But I do think architects are like software designers. Sometimes it's as simple as copy and paste lol
And they make the software for the architects :) so it's funny.
Yes, I'm one, so I know.
Hi, do you know of any A.I. that allows you to change the camera views for interiors and exteriors?
Hey, unfortunately, it's not possible yet, so some manual work is needed. But maybe in the near future, why not?
@@designinput Thank you for your reply. That will be a game changer.
@@mlee9049 Absolutely!
Why does it keep generating a different image, not the same as my uploaded one?
I even checked "enable" on ControlNet.
Hey, what do you mean exactly by different images?
It is possible to have a certain level of control over the process with ControlNet, but only up to a point. Even if you keep the seed number the same, the final images will probably be very different from each other.
I am sure we will have more control over it soon with all the new developments, but it is not quite possible to generate exactly the same image multiple times.
Thanks for your comment!