Image to 3D Model (AI + Architecture)

  • Published: 4 Aug 2024
  • This video showcases programs and research that help convert your 2D images into 3D digital mesh geometry. We experiment with converting three unique images: an AI-generated image of an architectural building, an image of an architectural connection detail, and a 360° panoramic view.
    ZoeDepth
    huggingface.co/spaces/shariqf...
    Symmetry-Driven 3D Reconstruction from Concept Sketches
    dl.acm.org/doi/abs/10.1145/35...
    3D Guru AI
    www.3dguru.ai/
    @UHStudio
    00:00 - Intro
    00:38 - ZoeDepth (Architectural Model)
    03:20 - ZoeDepth (Connection Detail)
    04:52 - ZoeDepth (360 Panoramic)
    08:31 - Symmetry-Driven 3D Reconstruction from Concept Sketches
    10:20 - 3D Guru AI Sneak Preview
    12:13 - Traditional vs Emergent Architectural Practice
    14:01 - Conclusion (UH Studio Design Academy Podcast)
    #3dprinting #artificialintelligence #architecture
  • Hobby

Comments • 81

  • @ArgoBeats
    @ArgoBeats 1 year ago +2

    This is crazy... thank you for the heads-up, Stephen!

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      I know, right!? These tools are getting really interesting 🤔

  • @rodfarid2056
    @rodfarid2056 1 year ago +1

    Keeps getting better and better... thanks Stephen 🤝

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      These tools will disrupt the current standards for practice. And this is the worst these tools will ever be!

  • @09591000
    @09591000 1 year ago +2

    Thank you! Useful tools. Good luck!

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +2

      Thank you! Hope the tools are useful for you too!

  • @pierrebessette2018
    @pierrebessette2018 1 year ago +1

    Excellent! Thanks a lot.

  • @vicarioustube
    @vicarioustube 1 year ago +1

    Thank you! VERY informative and exciting.

  • @nasserahmad4653
    @nasserahmad4653 1 year ago +1

    Great tutorial as always

  • @jbltube2881
    @jbltube2881 1 year ago +2

    amazing

  • @SebAnt
    @SebAnt 1 year ago +2

    Wow!!

  • @axelesch9271
    @axelesch9271 1 year ago +1

    I think it's more of an image-to-depth-map AI, but it looks awesome!

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      It is, but the depth map AI generates a 3D model, so it's pretty much there!

  • @u9vata
    @u9vata 1 year ago +1

    Also, traditional (or not so traditional, but in this sense non-AI) workflows will get AI in them soon. I sometimes do contracting for spacedesigner3D and wanted to add a feature for AI-enhanced renderings, only to find I was late and it was already being worked on by another contractor, haha. So 1-2 versions down the line you will probably see it.
    This means the products you use will have AI features even if you keep your product-based workflow - e.g. you can create a traditional floor plan as the "basic idea of what is going on", then instantly create variations if you want - which is very useful when you are right in a conversation with a customer, for example.
    PS: The other contractor who added that feature is also Hungarian like me, so it feels nice that the feature still comes from around here even if not from me personally ;-)

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      It will inevitably be integrated into mainstream platforms and programs, so it's important to at least be aware that others will be using these tools.

  • @yearight1205
    @yearight1205 1 year ago +1

    I make cinematics in Unreal Engine. My latest interest is to take an image of an environment from Midjourney, create a depth map of it, and extrude it a tad more inside Blender before exporting it over to Unreal Engine, turning specular to zero to prevent blotchy lighting on the image, and then film my scene. Sadly, to date I have yet to pull it off with an environment, but I have had major luck with smaller objects (think statues, walls, bridges, etc.). I am about to run through your videos in the hope of finding something I can use for this. I am confident that sometime in the next year or two someone will create what I seek.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +2

      Hey - That sounds amazing. This might be the only video I have on the subject of image to 3D workflows, but I do agree that in the very near future there will be apps/online platforms available to convert images to usable 3D geometry.

    • @yearight1205
      @yearight1205 1 year ago

      @StephenCoorlas I think I actually have something to test out. I don't know how familiar you are with Unreal Engine, but you basically create the depth map with LeiaPix and put it all together in Blender. Export it to Unreal Engine, and then, using the Lattice deform tool in the modeling section of Unreal Engine, you extrude the ground outwards and upwards. I lost zero quality in my first test with this, and it looks 100% correct.
      I'd imagine you'd have to ensure that whatever scene you work with has a decent width to it, and perhaps extrude it in an arc shape so the camera pan looks correct while filming, but I thought I'd share with you since I had been trying to solve this problem for a few days and I believe this might be the solution.
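
    A minimal sketch of the Blender depth-displacement step described in this thread, using Blender's Python API (bpy). The depth-map filename, grid resolution, and displacement strength are all placeholder assumptions; the LeiaPix and Unreal Engine steps are out of scope here.

        # Hedged sketch: displace a subdivided plane by a grayscale depth map.
        # "depth.png", the grid resolution, and the strength are placeholders.
        import bpy

        # A dense grid so the displacement has enough vertices to work with
        bpy.ops.mesh.primitive_grid_add(x_subdivisions=256, y_subdivisions=256, size=2.0)
        plane = bpy.context.active_object

        # Load the depth map and wrap it in an image texture
        img = bpy.data.images.load("//depth.png")  # path relative to the .blend file
        tex = bpy.data.textures.new("DepthTex", type='IMAGE')
        tex.image = img

        # Displace the grid along its normals by the depth values
        mod = plane.modifiers.new("DepthDisplace", type='DISPLACE')
        mod.texture = tex
        mod.strength = 0.5  # exaggerate to "extrude it a tad more", as described above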

  • @Ben-rz9cf
    @Ben-rz9cf 1 year ago +3

    Image-to-3D is one of the few AI innovations I am actually still excited for. "Generative" AI is not actually generative or intelligent and amounts to little more than interpolated copyright infringement. But being able to infer a sense of depth from static images? Infinitely more useful for actual artists.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      Some may argue that the generative aspect of image generators lies in the iterations, especially when combined with processes such as ControlNet.
      I do agree, being able to extract depth and 3-dimensional geometry will make these explorations more useful 🤘

  • @vinzenternser2036
    @vinzenternser2036 1 year ago +2

    Thank you for the knowledge, Stephen. Do you think Generative Adversarial Network (GAN) driven 3D model creation from 2D Midjourney image input will be used in the future? I think this could be revolutionary in architecture.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      Hey - Yes, I do believe this is a very early adaptation of this technology. The current models are a bit messy, with many polygons, but it's getting easier to imagine GANs within the actual 3D model creation process that can clean up, simplify, and interpolate usable geometry for architects. There are many avenues this technology can take, and it's exciting to discuss the possibilities.

  • @BadConceptArtist
    @BadConceptArtist 1 year ago +1

    That's amazing. It's super effective with flat boxes and simple stuff, so I can get rid of all the boring images-as-planes cutouts for background elements. It can really help with small kitbashing. Do you know any other software or tools that can do the same with a video? Like photogrammetry... but not photogrammetry... (?)

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      Glad you found this useful - I'm not aware of anything that uses video to generate 3D geometry, although that's a fascinating thought. You might want to look into NeRF technology, as it seems the most promising for what you're describing.

  • @CharmiGajjar-ep8lb
    @CharmiGajjar-ep8lb 1 year ago +2

    What software did you use to open the 3D model from ZoeDepth?

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      That's just 3D Builder, which comes standard with Windows 10 or higher. The 3D files are .glb, which should open in Rhino, Blender, or SketchUp with an extension.
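
    For Blender specifically, a .glb opens through the built-in glTF importer; a minimal sketch (the filename is a placeholder):

        # Import a ZoeDepth .glb via Blender's built-in glTF importer.
        import bpy

        bpy.ops.import_scene.gltf(filepath="zoedepth_model.glb")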

  • @riiprafa
    @riiprafa 4 days ago +1

    Hey Stephen, just sharing some thoughts: the outer side generated from the 360° image got me thinking of the possibility of generating an "inverse" panoramic, which would rotate around an object, showing every side of it to the 3D generator. I tried it with an image of a face, but the results were not as good as expected; still, I think it's a matter of changing some parameters in the AI algorithm.

    • @StephenCoorlas
      @StephenCoorlas 3 days ago

      That's an interesting thought and approach. Anything that rebuilds imagery from multiple perspectives will help in constructing a usable mesh for further development. These tools are advancing so quickly that even some of these thought processes will seem antiquated by the time new AI models are developed.

  • @keshams3665
    @keshams3665 1 year ago +6

    I wonder if we can upload a 3D mesh or any other 3D CAD file to an AI program to create photorealistic renders instantly. It would save a lot of time!

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +2

      That would be great. There are some programs that are getting close; have you tried Veras?

    • @keshams3665
      @keshams3665 1 year ago +2

      @StephenCoorlas Really cool. Thank you!

    • @shyleshkumar1449
      @shyleshkumar1449 1 year ago

      Scenemaker ai

  • @antoniovoto
    @antoniovoto 1 year ago +1

    Thanks for the video. I have a question: I can't find the download icon for Symmetry-Driven 3D Reconstruction from Concept Sketches. Thanks.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      The link to their research is in the video description. Once you're there, scroll down and you should be able to download their research paper and video.

    • @antoniovoto
      @antoniovoto 1 year ago +1

      @StephenCoorlas thanks!

    • @antoniovoto
      @antoniovoto 1 year ago

      @StephenCoorlas Sorry, again: I saw the video, but I can't find the link to download the program. Thanks...

  • @osmanucan2625
    @osmanucan2625 1 year ago +1

    Hi! I'm doing a project on nearly the exact topic you talk about at the end. I feel like I have to stay up to date with these experimental processes, but in the chase to be ahead of the curve I might fall short on the more fundamental aspects. I wonder what you think about this.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      There are different types of pursuits in life. Some people always chase the "newest" thing, while others pursue a craft, hobby, or interest using a tool or technology that suits their needs. The hype can get very exciting and enticing, but if you're always chasing it, there is little time left to settle and pursue anything deeper. My advice is to think about what really interests you, focus on that topic, and incrementally pursue technologies or philosophies that assist you in developing your own theories while offering new perspectives to keep things fresh 🤘

    • @osmanucan2625
      @osmanucan2625 1 year ago +1

      @StephenCoorlas Wow, I really appreciate the response. Looking forward to any updates on the topic!

  • @conceptosurbanos4619
    @conceptosurbanos4619 11 months ago +1

    Does Midjourney work from different workflows, such as quick massing models, sketches, floor plans, site plans, and prompts all at once?

    • @StephenCoorlas
      @StephenCoorlas 11 months ago

      It can, but there are other programs coming out specifically designed for architecture that can produce better massing models / floor plans. Midjourney is not very site specific yet.

    • @conceptosurbanos4619
      @conceptosurbanos4619 11 months ago

      It looks like when you give Midjourney a prompt plus some sketches and site plan info, the program uses a mind of its own, and you get a beautiful render but with no relevance to the information sent. How can you minimize that?

  • @jbiziou
    @jbiziou 1 year ago +1

    Very cool!! This is a very promising step :) Question for you: I'm using Blender, and when I import there are no materials or textures. Any thoughts? Thanks :)

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      Thanks! I've received that comment several times. I can't say I've experimented with bringing the models into Blender, so I'm not sure why that's occurring. I think it has something to do with how Blender reads the material mapping.

    • @jbiziou
      @jbiziou 1 year ago +1

      @StephenCoorlas Thanks for the reply :) It's definitely very interesting, and it will be cool to see how it develops :)

  • @21stcenturyscotsadvertisin24
    @21stcenturyscotsadvertisin24 1 year ago +1

    I very much hope the meshes are all clean quads.

  • @Ausstein
    @Ausstein 1 year ago +1

    Hi,
    the thygate/stable-diffusion-webui-depthmap-script can inpaint the meshes already :)

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      Oh wow - I'll need to check that out! Thanks for the heads up!

    • @Ausstein
      @Ausstein 1 year ago +1

      @@StephenCoorlas No worries, the development speed of these tools is insane right now and stuff like this is easy to miss

  • @thebrainsinc5410
    @thebrainsinc5410 1 year ago

    On the pano/360 to 3D: a basic projection of the image onto a sphere/box produces better results, right? Can you explain why this approach is a game changer compared to that for pano to 3D? Also, the quality is better with that technique.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      The sphere projection doesn't contain actual 3D mesh geometry. This allows you to actually pan through the 3D model, not just stand in one place and rotate.

    • @thebrainsinc5410
      @thebrainsinc5410 Год назад

      @StephenCoorlas Not completely true: if you, for example, create a sphere or box, project a pano image onto it, and make it, let's say, 50 m in radius, you could walk freely inside it using ARKit. Adding depth with ARKit or any other rendering engine is quite trivial in this case.
      Also, how would you perceive depth in the case above?
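
    For what it's worth, the difference can be sketched in a few lines of Python (file names and the depth format are assumptions): with a fixed radius every pixel lands on the same sphere, while per-pixel depth gives each pixel its own radius - actual geometry you can pan through.

        # Sketch: lift an equirectangular panorama to 3D points with a per-pixel
        # depth map. A constant r reproduces the sphere projection; r = depth
        # produces real geometry. File names are placeholders.
        import numpy as np
        from PIL import Image

        color = np.asarray(Image.open("pano.png").convert("RGB"))
        depth = np.asarray(Image.open("pano_depth.png").convert("F"))  # float depth

        h, w = depth.shape
        lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi   # longitude, -pi..pi
        lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi   # latitude, +pi/2..-pi/2
        lon, lat = np.meshgrid(lon, lat)

        r = depth                         # sphere projection would be: r = 50.0
        x = r * np.cos(lat) * np.cos(lon)
        y = r * np.cos(lat) * np.sin(lon)
        z = r * np.sin(lat)
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # one point per pixel
        colors = color.reshape(-1, 3)                         # matching RGB values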

  • @WallyMahar
    @WallyMahar 1 year ago +1

    Does anyone know how I can open up that 16-bit raw depth (multiplier: 256) image so I can use it in Blender?

    • @WallyMahar
      @WallyMahar 1 year ago +2

      Never mind, y'all: the image appears black, but the values are just really spread out. If you condense the levels down in Photoshop, you can see the image.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      I've had a few people inquire about the missing image in Blender, but I'm not certain what the issue is or why it doesn't show on the imported models.
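
    A small sketch of the decode step @WallyMahar describes above: the 16-bit PNG looks black because the values span the full uint16 range, so dividing by the stated multiplier of 256 recovers the depth, and rescaling to 8 bits makes it visible (the scripted equivalent of condensing levels in Photoshop). File names are placeholders.

        # Decode a "16-bit raw, multiplier: 256" depth PNG and rescale for viewing.
        import numpy as np
        from PIL import Image

        raw = np.asarray(Image.open("depth_raw.png"), dtype=np.float32)  # uint16 data
        depth = raw / 256.0                    # undo the stated multiplier of 256

        # Normalize to 0-255 so the image is visible instead of near-black
        lo, hi = float(depth.min()), float(depth.max())
        view = ((depth - lo) / max(hi - lo, 1e-8) * 255.0).astype(np.uint8)
        Image.fromarray(view).save("depth_view.png")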

  • @conceptosurbanos4619
    @conceptosurbanos4619 11 months ago

    Once you get your render done by Midjourney, what is the best way to go about the construction drawings and details, and the best way for the construction itself, especially for parametric or free-form designs, so that the cost of the formwork doesn't become an impediment? A 3D printer can work much better on walls, but what about curved ceilings - cloth formwork, or what?

    • @StephenCoorlas
      @StephenCoorlas 11 months ago

      That's a large gap that hasn't necessarily been solved yet. You would need to break those processes down into several steps, which likely involve much human intervention, to make the design hold true to the AI-generated image through construction.

  • @welsonfy5246
    @welsonfy5246 1 year ago +1

    Hi, good job. Is it possible to export to FBX with textures?

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      Great question. I have not tried yet, but I believe I have seen others attempting that. I'll need to take a deeper look.

    • @welsonfy5246
      @welsonfy5246 1 year ago

      @StephenCoorlas Or else, is it possible to convert a .glb to FBX with textures?
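
    One possible route, as a hedged sketch: Blender can act as the converter, importing the .glb and exporting FBX with the textures copied or embedded (file names are placeholders; whether a given target application reads the embedded textures is another matter).

        # Convert .glb to .fbx with textures using Blender's bundled Python.
        # Run inside Blender's scripting tab or via `blender --python convert.py`.
        import bpy

        # Start from an empty scene so only the imported model gets exported
        bpy.ops.wm.read_factory_settings(use_empty=True)

        bpy.ops.import_scene.gltf(filepath="model.glb")
        bpy.ops.export_scene.fbx(
            filepath="model.fbx",
            path_mode='COPY',      # copy texture images alongside the FBX
            embed_textures=True,   # and pack them into the .fbx itself
        )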

  • @user-cw4lu8ww7o
    @user-cw4lu8ww7o 1 year ago +2

    I'd like to ask why the dlb model exported from ZoeDepth is missing its textures when imported into Blender, and whether there is any solution? My sincere thanks if you can answer my question.

    • @user-cw4lu8ww7o
      @user-cw4lu8ww7o 1 year ago +1

      Sorry, I meant the exported glb model.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago

      Sorry, I don't know why the textures aren't showing in Blender.

  • @onurakgul5083
    @onurakgul5083 1 year ago +1

    Could you share a video about how we can convert these prototypes into a completely final 3D model?

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +1

      Yes, this is an important topic for advancing the usefulness of these tools. I'll look into covering it soon.

    • @onurakgul5083
      @onurakgul5083 1 year ago

      @StephenCoorlas Thank you, bro, this video will be very useful for me.

  • @Braden-York
    @Braden-York 4 months ago +1

    What if AI could turn 2D aerial imagery into a 3D environment?

    • @StephenCoorlas
      @StephenCoorlas 4 months ago

      That's the idea - people are programming AI to be better at interpolating things like this. It's all in the training.

  • @avy0010
    @avy0010 1 year ago +1

    Why did you make the ending sad?

  • @sirrodneyffing1
    @sirrodneyffing1 1 year ago +1

    Thanks for the video. AI looks like it will, very quickly now, wipe out illustration, graphic design, and maybe photography and God knows what else as viable professions. The 800-pound gorilla in the corner in architecture is going to be the same issue: AI will soon be spitting out not just wacky, slick Zaha Hadid concept images in seconds, but complete, ready-to-build BIM models; so where does an architect fit into that? It's going to be a wild ride over the next few years, that's for sure.

    • @StephenCoorlas
      @StephenCoorlas 1 year ago +2

      Yes - these are great thoughts, and it's up to us to find our place, or rather to control how AI is implemented into our software, workflows, and processes. At least during the beginning of this transition, AI will always need to be monitored by a human, so we need to remain educated and experienced to ensure the tools are developing content to our expectations.