the BEST depth map generator

  • Published: 21 Jan 2023
  • *please excuse the production quality I was trying out a lot of new tings*
    In this video, depth maps battle for the hologram crown on the Looking Glass Portrait device. First, I talk about depth perception as experienced in the physical world, covering monocular and binocular depth cues such as texture gradients, saturation, relative size, and occlusion, as well as vergence and parallax. After this, I talk about Savta, MiDaS, DPT, RunwayML, the Looking Glass 2D-to-3D converter, and Stable Diffusion, and show how each depth map affects the image or hologram quality as seen on the Portrait holographic display (a minimal code sketch of single-image depth estimation follows below the links). Last, I show an example of a Stable Diffusion-generated depth map trained as a style and explore what the creative process is like when starting with a depth map and then generating a color map or RGBD image from that initial depth pass.
    Looking Glass Discord: / discord
    Thank you to the fantastic musician and artist, Purz, for the tunes
    / @purzbeats
    To learn with me and check out other explorations
    linktr.ee/elliemacqueen
    Get $40 off a Looking Glass Portrait here: look.glass/ellie
    Interested in signing up for Blocks waitlist? blocks.glass/
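
    A minimal code sketch of what single-image depth estimation looks like under the hood, using MiDaS via torch.hub (one of the generators compared in the video); the input file name is a placeholder:

    ```python
    import cv2
    import numpy as np
    import torch

    # Load MiDaS (DPT-Large variant) and its matching input transform
    midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

    img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)

    with torch.no_grad():
        pred = midas(transform(img))
        # Upsample the low-resolution prediction back to the input size
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()

    # MiDaS predicts inverse relative depth, so larger = closer;
    # normalize to 8-bit grayscale (brighter = closer) and save
    d = pred.cpu().numpy()
    d = (255 * (d - d.min()) / (d.max() - d.min())).astype(np.uint8)
    cv2.imwrite("depth.png", d)
    ```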

Comments • 37

  • @amarnamarpan
    @amarnamarpan 1 year ago +4

    The information you provided is quite rare, and thank you for doing this comparison. Just a side note: it would have been much better if I could hear you properly. The audio is too choppy.

  • @KentaroxKondo
    @KentaroxKondo 1 year ago +1

    Hi, I'm a Japanese guy looking for a way to create a 3D depth map from a single photo to produce a stereogram image. This video is very helpful. Thank you for sharing it!

    • @DangitDigital
      @DangitDigital  1 year ago

      thanks for watching

    • @TheKatt08
      @TheKatt08 4 months ago

      Have you heard of StereoPhoto Maker? It's software (Japanese, actually) you would normally use to align a stereo pair and create a depth map from it. But I believe it can also be used the other way around: if you have an image and its depth map, you can create the other half of the stereo pair!
      You can also create some pretty cool wiggle animations with it, with the perspective shifting around slightly in all directions.
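
      For the curious, the core of that image-plus-depth trick is a horizontal warp: shift each pixel by a disparity proportional to its depth. A rough numpy sketch of just the warp, assuming brighter = closer (StereoPhoto Maker's real algorithm handles occlusions and hole filling far better):

      ```python
      import cv2
      import numpy as np

      img = cv2.imread("color.png")                          # H x W x 3 color image
      depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # H x W, brighter = closer

      max_disp = 16  # pixel shift for the nearest point; tune to taste
      disp = depth.astype(np.float32) / 255.0 * max_disp

      h, w = depth.shape
      right = np.zeros_like(img)
      xs = np.arange(w)
      for y in range(h):
          # Shift each source pixel left by its disparity to form the right-eye view
          new_x = np.clip(np.round(xs - disp[y]).astype(int), 0, w - 1)
          right[y, new_x] = img[y, xs]

      # Naive hole filling: copy the nearest filled pixel from the left
      mask = right.sum(axis=2) == 0
      for y in range(h):
          for x in range(1, w):
              if mask[y, x]:
                  right[y, x] = right[y, x - 1]

      cv2.imwrite("right_eye.png", right)
      ```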

  • @dustinsuburbia
    @dustinsuburbia 1 year ago

    This was great! Ty

    • @DangitDigital
      @DangitDigital  1 year ago

      glad you enjoyed it / it was helpful : )

  • @bhoototiger3366
    @bhoototiger3366 11 months ago

    thanks, very cool information

  • @TaXeJota
    @TaXeJota 1 year ago

    hi!! thx for the video! i have a question: what's that box where you show the results of the depth map?

    • @DangitDigital
      @DangitDigital  1 year ago

      it's a Looking Glass display called the Portrait, from Looking Glass Factory! there's an affiliate link in the vid bio : )

  • @Vaporwave2099
    @Vaporwave2099 1 year ago +1

    does it exist for video? I want to make a glass chain that moves and distorts what's behind it in After Effects, but I don't know what tools, effects, or programs to do it with. thanks

    • @DangitDigital
      @DangitDigital  1 year ago

      RunwayML works for video! And you can generate depth maps in After Effects as well: ruclips.net/video/Pq-QFJChhhs/видео.html
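
      For a DIY route, the same single-image models can also be run per frame with OpenCV; a rough sketch using the small MiDaS model (expect some flicker, since each frame is normalized independently; the file names are placeholders):

      ```python
      import cv2
      import numpy as np
      import torch

      midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
      transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

      cap = cv2.VideoCapture("input.mp4")
      fps = cap.get(cv2.CAP_PROP_FPS)
      w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
      h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
      out = cv2.VideoWriter("depth.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                            fps, (w, h), isColor=False)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
          with torch.no_grad():
              pred = midas(transform(rgb))
              pred = torch.nn.functional.interpolate(
                  pred.unsqueeze(1), size=(h, w),
                  mode="bicubic", align_corners=False,
              ).squeeze()
          d = pred.cpu().numpy()
          # Per-frame normalization is the main source of flicker
          d = (255 * (d - d.min()) / (d.max() - d.min())).astype(np.uint8)
          out.write(d)

      cap.release()
      out.release()
      ```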

  • @KevBinge
    @KevBinge 1 year ago

    Craaaaazy edit 😁😁

    • @DangitDigital
      @DangitDigital  1 year ago +1

      yeahhh trying to mix it up :D

    • @KevBinge
      @KevBinge 1 year ago

      @@DangitDigital Awww yeah!

  • @DECODEDVFX
    @DECODEDVFX 1 year ago

    This is cool.

    • @DangitDigital
      @DangitDigital  1 year ago

      .. from the coolest blenderhead around : P

  • @utubesgreat4me
    @utubesgreat4me 3 months ago

    Great. Thanks :)

  • @Vaporwave2099
    @Vaporwave2099 1 year ago

    please tell me how to make the depth map you show with the 3D object at the end of the video. I've been trying to figure it out for weeks

    • @DangitDigital
      @DangitDigital  1 year ago

      hey! if you're wanting to generate a "depth map" with stable diffusion like the example at the end of this video, you can use this training colab, where you'll train the model to output a "stylized" version of the input image based on four input depth maps. As shown in the video, you can essentially use that output as a depth map: colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb

    • @DangitDigital
      @DangitDigital  1 year ago

      you can also use this script, which may be more straightforward: github.com/thygate/stable-diffusion-webui-depthmap-script
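
      For the depth-first direction shown at the end of the video (starting from a depth map and generating the color image from it), here's a minimal sketch using diffusers' depth-conditioned Stable Diffusion pipeline; note this is not the exact colab workflow above, and the prompt and file names are placeholders:

      ```python
      import numpy as np
      import torch
      from PIL import Image
      from diffusers import StableDiffusionDepth2ImgPipeline

      pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
      ).to("cuda")

      init = Image.open("init.png")  # rough color/composition guide
      # Hand-made or generated depth pass; the pipeline treats larger = closer
      depth = np.array(Image.open("depth.png").convert("L")).astype(np.float32)
      depth_map = torch.from_numpy(depth).unsqueeze(0)  # shape (1, H, W)

      result = pipe(
          prompt="neon hologram, intricate, detailed",
          image=init,
          depth_map=depth_map,
          strength=0.8,
      ).images[0]
      result.save("color_from_depth.png")
      ```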

  • @warrencapes8286
    @warrencapes8286 7 months ago

    nothing but an advert for looking glass

  • @TheKatt08
    @TheKatt08 4 months ago

    Oh, I just saw that you haven't made any videos since this one. What a shame, come back!
    (Also, my comment mysteriously disappeared. Hopefully it's just that you have to approve them manually, and not my bad internet that threw the comment into the aether)

  • @TomLeclercq
    @TomLeclercq 1 year ago +4

    It's a great video! I love the explanations and the edit.
    If I may provide some constructive feedback:
    I am so sad for you; the output sound is so hard to listen to, even though you seem to have the equipment :o
    It would be worth a v2 :p Also, for the next video, it would help if you could speak a bit slower for non-native English speakers (I thought I was on playback speed x1.5 at the beginning ˆˆ)
    Thanks for sharing this \o/
    Subscribed and Liked

    • @DangitDigital
      @DangitDigital  1 year ago

      thanks for the feedback Tom! I wholeheartedly agree re the sound. Good to know that the speed is too fast. Appreciate the sub

    • @markusmeeder
      @markusmeeder 7 months ago

      I think it just looks like a microphone, but in reality it's a potato recording the sound. ;)

  • @rangnathmahto5688
    @rangnathmahto5688 2 months ago

    Are you even using the mic you have?

  • @prabhavxz
    @prabhavxz 5 months ago

    Does this work for videos?

  • @arteyespinacas3328
    @arteyespinacas3328 1 year ago

  • @achuck8245
    @achuck8245 5 months ago

    Imagine making a video about a specific process and plastering your own face across the screen 90% of the time...

  • @richardlizier5270
    @richardlizier5270 4 months ago

    *please excuse the production quality I was trying out a lot of new tings*
    There is no excuse for such a production.

  • @claudianreyn4529
    @claudianreyn4529 1 year ago +7

    How did you manage to produce the worst image & sound quality on all of YouTube?

  • @Keepingthefaith72
    @Keepingthefaith72 5 months ago

    The audio is terrible, can't understand anything...

  • @nathan24277
    @nathan24277 10 months ago +2

    I had to stop watching, the audio was so bad. Sorry