the BEST depth map generator
- Published: 21 Jan 2023
- *please excuse the production quality I was trying out a lot of new things*
In this video, depth maps battle for the hologram crown on the Looking Glass Portrait device. First, I talk about depth perception as experienced in the physical world, covering monocular and binocular depth cues such as texture gradients, saturation, relative size, occlusion, as well as vergence and parallax. After this, I talk about Savta, MiDaS, DPT, RunwayML, the Looking Glass 2D to 3D converter, and Stable Diffusion. I show you how each depth map affects the image or hologram quality as seen on the Portrait holographic display. Last, I show an example of a Stable Diffusion generated depth map trained as a style, and explore what the creative process is like when starting with a depth map and then generating a color map or RGBD image based on that initial depth pass.
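As a rough sketch of the RGBD idea mentioned above: a color map and its depth pass are often packed into a single side-by-side image, with color on one half and the depth map (as grayscale) on the other. The color-left/depth-right layout here is an assumption for illustration; check what your display or converter actually expects.

```python
def pack_rgbd(color, depth):
    """Pack a color image and its depth map into one side-by-side RGBD frame:
    color pixels on the left half, depth (scaled to 0-255 gray) on the right.
    The color-left/depth-right layout is an assumption, not a spec.
    color: rows of (r, g, b) tuples; depth: rows of floats in [0, 1]."""
    out = []
    for crow, drow in zip(color, depth):
        # clamp depth to [0, 1], scale to 8-bit, and replicate to 3 channels
        gray = [(int(min(max(d, 0.0), 1.0) * 255),) * 3 for d in drow]
        out.append(list(crow) + gray)  # width doubles: color | depth
    return out
```

Feeding such a frame to an RGBD-aware viewer lets it reconstruct parallax from the depth half.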
Looking Glass Discord: / discord
Thank you to the fantastic musician and artist, Purz, for the tunes
/ @purzbeats
To learn with me and check out other explorations
linktr.ee/elliemacqueen
Get $40 off a Looking Glass Portrait here: look.glass/ellie
Interested in signing up for Blocks waitlist? blocks.glass/
The information you provided was very hard to find elsewhere, and thank you for doing this comparison. Just a side note: it would have been much better if I could hear you properly. The audio is too choppy.
Hi, I'm a Japanese guy looking for a way to create 3d depth map from a single photo to produce a stereogram image. This video is very helpful. Thank you for sharing it!
thanks for watching
Have you heard of StereoPhoto Maker? It's software (Japanese, actually) you would normally use to align a stereo pair and create a depth map from it. But I believe it can also be used the other way around: if you have an image and its depth map, you can create the respective other view of the stereo pair!
You can also create some pretty cool wiggle animations with it, with the perspective shifting around slightly in all directions.
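The trick described in the comment above can be sketched in plain Python: shift each pixel horizontally in proportion to its depth to synthesize the other eye's view. This is a minimal, naive depth-image-based-rendering sketch; real tools like StereoPhoto Maker handle occlusion and hole filling far more carefully.

```python
def synthesize_view(image, depth, max_shift=8):
    """Naive depth-image-based rendering: shift each pixel horizontally in
    proportion to its depth to fake the other eye of a stereo pair.
    image: rows of pixel values; depth: rows of floats in [0, 1] (1 = near).
    Nearer pixels win collisions (simple z-buffer); holes left behind are
    smear-filled from the left neighbour."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    zbuf = [[-1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx = x + round(depth[y][x] * max_shift)  # near pixels move more
            if 0 <= nx < w and depth[y][x] >= zbuf[y][nx]:
                out[y][nx] = image[y][x]
                zbuf[y][nx] = depth[y][x]
        for x in range(w):  # smear-fill any holes from the left
            if out[y][x] is None:
                out[y][x] = out[y][x - 1] if x > 0 else image[y][x]
    return out
```

Rendering one view with a positive shift and one with a negative shift (or several small shifts) gives you the wiggle-animation effect mentioned above.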
This was great! Ty
glad you enjoyed it / it was helpful : )
thanks very cool information
hi!! thx for the video! i have a question. what's that box where you show the results of the depth map?
its a looking glass display called a portrait from looking glass factory! there's an affiliate link in the vid bio : )
does it exist for video? because I want to make a glass chain that moves and distorts what is behind it, in after effects but I don't know what tools, effects or programs to do it with, thanks
RunwayML works for video! And you can generate depth maps in after effects as well: ruclips.net/video/Pq-QFJChhhs/видео.html
Craaaaazy edit 😁😁
yeahhh trying to mix it up :D
@@DangitDigital Awww yeah!
This is cool.
.. from the coolest blenderhead around : P
Great. Thanks :)
please tell me how to do the depth map you do with the 3d object at the end of the video I've been trying to figure it out for weeks
hey! if you want to generate a "depth map" with Stable Diffusion like in the example at the end of this video, you can use this training Colab notebook, where you train the model to output a "stylized" version of the input image based on four input depth maps. As shown in the video, you can essentially use the result as a depth map: colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb
you can also use this script, which may be more straightforward: github.com/thygate/stable-diffusion-webui-depthmap-script
nothing but an advert for looking glass
I wasn't paid to make this.
Oh, I just noticed that you haven't made any videos since this one. What a shame, come back!
(Also, my comment mysteriously disappeared. Hopefully it's just that you have to approve them manually, and not my bad internet that threw the comment into the aether)
It's a great video! I love the explanations and the edit
If I may provide some constructive feedback:
It's a shame about the sound; the output audio is so hard to listen to, even though you seem to have the equipment :o
It would be worth a v2 :p Also, for the next video, it would help non-native English speakers if you spoke a bit more slowly (at first I thought my playback speed was set to 1.5x ˆˆ)
Thanks for sharing this \o/
Subscribed and Liked
thanks for the feedback Tom! I wholeheartedly agree re the sound. Good to know that the speed is too fast. Appreciate the sub
I think this just looks like a microphone, but in reality it is a potato recording the sound. ;)
Are you even using the mic you have?
Does this work for videos?
Imagine making a video about a specific process and plastering your own face across the screen 90% of the time...
*please excuse the production quality I was trying out a lot of new things*
There is no excuse for such a production.
How did you manage to produce the worst image & sound quality on the entire RUclips?
The audio is terrible, can't understand anything...
I had to stop watching, the audio was so bad. Sorry