World of Depth
  • Videos: 26
  • 11,927 views
Artificial Intelligence For the Stereographer
Originally presented at the NSA Virtual 3D-Con 2021 (3d-con.com), 8/15/21
The collected links: WorldOfDepth.com/tutorials/AIfor3D.html
Many modern developments in AI deal with the 3rd dimension, from self-driving cars that perceive and navigate their environments in 3D, to programs that can build 3D models of scenes simply by analyzing 2D videos of them. Though some of these AIs are only accessible to researchers and tech companies, some AI applications like upscaling images or colorizing black and white photos are immediately useful and available to all. This workshop will show you how to use AIs of special interest to stereographers, via the free tool Google Colab, to do things like crea...
9,090 views

Videos

Bee-lympics (Anaglyph + 2D Version)
46 views · 3 years ago
(See below for other 3D formats) Presenting 3 contestants competing in ‘exiting from a downward-facing flower,’ featuring slow-motion footage of bumblebees in 3D! How do you score them-🔟🔟🔟? Cross-eye stereopair version: ruclips.net/video/3WCjNXqeBl0/видео.html Parallel view stereopair version: ruclips.net/video/dvHXaebKFOA/видео.html 3D TV version: ruclips.net/video/k1xog3YTTR0/видео.html All f...
Bee-lympics (Parallel Version)
32 views · 3 years ago
(See below for other 3D formats) Presenting 3 contestants competing in ‘exiting from a downward-facing flower,’ featuring slow-motion footage of bumblebees in 3D! How do you score them-🔟🔟🔟? Cross-eye stereopair version: ruclips.net/video/3WCjNXqeBl0/видео.html Anaglyph and 2D version: ruclips.net/video/G_IgLwpl4ks/видео.html 3D TV version: ruclips.net/video/k1xog3YTTR0/видео.html All footage ta...
Bee-lympics (Cross-Eye Version)
152 views · 3 years ago
(See below for other 3D formats) Presenting 3 contestants competing in ‘exiting from a downward-facing flower,’ featuring slow-motion footage of bumblebees in 3D! How do you score them-🔟🔟🔟? Parallel view stereopair version: ruclips.net/video/dvHXaebKFOA/видео.html Anaglyph and 2D version: ruclips.net/video/G_IgLwpl4ks/видео.html 3D TV version: ruclips.net/video/k1xog3YTTR0/видео.html All footag...
Bee-lympics (3D TV Version)
18 views · 3 years ago
(See below for other 3D formats) Presenting 3 contestants competing in ‘exiting from a downward-facing flower,’ featuring slow-motion footage of bumblebees in 3D! How do you score them-🔟🔟🔟? Cross-eye stereopair version: ruclips.net/video/3WCjNXqeBl0/видео.html Parallel view stereopair version: ruclips.net/video/dvHXaebKFOA/видео.html Anaglyph and 2D version: ruclips.net/video/G_IgLwpl4ks/видео....
3D Slices & Shapes
134 views · 3 years ago
Learn how to sp(l)ice up your stereos with some artful 3D shapes in this headache-free tutorial! All you need is a stereophoto and a graphics program with layers, such as the free online Photopea. This video uses the universal SBS (side-by-side) stereopair format. To see anaglyph and other versions of the example stereos, see worldofdepth.com/daily/210606.html. Originally shown at the June 6, 2...
Jazz Trio in 3D-“Easter Parade” (Anaglyph Version)
68 views · 3 years ago
3D TV version: ruclips.net/video/x1dPclOXQ6Q/видео.html Side-by-side version: ruclips.net/video/OGxcmixYJqU/видео.html 2D version: ruclips.net/video/Cj0seev_Myk/видео.html Happy Easter! The Grand St. Stompers trio of Gordon Au (trumpet), Matt Koza (clarinet), and Nick Russo (guitar) perform “Easter Parade,” in 3D. I captured this with a stereo twin rig using 2 X Sony RX100 mark II, with audio c...
Jazz Trio in 3D-“Easter Parade” (3D TV Version)
116 views · 3 years ago
Side-by-side version: ruclips.net/video/OGxcmixYJqU/видео.html Anaglyph version: ruclips.net/video/s_bGDw1MC4A/видео.html 2D version: ruclips.net/video/Cj0seev_Myk/видео.html Happy Easter! The Grand St. Stompers trio of Gordon Au (trumpet), Matt Koza (clarinet), and Nick Russo (guitar) perform “Easter Parade,” in 3D. I captured this with a stereo twin rig using 2 X Sony RX100 mark II, with audi...
Jazz Trio in 3D-“Easter Parade” (SBS Version)
51 views · 3 years ago
3D TV version: ruclips.net/video/x1dPclOXQ6Q/видео.html Anaglyph version: ruclips.net/video/s_bGDw1MC4A/видео.html 2D version: ruclips.net/video/Cj0seev_Myk/видео.html Happy Easter! The Grand St. Stompers trio of Gordon Au (trumpet), Matt Koza (clarinet), and Nick Russo (guitar) perform “Easter Parade,” in 3D. I captured this with a stereo twin rig using 2 X Sony RX100 mark II, with audio captu...
I Broke Apex Construct! Part 2 (Anaglyph Version)
21 views · 3 years ago
How to explore out-of-bounds in the VR game Apex Construct, captured here in 3D from the Oculus Quest 2, on the C.A.R.F. level. SBS cross version: ruclips.net/video/VFFvfl1pLMw/видео.html SBS parallel version: ruclips.net/video/ODdn-INXJhY/видео.html Anaglyph version: 3D TV version: ruclips.net/video/S2WmPYv9vgs/видео.html HOW I MADE THE 3D STILL IMAGES: • In-game ‘cha-cha’ sequential screensho...
I Broke Apex Construct! Part 2 (SBS Parallel Version)
23 views · 3 years ago
How to explore out-of-bounds in the VR game Apex Construct, captured here in 3D from the Oculus Quest 2, on the C.A.R.F. level. SBS cross version: ruclips.net/video/VFFvfl1pLMw/видео.html Anaglyph version: ruclips.net/video/bii0sAlrTFA/видео.html 3D TV version: ruclips.net/video/S2WmPYv9vgs/видео.html HOW I MADE THE 3D STILL IMAGES: • In-game ‘cha-cha’ sequential screenshots aligned with Stereo...
I Broke Apex Construct! Part 2 (SBS Cross Version)
17 views · 3 years ago
How to explore out-of-bounds in the VR game Apex Construct, captured here in 3D from the Oculus Quest 2, on the C.A.R.F. level. SBS parallel version: ruclips.net/video/ODdn-INXJhY/видео.html Anaglyph version: ruclips.net/video/bii0sAlrTFA/видео.html 3D TV version: ruclips.net/video/S2WmPYv9vgs/видео.html HOW I MADE THE 3D STILL IMAGES: • In-game ‘cha-cha’ sequential screenshots aligned with Ste...
I Broke Apex Construct! Part 2 (3D TV Version)
70 views · 3 years ago
How to explore out-of-bounds in the VR game Apex Construct, captured here in 3D from the Oculus Quest 2, on the C.A.R.F. level. SBS cross version: ruclips.net/video/VFFvfl1pLMw/видео.html SBS parallel version: ruclips.net/video/ODdn-INXJhY/видео.html Anaglyph version: ruclips.net/video/bii0sAlrTFA/видео.html HOW I MADE THE 3D STILL IMAGES: • In-game ‘cha-cha’ sequential screenshots aligned with...
I Broke Apex Construct! Part 1 (SBS Cross Version)
44 views · 3 years ago
(Side-by-side cross-eye stereopair version) How to explore out-of-bounds in the VR game Apex Construct, captured here in 3D from the Oculus Quest 2. And by the way, Apex Construct is great fun: amazing designs, pretty intuitive controls, an interesting story, plus with the added possibility of VR urbex like this, it’s a strong recommendation from me! SBS parallel version: ruclips.net/video/0x1M...
I Broke Apex Construct! Part 1 (SBS Parallel Version)
34 views · 3 years ago
(Side-by-side parallel stereopair version) How to explore out-of-bounds in the VR game Apex Construct, captured here in 3D from the Oculus Quest 2. And by the way, Apex Construct is great fun: amazing designs, pretty intuitive controls, an interesting story, plus with the added possibility of VR urbex like this, it’s a strong recommendation from me! SBS cross version: ruclips.net/video/tFdFGR-s...
I Broke Apex Construct! Part 1 (Anaglyph Version)
32 views · 3 years ago
I Broke Apex Construct! Part 1 (3D TV Version)
165 views · 3 years ago
Optimizing Anaglyphs: a Walk-Through
647 views · 3 years ago
How to Make 3D Sequential Ghost Stereos
136 views · 3 years ago
DIY 3D Catadioptric Stereos (3D TV Version)
123 views · 4 years ago
DIY 3D Catadioptric Stereos (Anaglyph Version)
64 views · 4 years ago
DIY 3D Catadioptric Stereos (Half-Width SBS Version)
94 views · 4 years ago
DIY 3D Catadioptric Stereos (Universal SBS Version)
339 views · 4 years ago
3D Tour of Margaritifer Terra, Mars (Cross-View SBS Version)
116 views · 4 years ago
3D Tour of Margaritifer Terra, Mars (Parallel SBS Version)
61 views · 4 years ago

Comments

  • @marioschadel3747 · 6 months ago

    Wow, this is such an excellent resource! I am a palaeontologist/biologist and will share this with colleagues and students, as it is so well explained and accessible, without the need for any paid software. One thing that could perhaps be improved: using Photopea (or GIMP) to adjust the stereo window would make the workflow less platform-specific and reduce the number of programs used.

    • @WorldofDepth · 5 months ago

      Thank you-glad it is useful! And yes: I use Preview here because opening the L & R images in 2 separate panels is very convenient, but you can replicate this in Photopea etc.

  • @larbibenmhidi3054 · 6 months ago

    Well done !

  • @dragonflyK110 · 8 months ago

    "Originally presented at the NSA Virtual 3D-Con 2021, 8/15/21" I'm not gonna lie, for a minute or so I was very confused about why the National Security Agency would have a 3D conference :) Anyway, thank you for this video; it was quite educational for somebody getting back into 3D tech after not paying much attention to it over the last decade or so. And despite the video's age, it seems to have held up quite well. Though if you know of any relevant AI models that have been released since this video, I'd love to know, as I'm currently trying to learn as much as I can about this topic. Thank you again for the time you put into this video.

    • @WorldofDepth · 6 months ago

      Thanks for the appreciation! In my tests, the best AI depth estimator is still MiDaS, but the new version 3.1, released after this video. There is a very new one called Marigold (huggingface.co/spaces/toshas/marigold), but in my first tests, it's not as good as MiDaS v3.1.

    • @dragonflyK110 · 6 months ago

      @@WorldofDepth Thank you for the response, I have done quite a bit of research since that comment so I have actually heard about Marigold. Have you tried out Depth Anything? It's even newer than Marigold and is in my testing much better than MiDaS. It has a HF space if you want to try it out.

  • @BrawlStars-jd7jh · 1 year ago

    really cool stuff, thanks for sharing!

    • @BrawlStars-jd7jh · 1 year ago

      I have a problem: when I run the last process in the 3D Photo Inpainting notebook, it says "TypeError: load() missing 1 required positional argument: 'Loader'". I already uploaded the depth and the base image.
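No reply appears in the thread, but that particular TypeError is characteristic of PyYAML 6.0, which removed the default Loader argument from yaml.load(); notebook code written against PyYAML 5 breaks until the call passes a Loader (or the environment pins pyyaml<6). A minimal sketch, with a hypothetical config snippet standing in for whatever YAML file the notebook actually reads:

```python
import yaml

# Hypothetical minimal config, standing in for the notebook's real YAML file.
doc = "model: midas\nsize: 384"

# Under PyYAML >= 6, a bare yaml.load(doc) raises:
#   TypeError: load() missing 1 required positional argument: 'Loader'
# Passing a Loader explicitly (or using safe_load) fixes it.
cfg = yaml.load(doc, Loader=yaml.SafeLoader)  # explicit Loader
cfg_alt = yaml.safe_load(doc)                 # equivalent shortcut
assert cfg == cfg_alt == {"model": "midas", "size": 384}
```

Patching the notebook's own call site this way (or running `pip install "pyyaml<6"` first) are the two usual workarounds.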

  • @importon · 2 years ago

    Have you lost interest in this stuff? Why no new content?

    • @WorldofDepth · 1 year ago

      As you can see, this video workshop was part of NSA Virtual 3D-Con 2021, and other videos of mine were made for similar conferences and some smaller regional events. I presented again at the 2022 3D-Con last month, but unfortunately, the conference was not recorded. If you want to see the latest 3D I'm producing, follow @WorldOfDepth on Instagram, and/or check WorldOfDepth.com (though I need to update that more!).

  • @donrikk8546 · 2 years ago

    @World of Depth hey my friend, I have been having a problem with my video anaglyph conversions. How do I convert a video into half-color or Dubois? Is there a way to do it in Adobe Premiere? Or any other program out there that can help me edit my videos to half-color or Dubois?

    • @WorldofDepth · 2 years ago

      I use FFmpeg myself, a free command-line program. The relevant filter is ‘stereo3d,’ which has all the options you mentioned: trac.ffmpeg.org/wiki/Stereoscopic. I'm sure there are other programs that will also do it, but I don't know which offhand, and this is the only one I can personally recommend.
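The reply above names the stereo3d filter; merging two separate eye files additionally needs something like hstack to join the eyes first. As a sketch, assuming FFmpeg's documented stereo3d options (sbsl = side-by-side left-first input, arcd = anaglyph red/cyan Dubois output) and placeholder file names, the command line can be assembled like this:

```python
# Assemble (but do not run) an FFmpeg command that merges a left-eye and a
# right-eye video into a Dubois red-cyan anaglyph. File names are
# placeholders; filter option names are from the FFmpeg stereo3d docs.
left, right, out = "left.mp4", "right.mp4", "anaglyph_dubois.mp4"

# Join the two inputs side by side, then convert SBS-left-first to anaglyph.
filtergraph = "[0:v][1:v]hstack,stereo3d=sbsl:arcd"

cmd = ["ffmpeg", "-i", left, "-i", right, "-filter_complex", filtergraph, out]
print(" ".join(cmd))
```

Swapping arcd for agmd or aybd would give the Dubois green/magenta or yellow/blue variants instead, per the same filter documentation.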

    • @donrikk8546 · 2 years ago

      @@WorldofDepth thank you man, I appreciate the response; it's very helpful! One last question for you, if you don't mind: is there a tutorial on how to use and incorporate this method?

    • @donrikk8546 · 2 years ago

      @@WorldofDepth because I use 3DCombine to do automatic depth maps and convert into anaglyph 3D just fine, but it comes out full color, and I would prefer Dubois. So is there a way to just run the video through something to get the Dubois color hue, and then run it through the 3DCombine program after? Basically, I don't want to use FFmpeg to convert to 3D; I simply want to convert the video to Dubois so I can then import it into 3DCombine.

    • @WorldofDepth · 2 years ago

      @@donrikk8546 I don't know 3DCombine at all. But what that FFmpeg filter above does is take a left video + right video as inputs and merge them into an anaglyph video. So if your other program can export L + R videos, you could then use FFmpeg to convert to a Dubois anaglyph video, etc.

    • @WorldofDepth · 2 years ago

      @@donrikk8546 official documentation for ffmpeg itself is at ffmpeg.org/ffmpeg.html. For something like a video tutorial on how to use it, you'd have to search around.

  • @metamind095 · 2 years ago

    Can I use MiDaS to convert 2D video to 3D somehow? What, in your opinion, is the best tool for 2D-to-3D video conversion? Thanks for the video.

    • @WorldofDepth · 2 years ago

      It's possible to do that with MiDaS frame by frame, but the resulting video will flicker, so I think it's best to use tools made specifically for video instead. “Consistent Video Depth Estimation” is near the bottom of my collected links (see video description), plus another, but I haven't used them myself. There are surely other similar AIs out there now as well.
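On the flicker point: per-frame depth estimates disagree slightly from frame to frame, which reads as shimmer in the converted video. As a toy illustration only (not the method used by the "Consistent Video Depth Estimation" work mentioned above), a simple exponential moving average over per-frame depth maps damps that jitter at the cost of some lag:

```python
import numpy as np

def smooth_depth(frames, alpha=0.3):
    """Exponential moving average over a sequence of depth maps.

    frames: iterable of HxW arrays (independent per-frame estimates).
    alpha:  weight of the current frame; lower = smoother but more lag.
    """
    ema = None
    out = []
    for d in frames:
        d = np.asarray(d, dtype=float)
        ema = d if ema is None else alpha * d + (1 - alpha) * ema
        out.append(ema)
    return out

# A jittery "estimate" of a static scene: depth alternates 1.1 / 0.9.
noisy = [np.ones((2, 2)) + 0.1 * ((-1) ** i) for i in range(10)]
smoothed = smooth_depth(noisy)  # oscillation is strongly damped
```

Real video-depth methods do far more than this (optical-flow-guided consistency, test-time fine-tuning), but the sketch shows why purely per-frame estimation flickers.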

    • @Rocketos · 1 year ago

      @@WorldofDepth can u upload a tutorial for using consistent video depth estimation ? Please

    • @WorldofDepth · 1 year ago

      @@Rocketos I don't have one to upload. Perhaps if I have time in the future.

    • @Rocketos · 1 year ago

      @@WorldofDepth thanks i love your workshops

  • @Ffwee · 2 years ago

    Wow - this video absolutely nails some of the issues I've been having with anaglyphs. Masterful work on the lower left window violation - can't wait to try that out. I've been resorting to cropping, which more than once has just moved the window violation to another object. I'm hoping I can use some of your color correction suggestions to deal with ghosting too. Every once in a while I have a really nice photo with some good depth that's just destroyed by ghosting. I know I can reduce the ghosting by tweaking the contrast and brightness or reducing the depth in the photo, but I hate to do that - it's not like I have objects popping out in front of the stereo windows. Anyway, really, really good stuff here, and I thank you for taking the time to share this.

    • @WorldofDepth · 2 years ago

      Thank you, Greg, I appreciate that! Ghosting is always tricky (and ghosting in print is another matter entirely as well). Glad this has helped.

  • @user-fv6nc7qi2x · 2 years ago

    When are you uploading about the AIs you talked about at the end? Instantly subbed

    • @WorldofDepth · 2 years ago

      Thank you! This workshop was for the NSA 3D convention, so I may possibly revisit this topic with those other AIs and newer ones for next year's Con. In the meantime I recommend checking out Ugo Capeto's YT channel for reviews of additional AIs.

  • @AntonBarcelona · 2 years ago

    Those masks just look stupid...

  • @AntonBarcelona · 2 years ago

    Looks like fake 3D-generated video... (the depth is too big)

    • @WorldofDepth · 2 years ago

      Indeed it IS generated, per the video description. The depth is designed for laptop or smaller size display.

    • @AntonBarcelona · 2 years ago

      @@WorldofDepth didn't read description... )... My mistake

  • @MichaelBrownArtist · 2 years ago

    12/16/21 MiDaS v.3 - Failed at the second step (load a model): ModuleNotFoundError: No module named 'timm'

    • @WorldofDepth · 2 years ago

      Hmm, I just tried the ‘upload version’ notebook and it worked fine. Did you miss the first code box, above “Uploading Your Image”? That is the step that installs timm. If you did run that, it may be that the bandwidth limit for a certain external file was reached, and it was temporarily unavailable, but it should notify you if that's the problem.

    • @MichaelBrownArtist · 2 years ago

      @@WorldofDepth , Maybe I did miss it. I'll try again. Thanks for preparing such a great presentation.

    • @MichaelBrownArtist · 2 years ago

      Your suspicion was correct. I missed the first step (install timm). I was able to run it, but the final depth map was very tiny: 284x217 px. Not sure why.

    • @WorldofDepth · 2 years ago

      @@MichaelBrownArtist ah, good. But yes, MiDaS v.3 outputs very small depth maps. I would recommend trying 1) starting with a larger input image, 2) using BMD + MiDaS v2.1 as a possible alternative, which outputs at original size, and 3) upscaling the v3 depth map and using something like a symmetrical nearest-neighbor interpolation method to smooth it. Ugo Capeto recommends the latter; I don't have a program which offers that method, so I've used ImageMagick and the "Kuwahara edge-preserving noise filter" with pretty good results.
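Option 3 above (enlarge the small depth map, then smooth the blocky result) can be sketched in NumPy. The plain box blur below is only a stand-in for ImageMagick's edge-preserving Kuwahara filter mentioned in the reply; function and variable names are illustrative:

```python
import numpy as np

def upscale_and_smooth(depth, factor=4):
    """Nearest-neighbour upscale of a small depth map, then a 3x3 box blur.

    Illustrates the workflow in the comment above: the tiny MiDaS v3 output
    is enlarged, then the blocky result is smoothed. (ImageMagick's
    -kuwahara operator is edge-preserving; this box blur is not, and is
    used here only to keep the sketch dependency-free.)
    """
    big = np.kron(depth, np.ones((factor, factor)))  # blocky upscale
    padded = np.pad(big, 1, mode="edge")
    # Average each pixel with its 8 neighbours (3x3 mean).
    smooth = sum(
        padded[dy:dy + big.shape[0], dx:dx + big.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return smooth

tiny = np.array([[0.0, 1.0], [1.0, 0.0]])  # toy 2x2 "depth map"
result = upscale_and_smooth(tiny, factor=4)  # 8x8, hard edges softened
```

In practice you would upscale to the original photo's resolution and use a genuinely edge-preserving filter, since blurring across a depth edge smears foreground into background.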

    • @MichaelBrownArtist · 2 years ago

      @@WorldofDepth , thank you.

  • @denapixking · 2 years ago

    This is so good. Wonder how you made the stereo rig - was it DIY, or did you purchase an existing rig?

    • @WorldofDepth · 2 years ago

      Thank you! For this I used 2 x Sony RX100 mark II on a simple Swiss-Arca rail I put together, plus sync cables. The connector for 2 sync cables is a bit tricky to find, but otherwise these parts are widely available. I recommend the "Sony3D" discussion board on groups.io if you want full specs/details on similar rigs.

  • @CabrioDriving · 3 years ago

    Do you have knowledge of how to convert a 2D photo to 3D in a way that makes you feel like you're standing, say, 1 or 2 feet from a huge 3D world-window, with proper, deep depth of the scene, without seeing everything flat and too wide? I was thinking about some mathematical function to convert the image to a different "lens angle," but I'm not sure if this is the right direction of thinking. Also, there is a formula for 2D-to-3D conversion with a camera focus point variable, a near-plane cut variable, and a far-plane cut variable. I wonder which parameter of this (or another formula?) you need to manipulate to get a 3D photo with ideal depth, with the scene as close to you as possible. Imagine the 3D photo as a huge window onto your garden, running from floor to ceiling, with you standing just next to it. Typically I notice that videos/photos are best viewed at 10 feet virtual (perceived in VR) distance, but that kills the feeling of presence in that world - it is just like seeing some window with 3D, too far away.

    • @WorldofDepth · 3 years ago

      What you describe sounds more like VR 180º 3D images to me. To have a feeling of standing close to a 3D scene, I think that's the only option, unless you render in full 3D. I don't work in 180º or 360º, but check out the 3D-Con workshops and Special Interest Groups about VR, from both this year and last year.

    • @CabrioDriving · 3 years ago

      @@WorldofDepth I wasn't thinking about the super wide angle of VR180, but let's say 100-110. Just with deep scene depth. I will study whether it is possible to do what I described. Cheers

  • @CabrioDriving · 3 years ago

    So you have 2D images + depth maps. Which software/GitHub project should one use to produce the two stereo images? StereoPhoto Maker is not good, in my opinion.

    • @WorldofDepth · 3 years ago

      I use SPM. Which, by the way, produces much smoother stereopairs if you appropriately upscale your 2D image + depth map first. For example, if you're using an SPM deviation value of 60 = 6% of image width, then for a 256-level depth map you need a 4267px-wide image. You could also take advantage of the 3PI AI and produce a panning video with it, then extract a stereopair. Not time-efficient, but it would have the best inpainting.

    • @CabrioDriving · 3 years ago

      @@WorldofDepth Hi. Thanks for your time and valuable answer. What I noticed is that SPM has problems with depth recognition even when the depth map looks correct visually. Also, it tears away surfaces like faces when you produce 3D, even with a deviation of 25 or 30 (the default). Also, the produced images look downscaled in quality. I have a depth map and I can see that the depth is represented correctly in it. Then SPM makes a flat foreground and correct background, hmm... or vertically sliced/layered depth, or some things are flat and others are correct. I have spent a lot of time on this software (even with the Google AI working from my disk) and never produced a great 3D photo from depth maps made with MiDaS or LeReS and some other AI software. So that is why I asked about some other project to produce correct images. Good point on 256-level depth maps and image size.

    • @WorldofDepth · 3 years ago

      @@CabrioDriving Almost every AI-produced depth map needs manual corrections, I think, especially if it's an image with people's faces at any significant size. As I say in the video, it can be convenient if you're producing 2D animations that are more forgiving of the depth map, but otherwise it's always going to be work, at this stage of the tech… The sliced/layered depth problem you mention is exactly the image size issue I mentioned. Upsizing for stereopair generation and then downsizing back to original size should smooth those areas.

    • @CabrioDriving · 3 years ago

      @@WorldofDepth Thank you for your priceless comments. 1. How did you calculate the needed resolution of 4267 pixels wide? 2. What should the image width be for 3% deviation? 3. What deviation % do you suggest for the best effect?

    • @WorldofDepth · 3 years ago

      @@CabrioDriving 1) If you use an 8-bit grayscale depth map that is normalized to go all the way from pure black to pure white, it has 256 different depth levels. If you want to differentiate all of those in a stereopair, then those levels will correspond to horizontally shifting parts of the original image between 0 and 255 pixels. If the maximum shift you want ( = deviation) is 6%, that means 255 pixels must be 6% (or less) of your image width, so WIDTH * .06 = 255, and WIDTH = 4250px. (My original number was slightly off.) 2) By the same method, you need an 8500px-wide image if deviation is 3%. 3) It depends on the picture and the intended use. 3% was the old rule of thumb, and that's good for TV size, I think. For laptop or phone screen size, I probably use 4.5-6%, or rarely up to 8% for a very deep scene. For display via a projector at wall size, maybe you'd want to go down to 1.5%.
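The arithmetic in point 1 can be captured in a small helper (the function name is illustrative; 255 is the maximum pixel shift of a normalized 8-bit depth map, as explained above):

```python
import math

def width_for_deviation(deviation, depth_levels=256):
    """Minimum image width (px) so every level of an 8-bit depth map gets a
    distinct horizontal shift, given a maximum deviation fraction.

    A normalized 8-bit map spans depth_levels - 1 = 255 one-pixel shift
    steps, and that 255 px must be at most deviation * width.
    """
    max_shift = depth_levels - 1
    return math.ceil(max_shift / deviation)

# The cases worked out in the comment above:
print(width_for_deviation(0.06))  # 4250 px for 6% deviation
print(width_for_deviation(0.03))  # 8500 px for 3% deviation
```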

  • @WorldofDepth · 3 years ago

    Note that as of 8/17/21, the 3D-Photo-Inpainting Colab Notebook is NOT working due to missing files. I’ve opened a new issue report with the researchers on GitHub and will update this comment with developments.

    • @WorldofDepth · 3 years ago

      8/18/21: Files restored and working again :) Reference: github.com/vt-vl-lab/3d-photo-inpainting/issues/131

  • @kbqvist · 3 years ago

    Thanks Gordon, great overview of what is possible and what can be expected soon, and also very helpful for getting started with Google Colab. For people who have access to the latest version of Photoshop (22.4.x), it may be worth noting that Photoshop now has a number of so-called neural filters, including ones for making a depth map from a single image (the depth blur filter; check the 'output depth map only' box), colorization, and resizing. The ones mentioned are all at a beta stage of development - or, put differently, under constant improvement. They are accessed through the filter menu: Filter/Neural Filters/'filter of your choice'

    • @WorldofDepth · 3 years ago

      Thanks for the details. Yes, I’d heard about some of these, but do not have PS. Ugo Capeto has said he believes the PS depth map algorithm operates in a way similar to BMD.

    • @kbqvist · 3 years ago

      @@WorldofDepth I actually provided the PS depth maps he used for making his comparison :-)

  • @jimpvr3d289 · 3 years ago

    VERY interesting! But if the depth AI produces only depth maps... what produces the missing pixel information (like the clouds behind Mr. Rogers' head)? Is StereoPhoto Maker doing that, and why doesn't the AI do that also? Thank you

    • @WorldofDepth · 3 years ago

      So the painting-in of those kinds of background spaces is exactly what the 3D-Photo-Inpainting AI does - the zoom-in animation at 22:42 is an example. And it does that based on everything it has learned from massive amounts of training. The SPM animation at 23:25 does do some amount of inpainting as well, but I think it's based on straight calculations and copying existing pixels, rather than on AI, and I think it's not as smooth.

  • @kbqvist · 3 years ago

    Really impressed with what you have managed to do here Gordon :-)

  • @jimpvr3d289 · 3 years ago

    I never knew you could take stereo with ONE mirror! I always experimented with splitters with 2 or 4 mirrors. Thank you. I'll try my hand at this with my cell phone! Your videos are so informative... as I was watching I thought, "I never heard of a 'front surface' mirror - where would I find that?" and of course you answered both. Thank you

    • @WorldofDepth · 3 years ago

      Yes, there are so many methods. Folks get terrific results with custom 4-mirror camera attachments and such. But as most people (myself included) don't have the machining skills/means to make those, I think this is a great alternative! By the way, it's not in this video, but I use a simple DIY rig for the mirror now: worldofdepth.com/daily/201023.html

  • @jimpvr3d289 · 3 years ago

    Thank you for posting some cross-view content. You may know about old.reddit.com/r/CrossView/. Cross-view is my favorite because there are no restrictions on size. I suppose the audience is limited because it seems to be an acquired skill. The 'demise' of stereo is probably because there is not just one way to create and view content. (YouTube used to have us covered by being able to play ANY format (2D, parallel, cross, anaglyph), but now their player will only handle 2D, anaglyph, or VR.) Have you ventured into VR yet? Thank you

    • @WorldofDepth · 3 years ago

      Cross-view is my default method as well, for anything much larger than phone screen. Yes, the formatting is a perennial question for 3D… I got into VR last fall and love it. You can see some 3D videos I extracted from an Oculus Quest 2 on this channel. I'll take a look at yours the next time I browse in VR.

  • @3dtimetraveler806 · 3 years ago

    So cool Gordon! Would love to have you do a segment for a future 3D Grab Bag if you were interested! ruclips.net/video/CYILZTiFvv0/видео.html This one featured Dr. Stereo (who I think you know), one great shot from David Hazan (who you also know and I greatly admire) and 3D Phil Brown. LOVE YOUR VIDEOS!

    • @WorldofDepth · 3 years ago

      Thank you-I appreciate that! And wow, sounds like a great show-I'll check that out, and would surely be interested in joining sometime. Email is best to contact me-bottom of the page here (don't want to put it on YT): worldofdepth.com/about.html. Cheers!

  • @viveksmartguy · 3 years ago

    Very catchy and upbeat Gordon! Love your stereos on instagram. -Vivek Waghmare aka @vidi.3d (instagram)

  • @viavillecinque · 3 years ago

    very amusing

  • @kbqvist · 4 years ago

    Thanks, really interesting; I never heard about this method before! I wonder if there are any good ideas to help fix the position of the mirror relative to the lens? Or is this a handheld exercise where part of the fun is that you never know if you have succeeded until you process the image :-)

    • @WorldofDepth · 4 years ago

      Thank you. If by "fixing" you mean with a built device, see the video description link to Donald Simanek's gallery. For handholding, it's very important to be steady and consistent, and I do recommend systematic experimentation when starting out, in order to find the best mirror position and angle. Once you've determined that and can reproduce it reliably, you should be able to get consistently good stereos this way (with a few expected failures as always). See the 2nd half of this article for more on fine-tuning mirror position/angle: worldofdepth.com/tutorials/mirror.html

    • @kbqvist · 4 years ago

      @@WorldofDepth Thanks Gordon!

    • @WorldofDepth · 3 years ago

      Karsten, in case you didn't see it, I did post about a very cheap mirror + phone (or small camera) rig you can assemble yourself. It's how I take catadioptric stereos now, and the stability also enables videos! worldofdepth.com/daily/201023.html