DaVinci Resolve Depthmap Convert 2D to 3D

  • Published: 5 Sep 2024
  • In this tutorial I try to simplify the process of turning 2D media into stereoscopic 3D media! Using DaVinci Resolve Studio and the depthmap function, I will show you how you can convert 2D video into 3D video quickly. You probably won't get Hollywood-style results, but DR does a good job of adding depth!

Comments • 37

  • @Meteotrance
    @Meteotrance 4 days ago

    The Depth Map generator was made for relighting, or even for making bokeh blur effects. It can also be used for 3D compositing with many plates, but it is so memory- and GPU-intensive that I need to do my flat 3D compositing directly in Blender, using ACES and OpenEXR file sequences. Blender can handle many 2D plates with displacement and normal maps made in DaVinci without crashing. I also extract a normal map in DaVinci Fusion; it works very well for relighting. I also need to generate an alpha map for good separation between the converted layers: for example, I shoot a background, then shoot myself or my actor on a green screen, or use rotoscope separation to control the distance between plates. But my background can also be virtual, in full real 3D with Blender.

  • @Asgaurd64
    @Asgaurd64 7 months ago +2

    Great video, a lot easier to follow. I have tested a lot of the so-called 2D-to-3D (SBS) converters, but they leave a lot to be desired, to be honest. At least this workflow allows you more control. Cheers.

    • @WheatstoneHolmes
      @WheatstoneHolmes  7 months ago +1

      Thank you! I was thinking maybe one could render just the depthmap as a video, paint more details into it using Photoshop or the like, bring that edited depthmap video back into DR, and use THAT as the input to the Displace3D!

  • @rainbowtoyabc7155
    @rainbowtoyabc7155 7 months ago +2

    To fix the black bars, you have to go into the project settings and adjust the dimensions from there as well as when you build it. This took me way too long to figure out, lol.

  • @clamojoat8543
    @clamojoat8543 10 days ago

    Try this: click on your input, then click on the ImagePlane button, then the camera. This will set you up for a conversion. Then on the Merge3D, add a Displace3D and then a Renderer3D, and hook that up to the output. Add the depth map to the Displace3D. To set up anaglyph viewing, see the "..." on the upper right of the video window; hit that, select Stereo at the bottom, and at the bottom of that hit Enable. This turns on anaglyph viewing mode. Then you can go back into that menu and change other settings. If you're interested, I can give you more info to improve things further.
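
    The node chain above boils down to shifting pixels horizontally by an amount driven by the depth map. As a rough illustration of that idea outside Fusion (a minimal NumPy sketch, not DaVinci's actual algorithm; `shift_eye` and `max_shift` are made-up names):

```python
import numpy as np

def shift_eye(frame, depth, max_shift=8):
    """Synthesize one eye's view by displacing each pixel horizontally
    in proportion to its depth value (0.0 = far, 1.0 = near)."""
    h, w = depth.shape
    out = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        # nearer pixels shift further, simulating parallax
        shift = (depth[y] * max_shift).astype(int)
        src = np.clip(cols - shift, 0, w - 1)
        out[y] = frame[y, src]
    return out

# tiny example: a 1x4 grayscale row with a "near" pixel on the right
frame = np.array([[10, 20, 30, 40]], dtype=np.uint8)
depth = np.array([[0.0, 0.0, 0.0, 1.0]], dtype=np.float32)
right_eye = shift_eye(frame, depth, max_shift=1)
```

    The second eye view would use the opposite shift direction; Fusion's Displace3D achieves the equivalent by extruding the image plane in 3D.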

  • @silvertube52
    @silvertube52 7 months ago +1

    You should also be able to output custom dimensions of 3840x1080 for full side-by-side. I just made a short video of stereoscopic animations rendered from Blender at 3840x1080 (two 1920x1080 views side by side), simply output in the same format.

  • @storyfirstfilms149
    @storyfirstfilms149 4 months ago +1

    fantastic. thank you very much

  • @makinnoizemedia
    @makinnoizemedia 7 months ago

    Thanks again for this - I'll be trying it!

  • @mm2dip
    @mm2dip 1 month ago

    Thank you so much for this information.
    My question is: can this be used for 360 videos?
    Thank you for your time.

    • @WheatstoneHolmes
      @WheatstoneHolmes  21 days ago

      I think you would just use a spherical camera output at the end of the flow.

  • @escalus84
    @escalus84 7 months ago +1

    It would be interesting to see a comparison between this and Owl3D: pricing, depth map accuracy, and ease of use (which, of course, Owl3D wins, but at what cost or compression).

    • @WheatstoneHolmes
      @WheatstoneHolmes  7 months ago

      I can't find any details on Owl3D regarding price, but I haven't downloaded it to try yet. In the video demo it looks very similar to DR + depthmap.
      Owl3D looks like it will have more output options, such as VR, etc., and will be easier to use. DR Studio costs $295 (US) or is included free with Blackmagic cameras.

    • @rwernyei
      @rwernyei 6 months ago

      @@WheatstoneHolmes Once installed, select Upgrade at the top left and it will show you the prices and features of the Starter vs. Plus vs. Pro plans. Also worth noting: the free Starter version limits output to 1080p and only 1 minute of video length.

    • @clamojoat8543
      @clamojoat8543 10 days ago

      @@WheatstoneHolmes Don't bother with Owl3D, there are loads of problems with it. Also, old accounts don't work with the new versions. I also talked to the developer, and he's an idiot. People just like the program because it's simple to use, but it's junk.

  • @antv311
    @antv311 4 months ago +2

    I'm not sure why you're crapping on the last attempt; I've had to refer back to that video at least 20 times to make my home movies into VR content. The only thing I changed was that I cut the subject(s) out of the background. Then I projected the background and mapped it using the method below (sometimes in tight shots I need to piece together a background or completely fabricate one). Then I copy it, cut the background out so it's just the subject this time, and boom: I use your method on both of them, line them up, and we have VR.
    I used this method: ruclips.net/video/FnAjIDcJdhM/видео.htmlsi=mSQPlZ2C2C5XQL9n

  • @ShaunRoot
    @ShaunRoot 5 months ago +1

    This doesn't work well at all for me. It's either slightly 3D and super distorted, or you can't notice the 3D effect at all. For anaglyph, I think what you want to do is split the channels into R, G, and B, then displace the R and B channels separately with the depth map/displacement maps. I would have to play with it, but it at least seems like there would be more control there.
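
    The channel-split idea described here is the classic red/cyan anaglyph assembly: red from the left-eye view, green and blue from the right-eye view. A minimal sketch, assuming you already have two (e.g. depth-displaced) views; `anaglyph` is a made-up helper name:

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: red channel from the left-eye view,
    green and blue channels from the right-eye view."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]  # replace red with the left eye's red
    return out

# tiny example: one pixel, left eye all 100, right eye all 50
left = np.full((1, 1, 3), 100, dtype=np.uint8)
right = np.full((1, 1, 3), 50, dtype=np.uint8)
mixed = anaglyph(left, right)
```

    Displacing the R and B channels separately, as the comment suggests, would mean running each channel through its own depth-driven shift before this recombination step.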

  • @sweetleaf7751
    @sweetleaf7751 2 months ago

    Is there a template to use for the Meta Quest 2 or 3? I ask because so far no software has worked for me; they only gave the movie depth, not true 180 or 360 VR.

    • @WheatstoneHolmes
      @WheatstoneHolmes  2 months ago +1

      I'm not sure if depthmap 3D conversion is good enough for VR. There is a spheroid camera built into DaVinci to render 360. I think you just set the cameras to spheroid near the end of the flow before the render node.

    • @sweetleaf7751
      @sweetleaf7751 2 months ago

      @@WheatstoneHolmes Thank you! It's funny though, fisheye gives it even more effect too.

  • @williamcousert
    @williamcousert 24 days ago

    Do you have any updates to this?

    • @WheatstoneHolmes
      @WheatstoneHolmes  19 days ago

      No, sorry. I haven't made any more experiments in this area. Maybe I'll try some 360 stuff in the future because at least two people have mentioned it.

  • @donrikk8546
    @donrikk8546 6 months ago

    Would you be able to add disparity for pop-out purposes, say for a Bigscreen VR cinema experience? I know there is a disparity mode, but how exactly would I set it up in unison with this node map you've created? I know other standalone programs use disparity to increase pop-out of the foreground; I assume you can do the same here, since the Disparity node exists and also uses the depth map.

    • @WheatstoneHolmes
      @WheatstoneHolmes  6 months ago

      Like yourself, I know disparity exists, but I have no experience using it, sorry. It sounds plausible though.

  • @eyeemotion1426
    @eyeemotion1426 6 months ago

    Hi, I used this method to try to convert an 'old' movie from 2D to 3D. Or rather, the method from your older video, with the Camera3D still in it.
    The first try didn't give a good result, but when I found out what I did wrong, the second try was much better. Although it is serviceable, it could still be better. Because of the extrusion of the image plane, you obviously end up with warping around the '3D' parts, especially the harder you want to make things pop. So I was thinking of creating 'layers' with different Depth Map settings: for example, one for things in the back, one for the main subjects in the movie, and one for some elements in the foreground (maybe even 4 'layers').
    As I'm not yet too familiar with DaVinci Resolve beyond basic editing, I was wondering if you could help me figure out how to do some things I want to do:
    - From a MediaIn, I want to use a Depth Map with the 'Isolate' settings as a mask for a certain level of depth: one for the background, one for the main subjects (where the focus of the movie is), and one for stuff in the foreground. So each of them probably needs its own ImagePlane, so I can offset them a bit in 3D space? Although at the moment, I only seem to get one ImagePlane in view.
    - I want to use these depth maps to mask another Depth Map whose settings are set specifically for that 'layer'. For example, for the 'background' ImagePlane I can set more extreme values to get a 3D 'pop' in there, without having to worry that the main subjects are extruded (and thus deformed) too much. Same for the other 'layers'. So each 'layer' would also have its own Displace.
    - Can one MediaIn go into several nodes, or do I have to keep copying a new MediaIn?
    So how would that look, using the setup from this video (or better yet, your older video with the Camera3D)? I think I'm combining things in the wrong order and/or attaching to the wrong inputs/outputs.
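
    The per-depth 'layers' described in the bullets above amount to thresholding the depth map into masks, which is what the Depth Map node's 'Isolate' settings are being used for. A minimal sketch of the thresholding itself (illustrative only; `depth_layers` and the cut-off values are made up):

```python
import numpy as np

def depth_layers(depth, bounds):
    """Split a normalized depth map (0 = far, 1 = near) into boolean
    layer masks using threshold bounds, e.g. background/mid/foreground."""
    edges = [0.0] + list(bounds) + [1.0 + 1e-6]
    return [(depth >= lo) & (depth < hi) for lo, hi in zip(edges, edges[1:])]

# three pixels at increasing nearness: background, midground, foreground
depth = np.array([[0.1, 0.5, 0.9]])
bg, mid, fg = depth_layers(depth, (0.33, 0.66))
```

    Each mask could then gate its own ImagePlane/Displace pair, so extreme settings on one layer don't deform the others.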

    • @WheatstoneHolmes
      @WheatstoneHolmes  6 months ago +1

      Good idea; you would probably need to make many copies of MediaIn. Like I said, I don't know what I'm doing, as I haven't used DR for very long. I'll see if I can stumble upon a better solution.

    • @eyeemotion1426
      @eyeemotion1426 6 months ago

      @@WheatstoneHolmes Well, I'm willing to experiment and try it all myself, but my DaVinci Resolve knowledge isn't that great. I don't even know which color stands for what on the nodes, just that blue is the alpha channel, so the mask would need to be pinned in there. But as it's not working at the moment, I'm probably pulling out other information instead of the 'mask'.
      I know I'm close though; I'm just fumbling somewhere. I have a feeling I'm doing just a thing or two wrong, which prevents me from pulling it off.
      I know I could use Magic Mask as well to cut out the main subjects, but that still involves some manual selection for each shot.
      It's better to have most things automated and then do some manual work to touch up.
      Btw, do you know the difference between an ImagePlane node and a Bitmap node? I've seen some tutorials using a Bitmap node to mask, or something.
      Edit: OK, got a bit further and was finally able to mask out the portion I needed with a DepthMask. I don't know if it is the correct way of doing it, but it's progress nonetheless.

    • @eyeemotion1426
      @eyeemotion1426 6 months ago

      I've now discovered that if I mask the MediaIn first, everything masked automatically becomes white in the following DepthMap, which actually hinders getting a decent 3D pop from what was cut out. So I need to make the 3D pop first and apply a 'mask' afterwards. That's the part I'm trying to figure out now.
      It also seems that, because of the offset between the two cameras, I'm seeing the gaps as well. I don't know if Fusion has some 'inpainting' as well?

    • @WheatstoneHolmes
      @WheatstoneHolmes  6 months ago

      @@eyeemotion1426 Hover your mouse over the node input/output colors and a label should pop up telling you what each color means. Good luck!

    • @eyeemotion1426
      @eyeemotion1426 6 months ago

      @@WheatstoneHolmes With my limited knowledge of DaVinci Resolve, I was finally able to get it 'working'. It works, but it also doesn't. For a still frame, what I have now would be sufficient, but by the nature of video, it falls apart when things start moving: a whole slew of new problems arise as characters move in and out of different depths and such.
      So it's back to the 'drawing board' to further refine what I already have working, because I still think I'm on the right track. But I think it's going to get more elaborate as I go.
      At least it's a good way to learn DaVinci Resolve beyond just editing and rendering with it.
      Do you by any chance know how to do some inpainting/content-aware fill in Fusion? I tried it with CleanPlate, but that doesn't give any satisfactory results.
      I need something for the background layer, to fill in the gaps that were cut, because with stereoscopy you'll see those gaps.

  • @rwernyei
    @rwernyei 3 months ago

    I combine the depth maps from both Owl3D and DaVinci Resolve Studio. Anaglyph looks fantastic on my 75" Roku Plus TV. My workflow used WheatstoneHolmes' first video along with additional nodes. ruclips.net/video/nxGD6NgXpgE/видео.html