Thanks for another wonderful video Hugh. You're a real asset to the VR and 3D community.
My pleasure!
Thanks for sharing what nobody else does, and exactly what we are interested in!
Oh wow, thank you for the support! It keeps our lights on so we can share more unique workflows and research 🙏🏼
This opens up some great possibilities. Thanks for the great explanation.
Thank you so much for all you do for us 🙏 Can I use that app with an iPhone 12 Pro Max?
Only iPhone 15 Pro / Pro Max can do Spatial Video now. I don't think iPhone 12 can.
Great info! You are at the very forefront of a massive wave of new ways to communicate. Thanks so much for helping newbies like me wade into the technical requirements.
Happy to help!
Hi, I really enjoy your content. If I purchase the QooCam EGO, can I edit in FCPX and DaVinci Resolve? Thanks.
Does Hugh Hou have a tutorial on stitching footage from two ordinary fisheye lenses into a 3D video? I am willing to pay, thank you. It's not a VR lens; I plan to use two 24mm lenses to splice and display on the Vision Pro.
You said in the video that DaVinci has the most powerful 3D alignment tool you've seen in the industry. Can this replace Mistika VR? I'm going to be experimenting with different self-made 3D cameras soon, and I'm wondering if I can skip Mistika VR straight away. Can the usual alignments be carried out completely in DaVinci?
Resolve is not for VR - just 3D alignment. For VR stitching you still need Mistika VR.
That was completely awesome. Thank you for taking the time.🤩
Our pleasure!
Great video for beginners! I purchased DaVinci Resolve to edit 3D videos but hit a snag: the 3D option is not clickable. You mentioned that with the paid version there is no need to split the right and left videos. Do you have a video for this?
I will release it soon, but I did mention it here - Resolve 19 now has 3D built in, so you don't need to split them anymore. It is smart enough to detect the left and right eye. I will make a more in-depth tutorial soon: ruclips.net/video/PJWsscXmJiE/видео.htmlsi=J-9gHe9-pUF4f-o7
@@hughhou Thank you! I have imported the videos, but for some reason they are not showing in 3D, and I have not found any good tutorials. Do you think this is because of the way the videos are imported from Photos (video taken with an iPhone 15)?
Still the same question: is it possible to crop VR180 video to regular 16:9 spatial video using DaVinci Resolve/Fusion? It would be great to have a tutorial on this!
Yes, absolutely. I just have not found the time to make one lol. It is planned :)
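In the meantime, here is a rough command-line sketch of the idea using FFmpeg's v360 filter (assuming side-by-side half-equirectangular VR180 input; filenames and FOV values are placeholders to tune):
# Crop out each eye, reproject half-equirect to a flat 16:9 view, restack:
ffmpeg -i vr180_sbs.mov -filter_complex "[0:v]crop=iw/2:ih:0:0,v360=input=hequirect:output=flat:h_fov=90:v_fov=50.6:w=1920:h=1080[L];[0:v]crop=iw/2:ih:iw/2:0,v360=input=hequirect:output=flat:h_fov=90:v_fov=50.6:w=1920:h=1080[R];[L][R]hstack[v]" -map "[v]" -c:v prores_ks flat_sbs.mov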
Hi Hugh… awesome info… Just a question: is there any alternative for extracting the left and right video other than the FFMPEG workflow? I have always seen this as a complicated option based on coding/command-line structure. Thanks a lot for your effort communicating new information about the whole VR and 3D workflow.
Oh yes, just use any NLE to crop left and crop right and render them out. I do that too sometimes, and it is easier since you can copy and paste and use Media Encoder to queue them all up.
Can you kindly provide a link to that phone lens? Is this like a CPL filter?
It's somewhat confusing. What did you use to upscale it to 4K in the previous video? And how do you upload to RUclips?
Wait, this one I shot on iPhone and did not upscale. Which previous video are you referencing? Feel free to make a new comment or comment on that video directly so I know which one you are talking about.
Thank you so much Hugh! Time to start making 3D videos again… 🎉
You are very welcome! Yes, time to get back to 3D.
@@hughhou What do you think of my first spatial editing? ruclips.net/video/XX37eWkVu9I/видео.htmlsi=IQwGlO7ITaXHAMJJ
Any tutorials on taking our existing VR180 and converting it to Apple Spatial Video? Would be nice to bring new life to some old VR180 footage
I will need to get the Vision Pro in hand to figure it out, so I will follow up on February 2nd! Doing my best!
@@hughhou amazing thanks Hugh! You're the best
So, does this workflow behave any differently if I'm starting out with equirectangular left/right footage? Color correction wouldn't, but won't you have issues with spherical/planar elements for any graphics? And does MV-HEVC have any connection with spherical video metadata?
For immersive video (Apple's term), there are too many unknowns. I will need to get my hands on a Vision Pro first to do more reverse engineering to figure this out. So hold tight on immersive video - which is 180 and 360. I will follow up as soon as I figure out how to do it (I have not figured it out yet lol - to be 100% honest).
@@hughhou I've thrown K1 Pro footage through Halocline and tried it in the Simulator, and it plays as 2.5K square left eye using their media player calls. Can't wait to see it immersively…
How do you output to the Meta Quest 3? I'm trying to figure it out.
Spatial video? You will need an iPhone and the latest Meta Quest iOS app - then just upload it.
@@hughhou Sorry, I mean can you output from Resolve to the Quest without first exporting the file?
Do you also use ProRes on a Windows PC, even though it has no hardware acceleration there?
(I'm speaking of 360 3D 8K footage, if that makes a difference.)
ProRes works well on PC too, with an NVIDIA GPU and hardware acceleration.
Another great video! Are you using DaVinci Resolve Studio (the paid one)? It looks like the 3D tools might not be in the free version?
That is correct. All 3D and VR180 features are only in the Studio version. I bought a Blackmagic 6K camera, so I got it for free.
I think you mentioned in your previous vid that you upscaled to 4K using Topaz. Do you do that before you do any editing?
Is there a way to export a RUclips-ready file from Spatialify? I tried all of the export options and none of them worked when watching in RUclips VR.
In this video's description there is a download link for the FFMPEG commands. Download the doc and there is a command line to do exactly that. Just use it and it will work for RUclips 3D.
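For reference, a command of roughly that shape (a hedged sketch with placeholder filenames, not necessarily the exact one in the doc) stacks the two rendered eyes into full side-by-side and encodes for upload:
ffmpeg -i left.mov -i right.mov -filter_complex "[0:v][1:v]hstack[v]" -map "[v]" -map 0:a? -c:v libx264 -crf 17 -pix_fmt yuv420p -c:a aac sbs_upload.mp4
# RUclips additionally needs stereo metadata on the file; see the injector discussion just below.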
I tried building the Spatial Media Metadata Injector v2.1 and running it on an M1 Mac, but nothing happens. Based on previous videos, it seems this is needed to inject the metadata for DaVinci Resolve exports. Any clues on how to export to RUclips VR despite this issue, and whether the metadata injection is still needed? Thanks!
You need Python, and you run the Python file in Terminal. But anyway, Resolve does NOT inject any metadata. You will need to inject it manually.
Thank you. I built it using Python 3, and there was a resulting gui.app. I clicked on it, but nothing happens. Not sure if there are logs I can look at to debug the issue.
I may build on a Windows machine to get around this.
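If the gui.app silently fails, one possible culprit (an assumption, not verified) is a Python 2/3 mismatch - the stock v2.1 injector predates Python 3 support. Skipping the GUI and running the injector headless from Terminal is worth trying; a sketch, with the exact flags documented in the repo README at github.com/google/spatial-media:
git clone https://github.com/google/spatial-media.git
cd spatial-media
# -i = inject metadata; tag a side-by-side VR file as left-right stereo:
python spatialmedia -i --stereo=left-right input_sbs.mp4 output_injected.mp4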
Thank you for this video. Well done!
My pleasure!
Hello, I am so sorry to have to ask. I'm very new to all of this and really interested in learning how to do it. Why can't we take the video recorded on an iPhone 15 Pro and edit it directly in DaVinci, without splitting into left and right, if the video captured on the iPhone 15 is already 3D? Really appreciate any answers. Please don't hate me. Thank you.
Don't be sorry to ask. This is why I am here. Always feel free to ask anything you don't get! So, you can actually edit the video (trim) directly on your iPhone 15 - you don't need any software. But if you want to get the best stereo, control the disparity, or add graphics, then you need other software. You can edit it straight up, but then you don't know your stereo quality and you risk making your viewers cross-eyed - avoiding that is the #1 rule in 3D: depth editing. I hope that makes sense. If you edit directly on your iPhone, then you don't need to worry about any of this.
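For the curious, the classic depth adjustment Hugh is describing (horizontal image translation, or HIT) can even be sketched in FFmpeg: shift the two eyes of a side-by-side clip a few pixels relative to each other to move the whole scene in depth. Filenames and the 8px shift are placeholder values:
# Crop 8px off the left eye's left edge and the right eye's right edge, then
# restack - relative disparity shifts by 8px, pulling the scene's depth:
ffmpeg -i sbs.mov -filter_complex "[0:v]crop=iw/2-8:ih:8:0[L];[0:v]crop=iw/2-8:ih:iw/2:0[R];[L][R]hstack[v]" -map "[v]" -c:v prores_ks sbs_shifted.mov
(In Resolve you would use the stereo 3D convergence controls instead of raw crops.)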
Thank you so much for getting back to me @@hughhou, really appreciate you! Further risking sounding really clueless: does this mean I can treat it like a normal video on an iPhone, stitch the clips together using iMovie, and play it on the Vision Pro, and it would work? Not minding the video quality etc., of course.
Hello @hughhou, really struggling here. How can I splice together the spatial videos we took on an iPhone 15 without really needing to edit them - just splicing them and getting them back to a format that will play in the Vision Pro? Sorry, I've been scouring the net these past couple of days and have not gotten anywhere. Your help would be truly appreciated.
Hi @hughhou, so I followed your video step by step, but when my boss plays it on the Apple Vision Pro she says: "on the video you sent, the entire window is filled with the video, stretching it in all directions, like those weird mirrors at fairs, and the feeling is that you are at the edge of a cliff". Not sure what I am doing wrong - I followed all your steps. The video source is from an iPhone 15 Pro; I used ffmpeg, Spatialify, DaVinci Resolve Studio - all the steps. I don't know what I am missing… 😢😢😢
Any news about the Canon? I need a 3D VR180 camera 😢
Me too 🙂
I hope soon. I am gonna ask them at CES 2024 - and if there is any news, you will hear it here first
@@hughhou That R7 lens is amazing - when will it go into mass production?
Excellent video! I'm playing it back to learn. I just had a problem dividing the video into left and right - the command gives this error. Can you help me?
[in#0 @ 0000028512fd87c0] Error opening input: Invalid argument
Error opening input file *.MOV.
Error opening input files: Invalid argument
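A note for anyone hitting the same error: FFmpeg does not expand the *.MOV wildcard itself, and the Windows shell does not expand it for you either, so FFmpeg is literally looking for a file named "*.MOV". Pass the real filename instead - a sketch of the split step with a hypothetical filename:
# Replace *.MOV with the actual file (quote it if the name has spaces):
ffmpeg -i "IMG_0001.MOV" -filter_complex "[0:v]crop=iw/2:ih:0:0[L];[0:v]crop=iw/2:ih:iw/2:0[R]" -map "[L]" -c:v prores_ks left.mov -map "[R]" -c:v prores_ks right.mov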
You don't need it now! The new update in the 19 beta supports spatial video natively!!!!
Excellent tutorial on the steps required to properly edit 3D Stereo content.
Glad it was helpful!
Would you be able to edit in Final Cut Pro since they have VR editing?
Not that I know of. FCP does not support stereo 3D editing YET.
Hello Hugh. Love your work.
I wanted to ask a small question.
Is there a way to merge 2 VR180 photos together, like can be done with 2 normal photos?
Merge meaning merge into a 360 3D photo, or still a VR180 photo? Yes, you can - just paint out the overlap, but again, you need to do it in 3D. A 3D photo maker is how I would approach this.
@@hughhou I have 2 photos that I took on my Insta360 EVO. One is a female model in a white studio and the other is a nature photo.
I wanted it to look like the model is in the nature photo.
I will try the overlay method you mentioned, thank you.
@@hughhou Is there a 3D photo maker that you can recommend?
I found that Halocline's compression is really bad - if you use the paid version, does it get better?
No. Use this workflow and FFMPEG instead: ruclips.net/video/BHtKOxGEiAw/видео.html - it is the last step in this tutorial, 3D to spatial.
@@hughhou thanks for the response and for all of the useful resources
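For the command-line inclined, Mike Swanson's `spatial` tool for macOS is another route for that last 3D-to-spatial (MV-HEVC) step. A hedged sketch - the flags below are from memory of the tool's README and may have changed, so verify with `spatial make --help`:
# Convert a full side-by-side master into an MV-HEVC spatial video file:
spatial make -i sbs_master.mov -f sbs -o spatial_out.mov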
Is it possible to convert Vuze XR VR180 video to Apple's spatial video? If so, how?
Same as any other VR180 video. You just need to inject spatial metadata - I will talk about it when I get my Vision Pro and can do more in-depth testing. Stay tuned!
Great video! Very helpful 😊
I'm so glad!
Thanks for your tips, you rock, my « boy Hugh »… Too bad I still haven't got the money to get these devices (iPhone 15 Pro 1TB / Apple Vision Pro).
You can use any 3D camera btw! Just skip the MV-HEVC part and go straight to Resolve to edit 3D, then encode for Vision Pro. You can play the video in the visionOS Simulator.
The lens separation, in my experience, needs to be near eye width, as it is on my Fuji 3D camera & Panasonic 3D1. Following this development with great interest, as I still like my own 3D gear & results but now lack a 3DTV. So I would like VR "goggles" mainly for 3D.
Yes, the AVP will have a better stereo camera. The iPhone 15 was an afterthought from Apple to get some buzz before the AVP drops this month.
Is there an option in Adobe to do the Stereo 3D Sync? 6:10
No, it is a Resolve-only thing, and that's why Resolve is the NLE Apple recommended.
@@hughhou thank you, your videos are awesome btw.
Keep doing the good work mate, great stuff…..maybe change the Xmas jumper 😂
Good suggestion! Xmas is over, but the kimchi on my chest is still good!
@@hughhou good excuse to get Kimchi clothing for all holidays and occasions
So maybe I’m misunderstanding a basic concept… I was under the impression that apple “spatial” video had a slight element of “real” depth (that you could look at someone in a video from a *slightly* different angle or get *slightly* closer) because of the depth sensor, contrary to “regular” 3D where you could never do that and can only see the angle recorded by the camera. Am I understanding incorrectly? I thought this video dealt with that but maybe it is my lack of understanding of the term…? Love your videos 👍🏼
Everyone thought that as well, but when it came out, it's just the same old 3D video with a more efficient codec. There is no LiDAR data embedded in the MV-HEVC file, and there is no low-resolution depth map like in Apple's new photo format. We would need a low-resolution depth map to do what you mention, but there is no depth map at all… just left and right images. And yes, it's disappointing.
@@hughhou So there’s nothing “special” about “spatial” 😖😂
Hi - 11'33" "There is no way for Davinci Resolve to export a full size side by side video". Would it not be possible to do this by creating a video collage (Resolve FX Transform in the Edit page)?
Sorry I don’t know what you mean.
@@hughhou Apologies for not being clear. I'm probably missing something obvious - I'll have a play around.
OK got it now - many thanks. It was a stupid question from me :)
Just walked through the tutorial with one of your provided files - superb work, many thanks for everything you do!
One question - I currently don't have an iPhone 15 Pro, but I do have a Canon R5C. Since the dual fisheye lens and the 15 pro cost about the same, is there any benefit whatsoever in getting the iPhone to create spatial videos, or am I better off getting the dual fisheye so that I can do everything in 4K per eye?
There is not a single tutorial that shows how to sync the left and right video when they are not aligned in time. If we shoot the left and right video with 2 independent cams,
it is essential to be able to sync the 2 videos in time. No tutorial explains how we can do this in DaVinci Resolve, or whether it's even possible.
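Until a proper tutorial exists, one rough pre-sync approach (a sketch with placeholder timestamps you read off each clip's audio waveform) is to trim both files to a shared sync event, like a clap or flash, before pairing them:
ffmpeg -ss 00:00:01.250 -i left_cam.mov -c copy left_synced.mov
ffmpeg -ss 00:00:00.000 -i right_cam.mov -c copy right_synced.mov
# Note: -ss before -i with -c copy snaps to the nearest keyframe; for a
# frame-accurate trim, drop "-c copy" and re-encode.
Resolve itself can also sync clips by audio waveform (e.g. when creating a multicam clip with audio as the sync method), which is usually the easier route.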
Thanks!
Thank you for posting this. I also have two other 3D cameras with different resolutions, so I'll adapt the scripts to split it at 960 instead of 1920. I originally thought I could do this using crops in Fusion, but I think FFMPEG is a great alternative.
Cropping in Fusion also works. We might have a Spatial Video prep script out soon to auto-crop it for you. After this video, the community suggested several workflow improvements; I will gather all that knowledge and make an updated tutorial. You can also upscale your 960 to 1920 and then remerge them in Fusion.
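A minimal sketch of that upscale step with FFmpeg, assuming a 960-wide eye and Lanczos scaling (Topaz or Resolve's Super Scale would be higher-quality options):
# Upscale one eye from 960 to 1920 wide; -2 keeps the aspect ratio and an even height:
ffmpeg -i eye_960.mov -vf "scale=1920:-2:flags=lanczos" -c:v prores_ks eye_1920.mov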
👍🏾
Thank you!
Thank you so much 🙏🏼
Dude, I am so frustrated. How the f-$;- did you upload the video to RUclips? The first problem is that people can't follow your terminal code on a Windows machine, so I had to figure it out and create a workflow so I can also use my Blender renders in the timeline.
Now that I am able to edit things in DaVinci, I have no idea what you are doing afterwards. Also, how the hell can I see these videos using my Vision Pro? I heard people saying the word Moonlight, but not even GPT-4o can make me understand what the deal is.
Come on, dude, I have been looking at your freaking tutorials left and right, and I don't see an answer.
The ONLY way to change depth on an already-made 3D video is to redo the footage. For example… The Croods has stronger depth than The Croods 2. To change The Croods 2, the studio would have to go back, re-calibrate the stereo camera, and re-shoot it. Another way would be to take all the shots into a program capable of generating disparity maps from a stereo pair and generate a new eye for each shot, but it's WAY faster just to re-shoot.
And it's bullshit that the software to convert this format to SBS isn't free 😡 fucking Apple and their proprietary bullshit.
Where do I find the folder "samplefor3deditingtutorial"?
It should be in the link.