Metashape accepts fisheye images, meaning you can create JPEGs from the original video of each lens rather than creating an equirectangular video first (and avoid all the re-compression artifacts).
Even better results can be had if you set the Insta 360 One to take still images every couple of seconds instead of video. Video codecs compress the image far more than JPEG does.
This translates into more detail in your 3D model and higher-resolution textures.
Let me know if you end up trying either of those suggestions.
Thanks for providing such valuable info, I will try your suggestion!
I just export my Insta 360 videos as MP4 360, then import into Agisoft (standard function), then choose spherical in the camera settings and away we go. No extra steps needed.
How can I export the images to use with other software, since I'm doing Gaussian splatting? I have a 360 camera.
@@Instant_Nerf - try Luma with the 360 images. But what works better is the photogrammetry capture workflow with a normal (good) camera. Online: Luma or Polycam. Locally on a PC: Postshot.
@oebleh I have used all of them. The best one is Scaniverse on iOS, as it does Gaussian splats in real time. However, I bought this 360 camera and I have seen a few people create Gaussian splats with one, and they look amazing. Luma hasn't updated 360 input to Gaussian, only NeRF, and it's not that great.
Great information in this video. It would be awesome if you could do a 3D tour of an interior home or building.
Great Video!! Thanks for sharing!!
How did it handle YOU being in the footage? Did you pre-mask it, or did it just figure it out?
Excellent tips and a great tutorial.
Fantastic tutorial, Thank you !
why not take a time lapse as you walk?
Or FFmpeg to set the timing for the output: ffmpeg -hide_banner -i INPUT.mp4 -vf "setpts=4.3333*PTS" -an -y OUTPUT-4.3333.mp4
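For context on that command: `setpts=k*PTS` multiplies every presentation timestamp by k, so the output plays k times longer than the input. A minimal sketch (Python; the durations are hypothetical examples, not taken from the video) of how one might derive a factor like that 4.3333:

```python
# setpts=k*PTS scales presentation timestamps, so output duration = k * input.
# Hypothetical helper: choose k so a clip of src_seconds plays for dst_seconds.
def setpts_factor(src_seconds: float, dst_seconds: float) -> float:
    return dst_seconds / src_seconds

# e.g. stretching a 60 s clip to 260 s gives the 4.3333 used above
k = setpts_factor(60.0, 260.0)
print(f'-vf "setpts={k:.4f}*PTS"')  # -vf "setpts=4.3333*PTS"
```

The same factor works in reverse (values below 1 speed the clip up), which is handy when tuning how densely a later frame export samples the walk.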
I bet the results would be appreciably better with the X4!
Thank you very much for your effort. I would like to ask:
how did you manage to extract the frames without motion blur?
Have you tried to make Gaussian splats? I keep failing. I'm going to try photogrammetry now, but I'd really love to get some splats from 360 video. I'm using the Insta 360 RS ONE.
A great tutorial 😮 thanks, we're going to try it 🤞
Thank you for this great video. Have you ever tried it indoors? Could it be a solution for capturing a floor plan?
Excellent explanation and a very good comparison of different ways to generate a photogrammetric model with 360 cameras. Question: which program do you use to make the video of the fly-through of the generated model?
Very interesting, thanks!
I would love a tutorial combining this with drone images to get the roof data.
This is a fantastic workflow! Would it be possible to use it for scanning a 2 km road?
Hello Jérôme!
I'm a big fan of your videos, they're incredibly informative and inspiring! I'd love to hear your thoughts on photogrammetry for indoor spaces of buildings. What equipment and software would you recommend for this specific use?
Keep up the great work!
I agree with everything you say, @hamdisferhat6881, and would like to join in with my interest in indoor Photogrammetry, @Jérôme!
Awesome video, thank you so much! I always loved the equirectangular results that MetaShape could generate so easily! But splitting the images with Meshlab is new to me; I will try that. I wonder: how would you use Meshlab with those 4 split images? That would be a great follow-up video!
how do you compensate for potential motion blur, i.e. is there a specific walking speed that provides optimal results?
super video..
What are the specs of the computer you used?
Amazing video!! But I have one problem: how do you export the video to JPG stills? I can't see that option. That would save a ton of time in the Metashape process.
Import video... creating the images from it is a standard function in Metashape.
Did you use the free version of Metashape? Is the free one enough to, for example, walk around and through a large hill with a race track on it? I would like to recreate one track in 3D and I'm looking for a cheaper alternative to getting a DJI Mini 4 Pro.
What type of subscription do you have for Metashape? Is the Standard edition sufficient for that type of use?
Yes...and see my other comments. 🙂
Amazing, I need to get a 360 camera. It looks like getting a drone was not the optimal first choice.
A drone unlocks amazing viewpoints though, and it's super fun to fly! I'd say it's a very good thing to have one.
NICE
Nice video. Is this also possible using Zephyr?
Yes, you could process the images in Zephyr.
You can't use sequential with the split images because the other three angles always appear right after the first -- but what if you added a sequence with only one of the splits at a time? Then it would truly be sequential
What is your 360 cam?
Hi, thank you! Which camera model, please?
Insta 360 One X3
Are you able to help me convert a video to VR? I am willing to pay.
I followed the tutorial but I still get no model, or some cameras don't work.
Will this be better with the Insta360 Pro 2? Can I combine them, so I get low-angle shots from the 360 2D camera together with the 360 3D camera at head height?
The software doesn't really care which camera an image comes from, as long as it can find the new position, so I think this will be possible.
You could even mix and match multiple cameras: high-quality pictures of the walls and ground, and lower-quality drone footage for building shapes and such.
@@tarakivu8861 Great thanks! :)
I just export my Insta 360 videos as MP4 360, then import into Agisoft (standard function), then choose spherical in the camera settings and away we go. No extra steps needed.
did not work for me ;(
Make splats instead of mesh?
Upload stills or video to Luma or Polycam.
How do you export the video to JPG?
Import the video in Metashape. When creating the model, choose spherical as the camera type.
What version of DaVinci are you using? I can't seem to get the custom export of 5760x2880 in 18.6.
I had the same problem with DaVinci standard. You can use Blender to export PNG images.
@@hamdisferhat6881 I ended up running FFmpeg and got the image files that way. There's not a lot of info out there (that I have found) that goes from point A to point B.
@@hamdisferhat6881 I can do this in Blender, but how do you subsample so you don't have an image for every frame, but rather get an image every 0.5 seconds or so?
Can we have a guide for Blender?
Hello @@avigold6632. In Blender, we can adjust the speed by frame count by selecting the timeline and adding the speed effect.
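If Blender feels heavy for this, FFmpeg's `fps` filter can do the subsampling directly: fps = 1 / interval, so one image every 0.5 s means fps=2. A sketch in Python that just assembles the command; the file names are hypothetical examples, and it assumes ffmpeg is installed if you actually run it:

```python
# Build an ffmpeg command that extracts one JPEG every `interval_s` seconds.
# fps = 1 / interval, e.g. 0.5 s -> fps=2. File names below are examples.
def frame_extract_cmd(src, interval_s, pattern="frame_%05d.jpg"):
    fps = 1.0 / interval_s
    return ["ffmpeg", "-hide_banner", "-i", src,
            "-vf", f"fps={fps:g}",   # keep only `fps` frames per second
            "-qscale:v", "2",        # high JPEG quality
            pattern]

cmd = frame_extract_cmd("walk360.mp4", 0.5)
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

This keeps the export at full resolution and skips the NLE round-trip entirely, which also avoids one generation of re-compression.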
Can anyone here help me? I have waited for 6 months.
Thank you for this video.
Here is the first result made using your method: ruclips.net/video/YYTjipAwMVk/видео.html
A small clarification: the initial video was not made for this purpose; I used it because it seemed to me a difficult case to handle, and therefore revealing of both the limitations and the advantages.
I will take the time to test it indoors, in the city...
Thanks again.