How to Use 360 Video for 3D Gaussian Splatting (and NeRFs!)

  • Published: 24 Jan 2025
  • Science

Comments • 156

  • @marcomoscoso7402
    @marcomoscoso7402 1 year ago +4

    Looks so straightforward. I wonder what this technology will look like in 5 years.

    • @c0nsumption
      @c0nsumption 1 year ago +2

      Can’t help but think that gaming and sims are going to change dramatically 🤔
      Like is this the future of memory? 🤷🏽‍♂️

    • @marcomoscoso7402
      @marcomoscoso7402 1 year ago

      @@c0nsumption there are already Unreal Engine implementations of this technology. I think this is the future of games

    • @c0nsumption
      @c0nsumption 1 year ago

      @@marcomoscoso7402 dang, been switching over to Unreal since about a year ago cause I had iffy feelings after being a Unity dev for 10 years. Thank God I did. I gotta try em out. Researching over the weekend

  • @Thats_Cool_Jack
    @Thats_Cool_Jack 1 year ago +11

    I find Meshroom's image outputs from 360 video very limiting: it only goes along the middle of the frame, so you miss out on up-close things and only capture what's on the horizon. My solution was to put the video on an inverted sphere in Blender, with some cameras (12 of them) facing outwards from the center at varying angles, and then create a bunch of camera markers (Ctrl B) that switch between all the cameras every frame (see the Blender sketch after this thread). I found I got way better results doing this, especially because I have a lower-end 360 camera that's only 5K res. Hope this helps someone

    • @Thats_Cool_Jack
      @Thats_Cool_Jack 1 year ago +1

      You want to avoid a high FOV to minimize distorted edges, which tend to be useless in photogrammetry

    • @thenerfguru
      @thenerfguru  1 year ago

      @@Thats_Cool_Jack interesting. I have almost zero experience with Blender. What is your experience with your method being trained into a NeRF or used for photogrammetry output?

    • @Thats_Cool_Jack
      @Thats_Cool_Jack 1 year ago +1

      @@thenerfguru it works really well. The images are the same quality as they would be with the Meshroom method, but you can choose the angles the cameras are looking at. When I record the 360 video I sway the camera on the end of a camera stick back and forth while walking to create as much parallax as possible, which gets the best depth information, but can be somewhat blurry in low-light situations. I've done both NeRF and photogrammetry. I made a VRChat world of a graffiti alleyway using this method.

    • @jiennyteng
      @jiennyteng 1 year ago

      Thanks for your awesome try. Could you share more details on how to import a 360 video into Blender and output multi-view perspective images?

    • @LukeMor
      @LukeMor 6 months ago

      @@Thats_Cool_Jack would it be possible to get your Blender file? Thank you :)
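
For anyone who wants to script @Thats_Cool_Jack's setup, here is a minimal Blender Python sketch of an inverted, video-textured sphere with 12 outward-facing cameras and per-frame camera markers. The camera angles, sphere size, and file path are assumptions, not details from the comment.

    # Hedged sketch of the inverted-sphere approach described above.
    import math
    import bpy

    VIDEO = "/path/to/equirect_360.mp4"  # placeholder path

    scene = bpy.context.scene

    # Sphere that will carry the equirectangular video on its inside.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=10)
    sphere = bpy.context.object
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.flip_normals()  # normals inward so the texture faces the cameras
    bpy.ops.object.mode_set(mode='OBJECT')

    mat = bpy.data.materials.new("Pano")
    mat.use_nodes = True
    tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load(VIDEO)  # an .mp4 loads as a movie texture
    tex.image_user.frame_duration = scene.frame_end
    tex.image_user.use_auto_refresh = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
    sphere.data.materials.append(mat)

    # 12 cameras at the center, pointing outward at varying yaw/pitch.
    cams = []
    for i in range(12):
        yaw = 2 * math.pi * i / 12
        pitch = math.radians(20 if i % 2 else -20)  # alternate above/below horizon
        bpy.ops.object.camera_add(location=(0, 0, 0),
                                  rotation=(math.pi / 2 + pitch, 0, yaw))
        cams.append(bpy.context.object)

    # One timeline marker per frame, cycling through the cameras
    # (the scripted version of binding markers with Ctrl B).
    for f in range(scene.frame_start, scene.frame_end + 1):
        marker = scene.timeline_markers.new("F%d" % f, frame=f)
        marker.camera = cams[f % 12]

Rendering the animation then writes one perspective view per frame, cycling through the 12 directions.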

  • @360_SA
    @360_SA 1 year ago +2

    Thank you, much-needed video

    • @thenerfguru
      @thenerfguru  1 year ago +1

      I was rushing this one out for you! I could use tips on how to get better 360 video footage. I have the two cameras shown at the start of the video: an Insta360 Pro II and a RS One 1-Inch

  • @secondfavorite
    @secondfavorite 1 year ago +3

    Thanks a bunch! This is what I needed. I have an Insta360 but didn't know where to start.

    • @thenerfguru
      @thenerfguru  1 year ago

      Great! Let me know if you run into any roadblocks.

  • @jimj2683
    @jimj2683 3 months ago +1

    There is so much depth information with the parallax effect and lighting/shadows.

  • @brettcameratraveler
    @brettcameratraveler 1 year ago +3

    When it comes to NeRFs and GS, can you foresee any advantage to shooting with that larger 360 camera in 3D mode? I have the Canon dual-fisheye 3D 180 8K video camera and I'm hoping to take advantage of it in new, unintended ways, but it seems like stereoscopic wouldn't help for this purpose since you could just take more pictures with a single lens, no?

    • @Thats_Cool_Jack
      @Thats_Cool_Jack 1 year ago +1

      It can help but what helps the most is constantly moving the camera. I have my camera at the end of a stick and I sway it back and forth as I walk to create the most parallax

  • @John-b3k7c
    @John-b3k7c 1 year ago +2

    Thanks for your detailed and professional video. We followed your steps and can indeed get Gaussian splatting results. However, when the 6K panoramic video (200 Mbps bitrate, H.265) shot on the Insta360 RS One is converted into 360 images by ffmpeg and then into perspective images by AliceVision, the images are not very clear. Could you please give us some guidance on how to improve the clarity of the picture?

    • @RobertWildling
      @RobertWildling 1 year ago

      Having the same problem. But it seems to be the Insta360 RS One that simply does not deliver good image quality.

  • @bradmoore3778
    @bradmoore3778 1 year ago +2

    Really great! If you mounted three cameras on one post at different heights, could you combine the three videos for a better result? Or does the source have to come from the same device, moving that one device to different heights? Thanks

    • @thenerfguru
      @thenerfguru  1 year ago

      That could work. However, I would want all 3 cameras to be the same camera model.

  • @DanyDinho91
    @DanyDinho91 1 year ago +1

    Hi, thanks for these tutorials. Is it possible to export the point clouds or a 3D model from these results? Thanks

  • @mcmulla2
    @mcmulla2 1 year ago +6

    Perfect! I've been messing with this thanks to your Splatting tutorial, excited to mess with some 360 footage I captured too!

    • @thenerfguru
      @thenerfguru  1 year ago

      Awesome! Follow me on social. If you share anything you come up with, tag me and I’ll repost it.

  • @frankricardocarrillo1094
    @frankricardocarrillo1094 1 year ago +2

    Hello Jonathan, I already have a NeRF and a Gaussian Splatting of the same scene, and I would like to make a video comparison to show how much better the GS is. Any recommendations on how to do it?
    Thanks

    • @thenerfguru
      @thenerfguru  1 year ago +1

      You bet! You can either manually resize all of your photos ahead of time, or when you prep the images it should make half-, quarter-, and eighth-scale versions (a resize sketch follows this thread).
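
A minimal sketch of the "resize ahead of time" option using Pillow: write half-, quarter-, and eighth-scale copies of every input image. The folder names mirror the images_2/4/8 convention common in splatting pipelines, but both they and the input folder are assumptions here.

    from pathlib import Path
    from PIL import Image

    src = Path("input")                      # hypothetical folder of .jpg frames
    for scale in (2, 4, 8):
        out = Path("images_%d" % scale)      # mirrors the images_2/4/8 convention
        out.mkdir(exist_ok=True)
        for p in sorted(src.glob("*.jpg")):
            im = Image.open(p)
            im.resize((im.width // scale, im.height // scale),
                      Image.LANCZOS).save(out / p.name)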

  • @benbork9835
    @benbork9835 1 year ago +2

    Wow, this is epic! One thing I did not quite understand: for this training data, did you record the alleyway only once, or multiple times walking different paths as you told us to do?

    • @thenerfguru
      @thenerfguru  1 year ago

      This was a single walk through. You can see that I didn't have the best movement freedom in the end. Unless I stuck to the single trajectory, the result falls apart fast.

  • @JustThomas1
    @JustThomas1 1 year ago +4

    Regarding the conversion of the equirectangular images to cubemaps - I'm afraid I don't understand the need for this.
    My experience with COLMAP is intermediate, but I typically experienced fewer camera pose misalignment issues when I didn't perform any operations on the input images. Not to mention the extreme slowdown in bundle adjustment & block matching when you start having tons of image tiles.
    Does Insta360 Studio not allow you to export the raw video from each camera independently? Or are you performing this workflow for some other reason?
    Additionally, I'd love to hear why you're using Meshroom for the cubemaps instead of something like 'ffmpeg -i input_equirectangular.mp4 -vf "v360=e:c6x1" cubemap_output.mp4'

    • @thenerfguru
      @thenerfguru  1 year ago +3

      Great questions:
      1. I cannot export raw images from each lens. I use that workflow with my Insta360 Pro II, but I still drop a lot of the extremely warped sections of the images.
      2. As far as FFMPEG, that shows how far behind on updates I've been with this software! After a few comments, I have written a Python script to extract 8 images and added some additional controls for optimization (a sketch of that kind of extraction follows this thread).
      3. For getting 8 cubemapped images, I'm going off of what I have tested in the past and what works best. Using just the front, back, left, right, up, and down images does not yield a great result.

    • @JustThomas1
      @JustThomas1 1 year ago

      @@thenerfguru Thank you very much for the clarification.

    • @jtogle
      @jtogle 1 year ago +1

      @@thenerfguru I have an Insta360 Pro II also and would like to try your workflow out! Other than dealing with a bowling ball on a pole overhead, does the workflow for a Pro II differ from this video?

    • @panonesia
      @panonesia 11 months ago

      @@thenerfguru if you don't mind, can you share your Python script to extract 8 images please?
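
Since several replies ask about the 8-image extraction, here is a rough Python sketch of pulling N perspective views out of an equirectangular frame with OpenCV. This is not the author's actual script; the FOV, pitch handling, yaw spacing, output size, and filenames are assumptions, and seam wrap-around handling is omitted for brevity.

    import cv2
    import numpy as np

    def perspective_view(equirect, yaw_deg, pitch_deg=0.0, fov_deg=90.0, size=1200):
        """Resample one pinhole view out of an equirectangular image."""
        h, w = equirect.shape[:2]
        f = (size / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
        # Ray directions for every output pixel (camera looks along +z).
        u, v = np.meshgrid(np.arange(size) - size / 2 + 0.5,
                           np.arange(size) - size / 2 + 0.5)
        dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
        # Rotate the rays: yaw around the vertical axis, then pitch.
        yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
        ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                       [0, 1, 0],
                       [-np.sin(yaw), 0, np.cos(yaw)]])
        rx = np.array([[1, 0, 0],
                       [0, np.cos(pitch), -np.sin(pitch)],
                       [0, np.sin(pitch), np.cos(pitch)]])
        dirs = dirs @ (ry @ rx).T
        # Ray direction -> longitude/latitude -> source pixel coordinates.
        lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
        lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
        map_x = ((lon / np.pi + 1) / 2 * w).astype(np.float32)
        map_y = ((lat / (np.pi / 2) + 1) / 2 * h).astype(np.float32)
        return cv2.remap(equirect, map_x, map_y, cv2.INTER_LINEAR)

    frame = cv2.imread("frame_0001.jpg")           # hypothetical frame name
    for i, yaw in enumerate(range(0, 360, 45)):    # 8 views, 45 degrees apart
        cv2.imwrite("view_%d.jpg" % i, perspective_view(frame, yaw))

A nonzero pitch_deg gives the tilted views asked about elsewhere in the comments (looking partly up or down instead of only along the horizon).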

  • @caedicoes
    @caedicoes 1 year ago +3

    This is such an important video you can't even imagine!

  • @narendramall85
    @narendramall85 1 year ago +2

    How can I export the 3D environment to a .glb or other file format?

    • @thenerfguru
      @thenerfguru  1 year ago

      Not possible with this current project. However, this workflow will get you okay results with software like Reality Capture or Object Capture.

  • @mankit.mp4
    @mankit.mp4 1 year ago +1

    Amazing tutorial, thanks! I don't have a 360 camera, but I do have a full-frame camera and a fisheye lens. How would this workflow compare if I took 4K video with the fisheye and walked back and forth multiple times at different heights?

    • @thenerfguru
      @thenerfguru  1 year ago +1

      Walking back and forth works. Just make sure you don't make any sharp rotations with the camera.

  • @loganliu1573
    @loganliu1573 1 year ago

    Sorry, I am using machine-translated English; I hope it is understandable.
    Thank you very much for your video, I have learned a lot.
    A small question: I created a .ply model from a video I filmed and found that the ground was missing, so I filmed another video of the ground and created a second .ply model.
    How can I merge these two .ply models into a complete one? If I can merge them, I can shoot more videos in segments and make a scene complete, without dead corners.

  • @pixxelpusher
    @pixxelpusher 1 year ago +2

    Asked on a previous video, but wondering if you'd know how to view these in VR?

    • @thenerfguru
      @thenerfguru  1 year ago +1

      My next video will be how to view these in Unity. I'm no Unity expert, but I think you can do it in there.

    • @pixxelpusher
      @pixxelpusher 1 year ago

      @@thenerfguru Sounds great, look forward to it.

  • @Photonees
    @Photonees 1 year ago +2

    Awesome, definitely gonna try and play with it. But how do you get the sky/ceiling rendered, since you said you didn't include the top? Also wondering how you can remove yourself when you use a 360 camera. I wonder if this would work with a fisheye and 4K video; then you are always out of the image and can get very high-res video or stills on my Canon R5 with a fisheye. Any idea what command you would need then?

  • @deniaq1843
    @deniaq1843 4 months ago

    Dear Jonathan, I have a question. When you cut the pano images into cube maps with Meshroom, they come out 1200 by 1200 with your line of code. Is there a formula to calculate the maximum useful size for a given input pano? I will be able to use an 8K 360 camera soon, for example, and I wonder what the ideal cube map size would be for that input material. Do you have any idea how to calculate this, or is it simply trial and error? Thanks :-) (A rule of thumb follows this comment.)
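
A rule of thumb for the question above (a back-of-the-envelope estimate, not something stated in the video): a 90-degree cube face spans a quarter of the equirectangular image's width at the equator, so there is little real detail to gain beyond face size ≈ W / 4. For an 8K pano (W ≈ 7680 px) that suggests faces of roughly 1920 × 1920; the 1200 px faces used here sit comfortably under that bound for a 6K source.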

  • @chithanhle3404
    @chithanhle3404 3 months ago

    Hi, thank you for your awesome work. Have you ever tried using 360 video of an indoor environment for Gaussian splatting, and is the output quality OK?

  • @GooseMcdonald
    @GooseMcdonald 1 year ago +3

    Do you know a way to use a point cloud, i.e., some Leica scans, and use that point cloud to 3D Gaussian Splatting?

    • @thenerfguru
      @thenerfguru  1 year ago

      You need source images. I am not the most well versed in Leica solutions. Do you get both a point cloud and images from a scan station?

    • @RelicRenditions
      @RelicRenditions 1 year ago

      I know that for the Leica BLK2GO, it captures both the LiDAR scans and the 360 panorama stills as you go. In the Leica SW, you can export the images every x feet that you want. The devices use both the laser and the RGB sensor to do SLAM as you move.

  • @EconaelGaming
    @EconaelGaming 1 year ago +1

    Why do you split the images with Meshroom? Can't COLMAP deal with fisheye lenses?

    • @thenerfguru
      @thenerfguru  1 year ago

      That's a good question. Give it a shot. I bet you'll have a fun time with COLMAP 🙃. Also, I'm not sure how to export native fisheye images from this camera. I can do it with my Insta360 Pro II, but I still prefer using my own dewarp calibration.

  • @deniaq1843
    @deniaq1843 9 months ago

    Thanks for your time and effort. I want to try it out myself soon. Was the whole process in real time? I especially mean the creation of the 3D Gaussian file; I just wonder how fast this can be. Thanks so far and best wishes :)

  • @choiceillusion
    @choiceillusion 1 year ago +1

    Very cool. I'm headed to a cabin on top of a mountain and I'm going to do some loops with a drone in an attempt to turn it into some sort of radiance field. Thank you for this tutorial.

    • @thenerfguru
      @thenerfguru  1 year ago

      Loops are amazing for this technology!

  • @kyle.deisgn4626
    @kyle.deisgn4626 10 months ago

    Hi, I went through the convert.py process, but 'Mapper failed with code' showed up after hours of processing. 😢

  • @christianfeldmannofficial
    @christianfeldmannofficial 4 months ago

    Hello, do you think it would be possible to create a complete race track and then map it using Blender etc.?

  • @animax-yz
    @animax-yz 10 months ago +1

    What would be the result if you didn't move back or turn around while capturing the video? I tried to create a NeRF after capturing a video inside a room, moving from one end to the other, but it didn't work out. Why is that happening?

    • @thenerfguru
      @thenerfguru  10 months ago

      Was it a 360 camera? Rooms can be tough if the walls are bare. You end up with cubemapped images without unique features.

  • @JWPanimation
    @JWPanimation 7 months ago

    Thanks for posting! Would it have been better to shoot stills every 5 ft or 2 m with the 360 1-inch? As per your suggestion, would a higher-elevation pass walking one way and then a lower-elevation pass going back the other way be ideal?

  • @KeyPointProductionsVA
    @KeyPointProductionsVA 1 year ago +2

    I'm still having issues just getting my computer to run Python and such so I can start making NeRFs. But I have a drone with 360 camera attachments that I would love to start using for this

    • @thenerfguru
      @thenerfguru  1 year ago

      What’s happening with Python? Is it not added to your path?

    • @KeyPointProductionsVA
      @KeyPointProductionsVA 1 year ago

      @@thenerfguru I am not sure why it wasn't working on my C: drive, as that is where my OS is, but I put it on an old OS drive and now Python is working just fine. Technology, it's weird sometimes 😆

  • @alvydasjokubauskas2587
    @alvydasjokubauskas2587 1 year ago +1

    How can you remove your head or body from all this?

    • @Thats_Cool_Jack
      @Thats_Cool_Jack 1 year ago +2

      When recording the video, always keep your body at the end of the camera stick, and turn off horizon stabilization

  • @joselondono
    @joselondono 7 months ago

    Is there an option within the AliceVision command to also include the upward view?

  • @kawishraj3558
    @kawishraj3558 4 months ago

    Is it possible for you to share the 360 video for practice? I haven't been able to find good 360 videos to try Gaussian splatting on. I have tried it successfully on a lot of 2D videos but just can't seem to find a good 360 one. Thanks for the beginner guide, it was really helpful

  • @LuisGustavoJulio
    @LuisGustavoJulio 2 months ago

    Can I use this with three.js?

  • @hyunjincho5972
    @hyunjincho5972 10 months ago +1

    Can I know the GPU spec you used to build the Gaussian splatting model? Thanks

    • @thenerfguru
      @thenerfguru  10 months ago

      I am using an RTX 3090ti

  • @AD34534
    @AD34534 1 year ago +1

    This is freaking amazing!

  • @roscho-dev
    @roscho-dev 1 year ago

    Just found you on YouTube after following you on LinkedIn for a while now! Great stuff! One question: do the scans have correct real-world measurements? For example, could I measure a scanned kitchen counter and have it be accurate?

  • @spaceghostcqc2137
    @spaceghostcqc2137 1 year ago +1

    Can you multicam NeRFs and splats?

    • @thenerfguru
      @thenerfguru  1 year ago

      Do you mean record with multiple cameras at once? Could be achieved if all of the cameras were the same model/lens

    • @spaceghostcqc2137
      @spaceghostcqc2137 1 year ago

      @@thenerfguru Thank you. I'm picturing two 360 cameras: perhaps one on a stick for sweeping around and one on a pole sticking up from a backpack? Or two at different heights on a walking stick. Do you have any guesses as to how two Insta360 X3s used like that would do vs a single RS ONE 360 edition? Also imagining a frame holding 3 of them for quick one-pass scanning of cooperative humans.

  • @S41L0R
    @S41L0R 1 year ago +2

    How long did this take to convert and train for you?

    • @thenerfguru
      @thenerfguru  1 year ago

      It really depends. Convert usually takes around 5-20 minutes depending on the scene; it could take longer with a lot of images. Train takes 30-45 minutes.

    • @S41L0R
      @S41L0R 1 year ago

      @@thenerfguru Hm, that's weird. Small videos I've done have taken hours and hours just to convert. Maybe I missed this in your tutorial video, but do I need to capture at a lower res?

    • @thenerfguru
      @thenerfguru  1 year ago

      Perhaps. Maybe fewer total images in the end. Set the fps to something like 1 or 0.5 (see the frame-extraction sketch after this thread).

    • @S41L0R
      @S41L0R 1 year ago

      @@thenerfguru OHH ok, I've always done 30 fps
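
A minimal sketch of the low-fps frame extraction suggested above, using OpenCV to keep roughly one frame per second instead of all 30. The filename, target rate, and output folder are assumptions.

    import os
    import cv2

    cap = cv2.VideoCapture("walkthrough_360.mp4")   # hypothetical filename
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    target_fps = 1.0                                # 0.5-2 is usually plenty
    step = max(1, round(video_fps / target_fps))    # keep every Nth frame

    os.makedirs("input", exist_ok=True)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite("input/frame_%04d.jpg" % saved, frame)
            saved += 1
        idx += 1
    cap.release()

Fewer input frames shrinks both COLMAP matching time and training time, which is why converting at 30 fps took hours.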

  • @Aero3D
    @Aero3D 11 months ago

    Buying one of these today just for GS generation! I am super excited to try this out!!!

  • @briancunning423
    @briancunning423 1 year ago +1

    Would this work using Google Street View 360 images?

    • @thenerfguru
      @thenerfguru  1 year ago

      I have not tried it. Can you get a clean image extract from Google?

    • @briancunning423
      @briancunning423 1 year ago

      Yes, there is a way to download and view them. I took 1080 x 1920 stills and fed them into photogrammetry software, but the result was a sphere with the image projected onto it.

  • @Instant_Nerf
    @Instant_Nerf 1 year ago

    It seems like the algorithm has a hard time with more data. Normally you go around something and have a small area to look at with NeRF or Gaussian splatting. But how would you combine data for a larger scene? You go around something, then expand by getting more footage and try to combine all the data so you have more to look at, or just create a larger scene. It seems to have problems with that. Any thoughts?

  • @vassilisseferidis
    @vassilisseferidis 1 year ago

    Great video Jonathan. Thank you. Have you tried any footage with the Insta360 Pro to compare the results with the One RS 1-inch?

    • @thenerfguru
      @thenerfguru  1 year ago

      I would like that too! Do you mean the X3? If only Insta360 could send me a loaner :)

    • @vassilisseferidis
      @vassilisseferidis 1 year ago

      Hi Jonathan,
      When I follow your workflow, the quality of the generated Gaussian splatting looks good only if you follow exactly the same path as the original (recording) camera.
      In your video you show the 6-camera Insta360 Pro model. Have you tried creating a Gaussian Splatting with that camera? I'd expect the higher resolution to produce better results(?).
      Keep up your excellent work.

  • @sashachechelnitsky1194
    @sashachechelnitsky1194 1 year ago +1

    @thenerfguru I wonder if, using this method, you can create stereoscopic 3D Gaussian splatting from a VR180 camera? I have footage I can provide for testing purposes

    • @thenerfguru
      @thenerfguru  1 year ago

      Interesting. My next video will be how to display this all in Unity. I bet it can be accomplished in there.

    • @sashachechelnitsky1194
      @sashachechelnitsky1194 1 year ago

      @@thenerfguru Rad! I'll be on the lookout for that video. Keep crushing it man

  • @tribaltheadventurer
    @tribaltheadventurer 1 year ago

    Is anyone getting a "this app can't run on your PC, check software publisher" error, even though this has worked before?

  • @darrendavid9758
    @darrendavid9758 6 months ago

    Awesome, exactly what I was looking for! I do want my reconstruction to have the views looking up and down though, not just 360 horizontally. Is there a way to extract that data from a spherical 360 video?

  • @hangdu4417
    @hangdu4417 1 year ago

    Can I measure relative widths in the Gaussian splatting result? Which software do you suggest? Thank you!

  • @wrillywonka1320
    @wrillywonka1320 1 year ago

    So after we get a Gaussian splat, where can we even use it? No Adobe programs can run them, DaVinci can't, Blender does it very poorly, UE5 costs $100, and I think maybe Unity is the only program that can use a Gaussian splat. They are awesome, but it's like having 8K video when YouTube only plays 1080. Where can I actually use these splats to make a cool video?

  • @JaanTalvet
    @JaanTalvet 1 year ago

    You mentioned around 15 min that you could have gone back over the scene again. Would that significantly increase processing time, but also significantly improve image quality (removing floaters, blur, etc.)?

    • @thenerfguru
      @thenerfguru  1 year ago +1

      It probably wouldn't make training time too much longer. However, it would reduce floaters and bad views. You basically would have a greater degree of freedom.

  • @martondemeter4203
    @martondemeter4203 1 year ago

    Hi!
    What are the exact convert.py parameters you run for a 360 vid?
    I tried with mine. I shoot with an Insta360 X3: good, slow recording, 4K equirects. I do exactly as you show, and COLMAP only finds 3-6 images... :S (see the note after this thread)

    • @thenerfguru
      @thenerfguru  1 year ago

      Do you have plenty of parallax in the scene? If all of the objects are far away and there isn't enough parallax, this can happen.
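
For reference, the stock convert.py in the graphdeco-inria/gaussian-splatting repo is invoked as 'python convert.py -s <project_dir>' (optionally with '--resize'), with the extracted frames placed in <project_dir>/input. The exact parameters used in the video are not shown in this thread, so treat that as the default invocation rather than the author's settings.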

  • @panonesia
    @panonesia 11 months ago

    Can you set a custom FOV? I'd like to add more of the top part to the exported frame

    • @thenerfguru
      @thenerfguru  11 months ago

      Maybe. I have not looked into the Python scripts provided by Meshroom. However, you may be able to modify them.

  • @27klickslegend
    @27klickslegend 1 year ago

    Hi, do I need GPS data in my photos for this? The QooCam 3 can only do this by pairing with my phone

  • @CristianSanz520
    @CristianSanz520 1 year ago +1

    Is it possible to extract a point cloud?

    • @thenerfguru
      @thenerfguru  1 year ago +1

      Not currently. I wouldn't be surprised if a new project comes out where geometry is exportable. I've seen a paper on it and demo code, but it's not usable today.

  • @hasszhao
    @hasszhao 1 year ago +1

    What kind of camera?

    • @thenerfguru
      @thenerfguru  1 year ago

      In this video I used an Insta360 One RS 1-Inch Edition.

    • @hasszhao
      @hasszhao 1 year ago

      @@thenerfguru thanks dude

    • @hasszhao
      @hasszhao 1 year ago

      @@thenerfguru Hey, I got the same device and wanted to try reproducing what you did, but I could only generate an almost single-frame result after rendering, even though aliceVision_utils_split360Images produced a lot of "subimages". I checked the resulting "output" directory, and actually only a few images were used.
      Do you have any idea what my problem might be?

  • @XiaoyuXue-xw9wf
    @XiaoyuXue-xw9wf 1 year ago

    What's the camera name?

  • @RelicRenditions
    @RelicRenditions 1 year ago +5

    Such a great video. Thank you. I have been doing historic site and relic capture for a while now using photogrammetry and different NeRF solutions like Luma AI. I am excited to get started with Gaussian Splatting because: 1. it should render a lot faster for my clients, 2. it may look better, and 3. it honestly seems easier to set up than many of the cutting-edge NeRF frameworks I've been experimenting with that require Linux. Much of my workflow involves Windows because I also do a lot of Insta360 captures, Omniverse, etc. This is great stuff!

  • @mattizzle81
    @mattizzle81 1 year ago +1

    Insane idea. I was thinking of using iPhone LiDAR to capture point clouds, but that has a limited field of view and hence more waving the camera around.
    Capturing in 360 could be much more efficient.

  • @benjaminwoite6136
    @benjaminwoite6136 1 year ago

    Can't wait to see how you make a Gaussian Splatting scene from Insta360 Pro footage.

  • @Aero3D
    @Aero3D 11 months ago

    Ok, so I bought one and tried this, and my resulting GS seemed to be just a single frame, a tiny section of the total recorded space. Any ideas why this may happen? I might be doing something wrong; it's my first attempt ever.
    I have all my 360 frames; I split them with ffmpeg. I see all the split frames and put them into the "input" folder of my COLMAP root. But after it's done, I see only 3 in the COLMAP "images" folder, and that is the spot I see in my GS. It only processed 3 of the 4600 images

    • @thenerfguru
      @thenerfguru  11 months ago

      Are you attempting to work with the equirectangular images or splitting them with Meshroom?

    • @Aero3D
      @Aero3D 11 months ago

      @@thenerfguru splitting them with Meshroom

    • @Aero3D
      @Aero3D 11 months ago

      I tried with an all-new dataset and got the same result. I must be missing something

    • @方川-g8z
      @方川-g8z 5 months ago

      Have you solved your problem?

    • @Aero3D
      @Aero3D 5 months ago

      @user-kd2uw1oy1d The entire splat needs to be handled within your VRAM; that was the issue. I bought an XGRIDS K1 scanner, boom, problem solved, insane quality

  • @felixgeen6543
    @felixgeen6543 1 year ago

    Does anyone know how to use equirectangular images without breaking them into separate FOVs? That would seem to be the best use of the data.

    • @thenerfguru
      @thenerfguru  1 year ago

      Perhaps your best bet is to try Nerfstudio's 360 image supported training. Then, convert it to 3D Gaussian Splatting format. I don't have a tutorial for this, though (a command sketch follows this thread).
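
If you want to try the Nerfstudio route, its data processor accepts equirectangular input directly; per the Nerfstudio documentation (the exact version and flags are an assumption here, not something shown in the video), a command along the lines of 'ns-process-data images --camera-type equirectangular --images-per-equirect 8 --data <dir> --output-dir <out>' splits the panos into perspective views before training.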

  • @Povilaz
    @Povilaz 1 year ago +1

    Very interesting!

  • @lolo2k
    @lolo2k 1 year ago +1

    I have worked with the Insta360 RS One-Inch, and it is not worth the price tag! The bigger sensor is great for low light and higher dynamic range, but this model has a few drawbacks: 1. the high price, 2. the flare seen in this video. I suggest buying a QooCam 3 at a much lower price with better specs. It just released and will be on shelves soon.

    • @thenerfguru
      @thenerfguru  1 year ago

      That sun flare issue is terrible!

    • @RelicRenditions
      @RelicRenditions 1 year ago

      I have been using the Insta360 RS One-Inch, the Insta360 X3, and the iPhone 13 Pro. All three have their place in captures. The higher resolution and larger sensor of the One-Inch are great, but I really find the in-camera, realtime HDR video of the X3 helpful in outdoor scenes. If you can keep your subject in front of you as you orbit, even an older iPhone XR is worlds better than the Insta360s. If you need to get in somewhere tight like a smaller building, out come the 360s. The 13 Pro has much better low-light and high-pixel-density captures than either, if you can orbit your subject. This is especially true now that they added shooting in RAW as an option on the Pro phones. Keep capturing!!

    • @RelicRenditions
      @RelicRenditions 1 year ago

      @@thenerfguru Indeed. You already know this, but for others on here: try to walk in the shadows like a thief in Skyrim. You can often pull up a map of your target area, figure out when you will be there to do the capture, and try to stay in the shade as the sun moves during the day. This is a little easier in towns and cities since you can use the buildings' shadows. Sometimes you just need to sidestep a foot to the right or left and it makes all the difference. Not always an option, but it can help. You can also tape a piece of paper to the camera on the side facing the sun (just wide enough) so it will keep the sun off the lens. You will lose some degrees of capture on that side, but what you do capture will be glare-free. Might be a fair trade.

  • @Legnog822
    @Legnog822 1 year ago

    It would be nice if tools like this could eventually take 360 photos as input natively

    • @thenerfguru
      @thenerfguru  1 year ago

      You could batch it and not have to deal with the different steps.

    • @foolishonboards
      @foolishonboards 10 months ago

      Apparently Luma AI allows you to do that via their cloud service

  • @liquidmasl
    @liquidmasl 1 year ago +1

    Would be awesome if it could just process 360 pictures directly to get it all

    • @thenerfguru
      @thenerfguru  1 year ago +2

      This all could be batch scripted so you don't have to go through the steps one by one.

  • @R.Akerblad
    @R.Akerblad 1 year ago

    Looks well made💪, but a bit unnecessary ;)
    I usually use a long screw, 40 mm. Screw it in 20 mm into the corner and stick the magnet to it. Completely hidden by the sensor 🤙

  • @melkorvalar7645
    @melkorvalar7645 1 year ago +1

    You're great!

  • @Moctop
    @Moctop 1 year ago +1

    Feed in all the Street View data from Google Maps.

    • @thenerfguru
      @thenerfguru  1 year ago

      I don't know how to scrape all of the Street View data, but yes, that would technically work.

  • @lodewijkluijt5793
    @lodewijkluijt5793 1 year ago

    I just tried a dataset of 1456 images (1200x1200) and my 24 GB of VRAM wasn't large enough; going for 728 (half) now to be safe

    • @lodewijkluijt5793
      @lodewijkluijt5793 1 year ago

      727 of the 728 images linked, and it uses around 18 GB of dedicated VRAM

    • @foolishonboards
      @foolishonboards 10 months ago

      @@lodewijkluijt5793 How does the model look?

  • @tribaltheadventurer
    @tribaltheadventurer 1 year ago

    Thank you so much

  • @kachuncheng-s1v
    @kachuncheng-s1v 1 year ago +1

    thank you very much~!!

  • @monstercameron
    @monstercameron 1 year ago +1

    I'm gonna give it a try

    • @thenerfguru
      @thenerfguru  1 year ago

      Comment if you get stuck! I was literally losing my voice while making the video. 😅

    • @monstercameron
      @monstercameron 1 year ago

      @@thenerfguru Well, I ran it last night and I was seg-faulting. I think it's my CUDA toolkit version, I hope. Thanks for sharing, I'll reference your videos for help

    • @thenerfguru
      @thenerfguru  1 year ago

      This also works for NeRFs and photogrammetry!

  • @lucho3612
    @lucho3612 1 year ago

    Fantastic technique!

  • @allether5377
    @allether5377 1 year ago

    oh nice!

  • @underbelly69
    @underbelly69 1 year ago +1

    outstanding - see you in the next one