Which photogrammetry tool is the best? (3DF Zephyr, Metashape, Reality Capture, Meshroom)

  • Published: 29 Sep 2024

Comments • 215

  • @rsilvers129
    @rsilvers129 11 months ago +51

    It's best to show meshes without texture. Every software can do texturing so the mesh difference is what matters.

    • @HDesvideos
      @HDesvideos 4 months ago

      Because RealityCapture has 70 million triangles, it will be more accurate than the other software (20 million for Metashape).

    • @tarakivu8861
      @tarakivu8861 4 days ago

      @@HDesvideos More doesn't mean better aligned.

  • @greeenlabsstudio2199
    @greeenlabsstudio2199 5 months ago

    Great vid.
    But what should I use if my gear runs macOS?

  • @hanelyp1
    @hanelyp1 1 year ago

    A point barely mentioned: GPU usage. Nvidia support doesn't help if your workhorse PC has an AMD GPU.

  • @davidtadic4716
    @davidtadic4716 1 year ago

    VRAM

  • @Baleur
    @Baleur 6 months ago +2

    It just isn't very fair to do comparisons using only the default settings.
    To my eyes Metashape seemed the most competent, but you didn't raise the texture resolution to match Zephyr and the others.
    Likewise, Zephyr generated far fewer triangles than Metashape did, so Zephyr's mesh isn't truly representative.

  • @thecorbies
    @thecorbies 1 year ago +22

    Hi. I'm only just becoming familiar with photogrammetry, so I REALLY appreciate the time you took to produce this comparison video. It was extremely enlightening.
    Regards, Mark in the UK

  • @marcelosantos5683
    @marcelosantos5683 5 months ago +8

    I loved the comparison, very complete. I just want to add that I saw Reality Capture will ditch the PPI model and offer free versions for students/hobbyists, and $1,250/year for companies making >$1M per year.

  • @Alexs_Music_Comps
    @Alexs_Music_Comps 7 months ago +7

    One important thing to understand: never use Metashape with the default preset buttons. Always use custom numbers for polys and tie points, and never generate the dense point cloud. I usually target 2-5 million tie points at the camera alignment step. Then use gradual selection to delete failed points in all of the categories, like reconstruction uncertainty. Later, generate the mesh only from depth maps, set the mesh density to custom and at least 10 million polys; you can decimate it later. Have a good scan.

    • @BrunoRamosSouza
      @BrunoRamosSouza 3 months ago +1

      Sorry, I don't understand very well. How do I do that?

    • @Alexs_Music_Comps
      @Alexs_Music_Comps 3 months ago

      @@BrunoRamosSouza Try watching tutorials about custom point clouds and depth-map reconstruction, and read the Metashape manual.
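
A rough sketch of the workflow @Alexs_Music_Comps describes, for anyone who prefers to script it with Metashape's Python API (Professional edition). Class names follow the 2.x reference (1.x uses Metashape.PointCloud instead of Metashape.TiePoints for gradual selection), and every numeric value below is only an illustrative guess, not a recommendation from the video:

    import Metashape

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(["/path/to/photos/IMG_0001.JPG"])   # add your own photo list

    # Alignment with explicit budgets instead of the preset buttons; the per-image
    # tie point limit is a guess -- tune it until the sparse cloud reaches the
    # 2-5 million points the comment above targets
    chunk.matchPhotos(downscale=1, keypoint_limit=40000, tiepoint_limit=10000)
    chunk.alignCameras()

    # Gradual selection: drop tie points with high reconstruction uncertainty
    flt = Metashape.TiePoints.Filter()
    flt.init(chunk, criterion=Metashape.TiePoints.Filter.ReconstructionUncertainty)
    flt.selectPoints(50)                     # threshold is project-dependent
    chunk.tie_points.removeSelectedPoints()

    # Mesh straight from depth maps (no dense cloud) with a custom face count
    chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
    chunk.buildModel(source_data=Metashape.DepthMapsData,
                     face_count=Metashape.CustomFaceCount,
                     face_count_custom=10_000_000)      # decimate afterwards if needed

    doc.save("barn.psx")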

  • @artisans8521
    @artisans8521 5 months ago +7

    Most programs use the same algorithms, so you are more or less testing the default settings. But for noobs this is valuable information. I used Metascan until 2022, and then I changed to Zephyr. Results are pretty much the same. It's the input that defines the output via the settings (Metashape has no holes in the roof but fuses the planks; Zephyr has holes in the roof but separates the planks). Drone shots are always challenging. Good sharpness and depth of field are paramount, but so is lack of noise, which is always challenging with a small-sensor camera. Good work, BTW.

    • @CR3DT
      @CR3DT 5 months ago +3

      Curious why you changed from MS to 3DF-Z.

  • @ultrafunk3247
    @ultrafunk3247 1 year ago +11

    This is exactly the video I needed. I'm wanting to try photogrammetry, and this is very helpful.

  • @MillisecondFalcon
    @MillisecondFalcon 1 year ago +22

    This is an interesting video, but it's impossible to make a choice based only on seeing each application used with the default settings. It would be much more useful to see a comparison after you have got used to the software and used more optimal settings. It would also be very useful to know your PC specifications, especially CPU and memory, to get an idea of approximately how long processing might take on a different system.
    Would you consider redoing the processing with the same dataset? Thank you.

    • @trollenz
      @trollenz 10 months ago +6

      "would you mind spending 6 months to fine tune every piece of photogrammetric software available on the market on various projects and let us know the results" 🤣🤣👌🏻👏🏻

    • @penguinsushi8442
      @penguinsushi8442 6 days ago +1

      @@trollenz If only there were a piece of software to bench-test every photogrammetry package currently available on various projects... so much effort 😂

    • @trollenz
      @trollenz 6 days ago

      @@penguinsushi8442 I started once and stopped at the alignment step... I underestimated how time-consuming the project would be 🤣 But from what I got, it was Context Capture > Metashape > Reality Capture (which was by far the worst in terms of alignment)...

  • @adamfilip
    @adamfilip 4 months ago +1

    Reality Capture is now free

  • @erikals
    @erikals 4 months ago +3

    To make Metashape less blurry:
    - generated masks based on the actual mesh model,
    - estimated image quality for all cameras (with the "ignore masked regions" option enabled),
    - disabled cameras with image quality lower than 0.8,
    - disabled cameras looking at the model from behind,
    - rebuilt the texture.

    • @DaTruthHeals
      @DaTruthHeals 4 months ago

      What program do you use most?

    • @erikals
      @erikals 4 months ago

      @@DaTruthHeals Metashape / Reality Capture.
      I think I'll be using RC more down the road.
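
If you would rather script the camera-culling part of that recipe than click through the GUI, something along these lines can be run from Metashape's Python console. It assumes "Estimate Image Quality" has already been run; the 0.8 cutoff is the value @erikals mentions, and metadata keys and texture parameters can vary between versions, so treat it as a guide only:

    import Metashape

    chunk = Metashape.app.document.chunk      # currently open project/chunk

    for camera in chunk.cameras:
        try:
            quality = float(camera.meta["Image/Quality"])
        except (KeyError, TypeError, ValueError):
            continue                          # quality was not estimated for this frame
        if quality < 0.8:
            camera.enabled = False            # exclude soft/blurred frames from texturing

    # Rebuild UVs and the texture from the remaining cameras
    chunk.buildUV(mapping_mode=Metashape.GenericMapping)
    chunk.buildTexture(texture_size=8192, ghosting_filter=True)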

  • @Pototoes949
    @Pototoes949 1 year ago +51

    I just started the video and noticed that you were shooting at f/1.7, which gives you a very shallow depth of field and makes it tough for the programs to recreate a sharp image. I would try f/8 and raise your ISO so you keep at least a 1/200 shutter speed. I think you will find great results.

    • @MakeTeachRepeat
      @MakeTeachRepeat 1 year ago +1

      Difficult to do this on a small sensor at reasonable ISO though

    • @Pototoes949
      @Pototoes949 1 year ago +2

      @@MakeTeachRepeat try it and see

    • @jotch_7627
      @jotch_7627 1 year ago +5

      @@MakeTeachRepeat What do you consider a "reasonable" ISO? Because f/8, 1/200 s, and ISO 400 yield an approximately equivalent exposure, and those seem like pretty reasonable parameters to me. And if the shutter speed starts to become a worry, there are plenty of f-stops between f/1.7 and f/8.

    • @lukastemberger
      @lukastemberger 1 year ago +12

      The Mini 3 Pro doesn't have a variable aperture; you can't change it from f/1.7. That is totally fine for a drone. Also, the camera is on a literal gimbal, so if you make slow moves you won't get any motion blur or shake.

    • @lukastemberger
      @lukastemberger 1 year ago +3

      ​@@jotch_7627 The only reasonable ISO for any camera is its base ISO. This gives you the most dynamic range and the least noise. For the Mini 3 Pro it's 100 and 500.

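The equivalence being discussed above is plain arithmetic: image brightness stays the same as long as N² / (t · ISO) is constant, where N is the f-number and t the shutter time. A small sketch of that trade-off (the 1/1100 s starting point is a made-up bright-daylight reading, not a value from the video):

    import math

    def equivalent_shutter(n1, iso1, t1, n2, iso2):
        """Shutter time that keeps exposure constant after changing aperture and ISO."""
        return t1 * (n2 / n1) ** 2 * (iso1 / iso2)

    # Hypothetical daylight reading at the Mini 3 Pro's fixed f/1.7 and base ISO:
    t2 = equivalent_shutter(n1=1.7, iso1=100, t1=1 / 1100, n2=8.0, iso2=400)
    print(f"f/8 at ISO 400 needs about 1/{round(1 / t2)} s")     # ~1/200 s

    aperture_loss = 2 * math.log2(8.0 / 1.7)    # ~4.5 stops less light through the lens
    iso_gain = math.log2(400 / 100)             # 2 stops gained from the higher ISO
    print(f"net shutter change: {aperture_loss - iso_gain:.1f} stops slower")
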
  • @stephanbuth8195
    @stephanbuth8195 1 year ago +32

    Thank you for that laborious comparison! I think it is difficult to compare them with "default" values. Some software closes, let's say, 10% of the holes in the mesh, while another app closes 80% of them at default. So maybe you could append a "part two" here with the best presets you found.

  • @avi8tor330
    @avi8tor330 1 year ago +6

    Fantastic review! Thank you for all your time and effort. Cheers, Martin

  • @wallacewainhouse8714
    @wallacewainhouse8714 1 year ago +25

    Hi there, thanks for the comparison. The reason RC made a single 8K texture is that it's the default. To get the best texture, create a new unwrap with the unwrap tool, using the "fixed texel size" style with the texel size set to "optimal". Then texture again.

    • @jerometabeaud9400
      @jerometabeaud9400  1 year ago +2

      Thanks for the tip!

    • @HDesvideos
      @HDesvideos 4 months ago +1

      @@jerometabeaud9400 You can increase the number of textures as much as you want; you can request 20 textures of 8K if you like. But the number of textures RealityCapture will generate depends on the number of photos and the accuracy of the mesh. You can choose the number of textures in two ways: 1) set the "maximal texture count" directly, or 2) use the optimal quality setting. And the maximal texture size in RealityCapture is 16K, not 8K.

    • @HDesvideos
      @HDesvideos 4 months ago +1

      And there are two options for projecting your texture: geometric (the default) and mosaicing-based (experimental, and for "small" projects). Mosaicing is more accurate in its projections.
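
To see why the "fixed texel size" unwrap recommended above can push RealityCapture from one texture page to many, here is a rough texel-budget estimate. The surface area, texel size, and packing efficiency are made-up illustrative numbers, not measurements from the video:

    import math

    def texture_pages(surface_m2, texel_size_mm, page_px=8192, packing=0.7):
        """Approximate number of square texture pages needed to cover a mesh at a
        fixed texel size; `packing` is a rough guess at UV-island packing efficiency."""
        texels_needed = surface_m2 / (texel_size_mm / 1000.0) ** 2
        return math.ceil(texels_needed / (page_px ** 2 * packing))

    print(texture_pages(600, 2.0))                  # ~4 pages at 8K with 2 mm texels
    print(texture_pages(600, 2.0, page_px=16384))   # 1 page at RealityCapture's 16K maximum
    print(texture_pages(600, 1.0))                  # finer texels -> many more pages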

  • @ericeaton3551
    @ericeaton3551 1 year ago +6

    Great video. As I was waiting, I thought it would be really cool if, next time you compare software or other things, you could show each package's stats (time to generate the model, in this case) overlaid on the footage of the respective software instead of as columns from left to right. It would also allow you to update the data on screen, making viewers more engaged.

  • @RandomPickles
    @RandomPickles 6 days ago

    Zephyr keeps telling me to make sure my email address is correct when it makes me enter one for a trial request. Nothing fishy about my email. And I don't want a trial anyway, so I guess I can't use that one.

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 1 year ago +3

    OK, the photos --> crappy high-poly mesh step is interesting, but much more important is crappy mesh --> usable mesh + texture. Can you review the level of automation and the quality of this step for the different packages?

  • @PorscheGt4
    @PorscheGt4 4 days ago

    Hi, great video! Which one provides the best result when the file is to be converted for 3D printing?

  • @PaulChapman-j6v
    @PaulChapman-j6v 5 days ago

    In RealityCapture you can go to Workflow > Settings and change the number of triangles/points that will be displayed.

  • @tamasr.8514
    @tamasr.8514 1 year ago +2

    What program did you use for displaying the model, and how did you export it? I can't seem to display it like this.

  • @BrianHunt-e4x
    @BrianHunt-e4x 1 year ago +3

    For vertical infrastructure, Bentley's ContextCapture was amazing. I think they bought it from Acute3D a few years back. I might be mixing up the names; it's been forever since I've used it, but I was most impressed with it for anything going vertical. Great comparisons, though. I'm only familiar with Metashape and Reality Capture; Meshroom scared me when I started playing with it.

  • @bennoreuter4393
    @bennoreuter4393 5 months ago +1

    Your pronunciation is very good for a French guy!

  • @MondoMurderface
    @MondoMurderface 12 days ago

    Green is just the grass bouncing light, bro. It's actually really natural.

  • @GrzegorzBaranArt
    @GrzegorzBaranArt 1 year ago +3

    Thanks for such a nice and fun-to-watch comparison :), cheers!

  • @stevesloan6775
    @stevesloan6775 1 year ago +4

    A beautifully produced video. I'm keen to see more in-depth future videos on how to use the individual programs.

  • @danielkotzer636
    @danielkotzer636 1 month ago

    Can you do that with Metashape? Take close-up pictures that capture only part of the object?

  • @luculucu879
    @luculucu879 1 year ago +3

    Very interesting video.
    Would it also be possible to compare PhotoCatch, which uses the Apple API (Apple Object Capture)?
    I heard it produces quite decent results with much less processing time.
    Also, would you mind sharing the dataset? I would love to do my own photogrammetry of that barn.

    • @jerometabeaud9400
      @jerometabeaud9400  1 year ago +1

      Thank you!
      I do not own any suitable Apple device (iPhone or iPad), so I won't be able to compare the results. From what I understand of this app, it makes use of the lidar on the latest iPhones in order to speed up the results. It mostly works in close proximity, so for a building like the one I scanned in the video it might be a bit cumbersome.
      For the dataset, I'll see if I can find it and post a link to it. I'd be interested in seeing what you can achieve by tweaking the parameters! Keep in mind that I did not completely cover all parts of the building, and some photos are grossly underexposed/bad.

  • @ektorthebigbro
    @ektorthebigbro 2 months ago

    Bro, I use Reality Capture with 100 million polys and it works just fine; just go into the settings and increase the limit.

  • @flat---line
    @flat---line 6 months ago +1

    Reality Capture is not only backed by Epic Games and Unreal Engine, but will also soon be completely free to anyone with under $1 million in revenue.
    So... learn Reality Capture. It will soon dominate the competition.

  • @jatvic
    @jatvic 1 year ago +2

    Great video for newbies in the photogrammetry world! Thanks for the thorough comparison and keep up the good work. I'm a civil engineer and usually work with lidar scans for field surveys of buildings. Can you tell whether the models you make would be usable for measuring element dimensions and distances? The lack of detail in the barn truss in the Meshroom model seems like a negative point. Do you believe that using double the resolution could bring these elements into the model? Thanks again for the great video 🎉.

  • @nobody_there_
    @nobody_there_ 6 months ago

    Which one is free? Capturing Reality is asking for money to export... what a f°_____

  • @rogiervdheide
    @rogiervdheide 1 year ago +2

    Thank you for making this video, it is very informative. Once you come to the side-by-side comparison of the four apps, your focus is on the features: so many images aligned, so many triangles produced... But more important is how you like the end result overall. It depends on the application of the model of course, but taking everything into account, which package would you go with to model existing buildings, with a good, detailed, small-scale texture, to allow lighting to be added once the model has been made and imported into 3D design software? Thanks.

  • @discgolfedlife5505
    @discgolfedlife5505 1 year ago +3

    Thank you for the introduction to photogrammetry. Now I have a basic understanding thanks to your efforts. Nicely done sir!

  • @mrdixioner
    @mrdixioner 1 year ago +1

    Metashape has two big flaws: its idiotic viewport navigation (a moronic sphere) and, in some cases, incorrect texture generation. It also has several other drawbacks, including the high price. Otherwise, this is the best program for photogrammetry! However, I got pretty good results in VisualSFM, but only on one condition: that there are foreign objects around the subject, so that the program can locate the object in space.
    Meshroom just takes an awfully long time to process a scene (in most cases, up to 10 times longer than other programs), and the result is rather mediocre.

  • @andreasthegreat8509
    @andreasthegreat8509 1 year ago +1

    OpenDroneMap! Best bang for the buck!

  • @BradMorrisKA3YAN
    @BradMorrisKA3YAN 5 months ago

    Would love for you to compare WebODM. Also free, not sure if it's open source.

  • @cyberdyne7790
    @cyberdyne7790 1 year ago +1

    How can I compare if you lower the quality of the last program? How silly.

  • @makersprototype5115
    @makersprototype5115 1 month ago

    You did well making the video, but couldn't find a proper word for "wobbling"? lol

  • @RealCocaCola
    @RealCocaCola 1 month ago

    Sorry, but why would you be so pressed for time as to not show Meshroom at full quality?

  • @constantinosschinas4503
    @constantinosschinas4503 1 year ago +1

    Thank you for the testing, but the verbal conclusions had almost zero value. Reality Capture is the king of them all, as seen from the video captures. Spitting out numbers does not speak to quality, of which RC is the absolute beast. Dedicated talk on edge quality and geometry quality (straight edges where they exist) is the most important thing, as well as texture tone quality (the way the software outputs colour and shadows/highlights). Leaving out edge and surface quality is like talking only to people who will retopologize the entire model, which is 1% of users, at best.

  • @dainjah
    @dainjah 4 months ago +1

    Reality Capture is free now, so the choice is obvious! :)

  • @SG-dx4di
    @SG-dx4di 1 year ago +1

    Can you give me the original drone footage and let me try other software for Mac?

  • @4dtwinmaps
    @4dtwinmaps 1 year ago +1

    You should compare the point clouds, not the meshes, since the mesh can be generated with many different parameters from the original point cloud.

  • @DennisKreker
    @DennisKreker 1 year ago +3

    Just a tip for the cinematic shots at the beginning: buy an ND filter set for the Mavic so you can use the 180° rule for your shutter speed. The shots look really "jittered" because you used a really high shutter speed.

    • @trollenz
      @trollenz 10 months ago

      You guys don't understand what photogrammetry is... It's miles away from what you guys do with video

    • @DennisKreker
      @DennisKreker 10 months ago +2

      @@trollenz I meant the cinematic shots at the beginning, not the photogrammetry stuff. I know what photogrammetry is, and I know that you don't need an ND filter for it.
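
The ND-filter tip above comes down to simple arithmetic: the 180° rule asks for a shutter of roughly 1 / (2 × frame rate), and the ND strength is whatever brings the metered shutter down to that. The 1/1000 s metered value is a hypothetical sunny-day reading, not taken from the video:

    import math

    def nd_stops(metered_shutter_s, fps):
        """Stops of ND needed to reach the 180-degree shutter (1 / (2 * fps))
        while keeping the same aperture and ISO."""
        target = 1.0 / (2 * fps)
        return math.log2(target / metered_shutter_s)

    stops = nd_stops(metered_shutter_s=1 / 1000, fps=30)
    print(f"{stops:.1f} stops -> roughly an ND{2 ** round(stops)} filter")   # ~4 stops -> ND16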

  • @Baleur
    @Baleur 6 months ago +1

    You really ought to upload in 60 fps; a lot of the panning shots are hard to follow due to the stutters.
    5:00 and 6:00, for example.

  • @СергейТимофеев-й3ь

    Cool video, but for a perfect comparison it would be great to take the shots in overcast conditions.

  • @TheBlackBaku14
    @TheBlackBaku14 4 months ago

    Thanks for your video. Reality Capture is now free for companies making less than €1M, which is great for getting started and practicing with!

  • @Fahnder99
    @Fahnder99 1 year ago +1

    Worthwhile! Your system specs would be of interest too: RAM, processor, and CUDA cores, for example.

  • @kass1564
    @kass1564 1 month ago

    1:43 thought you took damage for a second

  • @jamesharrison3798
    @jamesharrison3798 1 year ago +3

    Good video. Thank you for taking the time to do this. You mentioned the excessive time it took Meshroom to process this. I'm very curious, what hardware are you using to get these times? CPU, memory, graphics card(s). Knowing that would be valuable to the viewer.
    Thanks again :)

    • @luculucu879
      @luculucu879 1 year ago +1

      I think he is using an RTX 3060 GPU.
      (He had this card in one of his other videos.)

    • @jerometabeaud9400
      @jerometabeaud9400  1 year ago +4

      I'm using an RTX 3060 GPU, as lucu lucu mentioned, along with a Ryzen 5800X and 32 GB of RAM. All the files were hosted on a 2.5" Samsung SATA SSD. The same machine was used for all the software, with no other background tasks running, so it should be a relatively fair comparison.

    • @jamesharrison3798
      @jamesharrison3798 1 year ago

      @@jerometabeaud9400 Thanks for the reply. You have a decent setup. Meshroom must really require some horsepower. Odd that the other software wasn't as hindered considering it's doing pretty much the same thing.

  • @Npc733T
    @Npc733T 5 months ago

    Reality Capture just went fully free.

  • @m.a2943
    @m.a2943 1 year ago +1

    Thank you ❤ Please try a tutorial for 3DF Zephyr ❤❤

  • @RodMcDonald01
    @RodMcDonald01 1 year ago +1

    What about Luma AI?

  • @azharismail6844
    @azharismail6844 3 months ago

    Anyone tried PIX4D?

  • @paolocompagnucci2925
    @paolocompagnucci2925 1 year ago +1

    I agree with you, it's a very interesting job, but with Reality Capture you can get much better results.

  • @WorkingIdeas
    @WorkingIdeas 1 year ago +1

    Very good explanation and comparison ❤

  • @djstorzek
    @djstorzek 2 months ago

    What app do you use for drone control?

  • @simonb1009
    @simonb1009 5 months ago +1

    Usually for serious tasks where I need quality, I choose Metashape. It produces sharp, accurate meshes. Sometimes I try to work in Reality Capture, but it often cannot glue an object into one part and splits it into several components, and gluing them together manually with control points is much more time-consuming.
    Comparing these two programs, I find that Metashape is more of a geodetic tool, and of course the quality is more serious. Reality Capture has started to fall behind in the last two years, although in some respects it is more convenient. But convenience does not equal quality. This is despite the fact that I work in Unreal Engine and export my meshes to that engine, and of course RC exports to the engine more conveniently, while in MS I have to tinker with dimensions and pivots. In general, choose what is convenient for you for a specific task.

    • @PaulChapman-j6v
      @PaulChapman-j6v 5 days ago

      Alignment settings, maybe? Have you tried using control points to merge components?

    • @simonb1009
      @simonb1009 5 days ago

      @@PaulChapman-j6v That may help, but why bother, if in Metashape everything is merged together automatically?

    • @PaulChapman-j6v
      @PaulChapman-j6v 3 days ago

      @@simonb1009 Because using control points can create more accurate alignments... well, in RealityCapture at least.

  • @LogicCYT
    @LogicCYT 1 year ago +2

    What about WebODM?

  • @_.Dave._
    @_.Dave._ 4 months ago

    600 photos took 1 hr... I have 200 photos in Zephyr and I'm 2 hrs in with a 4090.

  • @verslemonde6208
    @verslemonde6208 1 year ago +2

    Hi, thank you for this great comparison! Not directly related, but about the process: instead of taking multiple single photos, how about taking a video clip and splitting it into photos later? The resolution of each frame might be lower, but the amount of information might make up for this. I would be interested in hearing your thoughts.

    • @02CanGT
      @02CanGT 1 year ago +2

      When the drone takes a photo, it embeds the geospatial info within the file: lat/lon, camera data at the time of the image, etc. Stills extracted from video will lack the geo data required to be ingested by these photogrammetry apps.

    • @merseyviking
      @merseyviking 11 months ago

      @@02CanGT It isn't a requirement at all to have any image metadata; even camera intrinsics can be inferred from the images. Meshroom can take a video and use every nth frame, and it even has a motion-blur filter that removes images that appear blurred.
      While having lots of images is generally a good thing, it's going to struggle with the lower resolution of a video. Better to have fewer high-res images than a load of dross.

    • @tarakivu8861
      @tarakivu8861 4 days ago

      @@02CanGT These programs make no use of the GPS positions... it's not accurate enough for that anyway.
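
For anyone who wants to try the video route discussed in this thread, pulling stills out of a clip is straightforward with ffmpeg (the frame rate, quality setting, and file names below are just illustrative choices). As the replies note, the extracted frames carry no GPS EXIF, so alignment then relies purely on image content or sequential matching:

    import subprocess
    from pathlib import Path

    def extract_frames(video_path, out_dir, fps=2):
        """Extract JPEG stills from a clip with ffmpeg (must be on PATH).
        fps sets how many frames per second are kept, i.e. the photo overlap."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-i", str(video_path),
             "-vf", f"fps={fps}",        # sampling rate of the extracted frames
             "-qscale:v", "2",           # high JPEG quality (lower value = better)
             str(out / "frame_%05d.jpg")],
            check=True,
        )

    extract_frames("barn_orbit.mp4", "frames")   # hypothetical file names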

  • @torquebiker9959
    @torquebiker9959 4 months ago

    What hardware do you use? Especially which GPU. Which GPU would you recommend for photogrammetry?

  • @TheShanehiltonward
    @TheShanehiltonward 5 months ago

    I really like Meshroom (with a powerful computer). I also use WebODM in Docker (Linux). Also free and great.

  • @corvusmoonpottery
    @corvusmoonpottery 7 months ago

    Fantastic, informative video, mate. THANK YOU! I'm a starving artist, so I'll stick with Meshroom for now!

  • @uniteddroneservicesllc7184
    @uniteddroneservicesllc7184 3 months ago

    Very nice video and comparisons. What are your thoughts on WebODM??

  • @serdarklnc4759
    @serdarklnc4759 1 month ago

    Great work

  • @jrs80920
    @jrs80920 1 year ago

    Will this process work on a very large object, like a small mountain?

  • @InspiredScience
    @InspiredScience 9 months ago

    Jerome, thank you for the detailed, objective comparison. One question: can you clarify which *versions* of the software were used? For example, 3DF Zephyr v7+ is completely different from prior versions, making this a very important detail. Thank you!

  • @garyrendano4717
    @garyrendano4717 3 months ago

    What do you do... just fly it around and point the camera while taking photos, all manually?

  • @rcane6842
    @rcane6842 11 months ago

    Have you tried comparing these 4 packages with DroneDeploy and DJI Terra? Also, maybe a review of the DJI M3E, since it is touted to have survey-grade output?

  • @PhillipInglesby
    @PhillipInglesby 11 months ago

    Do you know if it is possible to use Meshroom on a bunch of digital pictures... pictures without shadows or backgrounds? I have 60 images of a 3D rendering from every angle and I would like to convert it back into a 3D model. Thanks

  • @jamesburne3893
    @jamesburne3893 10 months ago

    Hi, I'm only one day into finding out about all this. (I'm also still busy trying to select the drone that is right for me.)
    If someone could answer this I would be very grateful. Q: What would be the best software for a beginner with high aspirations of achieving professional results?

  • @panoramapalacevirtualtours
    @panoramapalacevirtualtours 1 year ago

    Thanks for the comparison.
    27 4K textures (216 Mpix) is only about "double" 4 x 8K textures... yes, it's a lot either way.

  • @BuGa654
    @BuGa654 9 months ago

    Hi, really interesting results. I didn't know until today that Meshroom existed. Could you do a test like this with OpenDroneMap as well?

  • @eugenmalatov5470
    @eugenmalatov5470 7 months ago

    A naive question concerning the preparation phase (taking the pictures): did you do everything manually, or are there scripts you can use?

  • @TheNateweaver
    @TheNateweaver 1 year ago

    Just a comment as a professional photographer: the green on the outer bits under the model is there because sunlight bouncing upwards off the grass is green. That's just something you learn as a photographer after doing it for 20 years. Your own brain has auto white balance, so you often won't notice it.

  • @supergospodshest7338
    @supergospodshest7338 1 year ago

    I think you didn't capture the texture in Metashape at all. The colors you see are just the vertex colors. So the comparison is not relevant...

  • @flat---line
    @flat---line 6 months ago

    You can easily combine the components in Reality Capture using control points...

  • @NikolaosTsarmpopoulos
    @NikolaosTsarmpopoulos 8 months ago

    Which version of Metashape did you use, Standard or Pro?

  • @niloytesla
    @niloytesla 8 months ago

    Great comparison, but could you please mention the versions of all the software you used?

  • @christianholmstedt8770
    @christianholmstedt8770 9 months ago

    Is there an alternative that doesn't require CUDA cores?

  • @Splarkszter
    @Splarkszter 8 months ago

    This video is very unfair; there is zero technical detail and zero control. I consider this comparison invalid.

  • @Davido2369
    @Davido2369 1 year ago

    In Metashape the best results come from tweaking...

  • @adlep
    @adlep 8 months ago

    You forgot the best one, which is iTwin/ContextCapture.

  • @DannyPeron
    @DannyPeron 1 year ago +4

    Can you make a video tutorial for the shooting process only?

  • @crazyxprosanta8096
    @crazyxprosanta8096 10 months ago

    How many usable pictures did you end up with?

  • @redsnow936
    @redsnow936 1 year ago

    I think the comparison would be more precise if you used a single render engine for all 4 models. Furthermore, there are a lot of settings that can be tweaked during the process of aligning photos and camera positions in all the programs, which can mislead the final result.
    Great video though :)

  • @Unplanted
    @Unplanted 1 year ago

    Why not redo the Meshroom test instead of just putting it into the video with faulty parameters?

  • @Christophe-fh7dk
    @Christophe-fh7dk 11 months ago

    Excellent video. I do the same thing as you, Jérôme, but without advertising for Metashape. In my case it's for work. 👍

  • @geo30197
    @geo30197 1 year ago

    I also do some photogrammetry and, to be honest, Metashape is only OK with the correct settings. I have some guides on it, but they contradict each other...

  • @코립
    @코립 5 months ago

    Well, it's at least worth its free price tag.

  • @Odin31b
    @Odin31b 11 months ago

    Automatic like! I've been teaching myself this skill through YT university and your video hit the spot.

  • @Photoscrisco
    @Photoscrisco 6 months ago

    This is exactly the video I was looking for to decide which program to go with. A big thank you. Greetings from Switzerland. Christophe

  • @MizoxNG
    @MizoxNG 3 months ago

    Another big advantage of Meshroom: there's an AMD-GPU-compatible build called MeshroomCL. Its development lags a bit behind the main version, but it's an option for people who bought an AMD GPU and don't want to buy another one.

    • @moravianlion3108
      @moravianlion3108 3 months ago

      I usually just slap ZLUDA over any CUDA-exclusive program I'm about to use. I tried it with Meshroom (the main CUDA build) but haven't seen any GPU acceleration out of the box. I'm not sure how much GPU acceleration the program even has in general, though. This is not an image generator (per se), a chatbot, or Resolve, so I'm not sure which part of the process is supposed to be GPU-accelerated. Maybe I'm doing something wrong, but I can't say for sure.

  • @ElevateTechnology
    @ElevateTechnology 1 year ago

    If I have recorded drone video footage and export all of the JPEG frames from it, can I do a scan using those images?

    • @djgeorgetsagkadopoulos
      @djgeorgetsagkadopoulos 10 months ago

      You can, but it won't be easy. It will take A LOT of time, since the software will have to position each "frame" somewhere in space.
      You see, one good thing about photos taken from a drone is that they have their geolocation embedded (position, height, camera angle, etc.).
      Also, if the video was shot at a fast pace (and therefore blur is present in the frames), it will be much harder for the software to process.

    • @trollenz
      @trollenz 10 months ago

      @@djgeorgetsagkadopoulos Yes and no; you don't get to use GPS tags for pair selection, but since it's a video, sequential matching can be pretty fast as well.

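A quick way to check the point both replies make, i.e. that a drone JPEG carries GPS EXIF while a frame exported from video usually does not. This assumes a recent Pillow (with the ExifTags.IFD enum) and uses hypothetical file names:

    from PIL import ExifTags, Image

    def gps_tags(path):
        """Return the GPS IFD of an image as a readable dict ({} if absent)."""
        exif = Image.open(path).getexif()
        gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo)
        return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    print(gps_tags("DJI_0001.JPG"))      # latitude/longitude/altitude tags from the drone
    print(gps_tags("frame_00001.jpg"))   # typically {} for a still pulled from video
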
  • @DiegoDneo
    @DiegoDneo 10 months ago

    Great work! I really like Metashape and its predecessor, PhotoScan, which I've been using since way back. I think it's very fast even for testing, and there are some "hidden" features, such as map transfer based on the original photos used (pretty cool).
    Now, seeing all the other options, I can be sure that Metashape fits me best. Thank you.

  • @alexkam5622
    @alexkam5622 10 months ago

    Wonderful introduction! Thanks