I'd love to run these images through RealityCapture if you'd like a 4th comparison :).
The Substance tech is a licensed version of Bentley's Context Capture, but super limited. No UDIMs is a deal breaker, as is the ~6 gigapixel input limit (150 42 MP images), which is not enough for complex scans. It's a great gateway for people; hopefully they can get a better license in the future for complex scans.
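For scale, here's a quick sanity check of the quoted input cap (my own arithmetic, not Adobe's or Bentley's documented formula):

```python
def input_gigapixels(num_images: int, mp_per_image: float) -> float:
    """Total input size in gigapixels (1 GP = 1000 MP)."""
    return num_images * mp_per_image / 1000.0

# The ~6 GP limit is reached at roughly 150 shots of 42 MP each:
print(input_gigapixels(150, 42))  # 6.3
```

So a 24 MP camera would let you squeeze in more frames (~260) before hitting the same ceiling, but for dense coverage of a complex object that budget still runs out fast.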
Apparently the whole setup is even weirder. In this article (80.lv/articles/a-deep-dive-into-substance-3d-sampler-s-3d-capture-tool/) one of Substance's product managers mentions that they use two solutions. One for PC and one for the Mac.
"Development really started this summer, but I can tell you that we had many passionate and intense design and tech discussions before that. To accelerate the development, we partnered with Bentley for Windows and Apple on MacOS to use their powerful technologies for photogrammetry."
Which opens up a whole other set of issues: depending on the platform, you will get different results!!
Something definitely worth investigating further.
@@marvelousdecay Oh _wow_. That's so bizarre, given cross-platform repeatability is important in some pipelines. I wonder if they'll swap that out with their own solution in the future.
@@marvelousdecay It sounds a bit crazy that the result can vary depending on the platform where the program is run. In that case it is preferable that they stay with a single operating system and improve it as much as they can.
Great comparison and info!! Thank you! I didn't know about this Photocatch software.
Did you use depth-map reconstruction or dense pointcloud?
Sampler can't process any flat surface unless I tilt my camera to get some high-perspective images.
My Metashape output lacks detail. How can I make it look like your output? I'm curious about your Build Mesh settings.
I suppose that for beginners like me it's better to wait for Adobe to improve, and to use Metashape if I don't have a Mac or iPhone. But Photocatch really caught my attention; I didn't know it existed and it gets good results. Besides, I don't know which is more worthwhile at that license level! Thanks for the video.
I apologize for the stupid question, but do I understand correctly that this program lets you make a material from photos and then use it in Substance Painter?
How do you deal with converting the UDIMs that come out of Photocatch into a single retopologized texture map? Zbrush doesn't support UDIMs.
The RAW version in Photocatch exports a model that’s made out of several pieces. Each of these pieces has its own texture. That’s what I import in Zbrush. Check my photogrammetry playlist for the whole process. This is the video you need: ruclips.net/video/HhW5wLXDVVY/видео.html
What about Polycam?
Hope they'll tackle the main limitation of this kind of tech (at least for me): the inability to properly scan any reflective or transparent object. Maybe AI will help in the future. Thank you for the walkthrough, Dimitris!
Hello, thank you for this special video. I've been trying Photocatch for a while; I export the models as .obj files and then export them as .glb from Blender for viewing in AR on a webpage. Generally the .glb files are 7-8 MB, but some become big, like 70 MB, especially if I took 40-50 pictures of the model. How can I minimize the .glb file sizes? Thanks.
Not sure what the solution could be without looking at the files, but I suspect it has to do with polygon count and textures. Try reducing the polygon count and texture resolution to something that fits the small file size you have in mind.
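As a rough back-of-the-envelope sketch of why textures usually dominate GLB size (my own estimates, not Photocatch's or Blender's actual export math):

```python
def texture_bytes(width: int, height: int, channels: int = 4) -> int:
    """Uncompressed size of one 8-bit texture."""
    return width * height * channels

def geometry_bytes(triangles: int) -> int:
    """Rough geometry cost: 3 verts per triangle, position + normal + UV
    = 8 floats = 32 bytes per vertex, ignoring index reuse."""
    return triangles * 3 * 32

# One 4K texture alone is ~64 MiB uncompressed...
print(texture_bytes(4096, 4096) // 2**20)  # 64 (MiB)
# ...while the same map downscaled to 1K is ~4 MiB:
print(texture_bytes(1024, 1024) // 2**20)  # 4 (MiB)
# A 100k-triangle mesh is comparatively cheap:
print(geometry_bytes(100_000) // 2**20)    # 9 (MiB)
```

Which suggests that halving the texture resolution (a 4x size reduction per map) will usually shrink the file far more than decimating the mesh will.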
Is this a Mac only channel? Just want to be sure before I subscribe.
I'm a Mac user but a lot of the videos apply to both platforms
Thanks for the comparison! Do you have a good workflow for generating the textures for these scans?
Yes I do. It requires Zbrush and you can find all the details about the process in another video of mine here:
ruclips.net/video/HhW5wLXDVVY/видео.html
@@marvelousdecay Fantastic, thank you!
Is it still bad after the updates?
I haven’t tried it myself but the dev team has contacted me about the issues and after sharing the files with them, they have informed me that they have fixed the underlying bugs.
@@marvelousdecay That's great. Will there be a new video to see if it's a proper contender now? 🤩
Curious if the Sampler adds that missing detail with a normal map?
The RAW option, which is the one I'm using in the video, focuses on geometry only. It tries to produce the most detailed mesh possible and doesn't output any maps other than diffuse.
That option assumes you will then create your own maps based on the high-resolution geometry.
From what I saw in the video, Photocatch is very good for small or medium objects. But is it capable of capturing larger, more massive objects? What was the largest object you could capture with Photocatch? I've seen people make a 3D model of a house or even a mountain with RealityCapture.
Photocatch can work with bigger objects. Have a look at this video where I'm scanning a big interior: ruclips.net/video/xPkB6I30cxg/видео.html
@@marvelousdecay Awesome! Congrats on your content, very cool stuff. Keep it up!
I’ve been using Polycam. The latest updates are really nice. Does anyone have any thoughts on the comparison of that as photogrammetry? Or how well it imports to Substance Sampler?
Hey Shiloh. Do you mean how well Polycam performs compared to other photogrammetry apps? That's a good suggestion. I'll give it a try and report back with another video probably. It'll take some time to do proper testing.
As for importing into Sampler:
Adding your textures and adjusting them won't be an issue.
But importing your mesh in order to evaluate those textures is not possible. Unfortunately this is a huge limitation, and one that doesn't seem to be high on Adobe's priority list. Have a look at this thread here:
community.adobe.com/t5/substance-3d-sampler-discussions/import-custom-mesh-in-3d-viwer/m-p/12411688
I wanted to mention that in the video too, but in my effort to keep things short and sweet it ended up on the cutting room floor.
Hey, thanks for the comparison! What computer specs do you use?
It's a 10 core iMac Pro with a Vega 64 16GB GPU.
Thanks for the feedback. Just wanted to ask: I have to shoot about 20 dishes ("food plates"). What's the minimum number of photos I should take to get the best quality? I use Metashape on PC and it's too slow, like 1h30 per plate to get the mesh + texture.
It's really difficult to tell. It depends on how complex your object is, but if it's a simple enough plate I would imagine 60-70 pictures would be enough.
Just make sure you're not using really high settings when solving the cameras or when creating the point cloud.
Strange results... at least when it comes to Agisoft Metashape. I tested 89 20 MP photos on an almost 4-year-old laptop:
Win 10 Home (installed on a more than 13-year-old 128GB SATA SSD with only 42GB of free space!)
CPU: i7-9750H
GPU: 1660 Ti (6GB)
32GB RAM
It took 17:30 (ca. 1:36 for "align photos", the rest "build mesh"), all at high settings.
Of course I don't have the same photos, but still: 89 20 MP photos.
So, the same photos, same hardware, but the new Metashape 2:
It took 16:28 min!
I really don't know what old crappaton you have :)))
Hmm, Photocatch seems good, but it seems there's no Windows app? So it disqualifies itself here, doesn't it?
At 1:52 in Sampler you chose "Precision: Low". I would assume this has to do with the less detailed result.
Nope, that's not the reason. That setting controls how meticulous the matching of points between photographs has to be. Even on High the result is the same.
Excellent comparison! Did you have issues with RAM using Substance Sampler? I tried on my MacBook Air M1 8GB RAM but it's unusable, it uses up to 48GB
Yeah unfortunately 8GB is a bit on the low side. Photogrammetry can eat up a lot of memory! I have 64GB and sometimes it’s all eaten up.
I didn't have any issues in that regard, but I've stumbled on a lot of bugs: scrolling a menu and all the options disappearing, crashes, of course the bug I mentioned in the video, and so on and so forth.
@@marvelousdecay I guess so… no problem with PhotoCatch though! You convinced me to stick with it :)
Very interesting to hear that Photocatch can run on 8GB of memory! I'm guessing at some point it will utilise virtual memory but it's nice that it can do all the work with limited resources!
Hi, nice video! Did you use Build Model from Dense Cloud in Metashape? If so, that can be the reason for the sub-optimal performance. In my experience Build Model from Depth Maps works much better (i.e. skipping the Dense Cloud stage completely).
If I'm not mistaken I used Depth maps. I'll give it another try though just to make sure.
Hey, nice comparison! A weird issue I'm having when running 3D Capture from Sampler on Mac is that I don't have the mesh construction option; instead, I only have Dataset & Alignment -> Post Process. I can't find any reference to that online, so I'm curious: have you encountered this?
Hmm, not sure what the issue might be. Maybe your computer doesn't meet the specs for the mesh reconstruction part (memory, CPU, etc.)? Better to contact Adobe's support.
You didn't specify what equipment you used to capture the images. I take it you used an iPhone 13/14 Pro with a LiDAR/depth sensor, which benefits Apple's Photocatch much more than the other two alternatives.
The images were captured with my camera, a Panasonic GH5. There's no benefit for Photocatch.
Thank you!
Excellent.
Jesus Christ, Photocatch looks amazing... too bad it requires a Mac.
no reality capture in the comparison... LOL
It's not available for the Mac... LOL
@@marvelousdecay leaving the industry leading solution out of a comparison makes it... less usable.. LOL
@@danilodjurovic8445 find a useable video then and watch that... LOL
@@marvelousdecay all jokes aside, you did your best and I appreciate it.
Adobe's half-baked 3D solution as usual. Just wait another 4-5 years and it will be perfect...so they will remove it.
As long as I keep my computer and myself free from the Apple filth, Substance Sampler is THE solution. The output UVs suck anyway and remapping would be needed in all cases from any of these apps, so the fragmented workflow won't go away. From what I got in my tests, whatever they spit out at 8K can fit at the same texel density in a well-packed 2K UV.
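That texel-density claim can be sketched roughly like this (the packing-efficiency numbers are my own illustrative assumptions, not measured values from any of these apps):

```python
def effective_texels(resolution: int, packing_efficiency: float) -> float:
    """Texels that actually land on the surface after UV packing:
    resolution^2 scaled by the fraction of the atlas that is used."""
    return resolution ** 2 * packing_efficiency

# Auto-generated, fragmented UVs often waste most of the atlas...
sloppy_8k = effective_texels(8192, 0.06)
# ...while a hand-packed atlas uses nearly all of it:
tight_2k = effective_texels(2048, 0.95)

print(sloppy_8k >= tight_2k)  # True: the 2K repack loses no usable detail
```

Under those assumptions, an 8K map at ~6% utilization carries about the same usable detail as a 2K map at ~95%, which is why repacking after retopology can shrink textures 16x with no visible loss.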
Why is everything Adobe makes so bad, even though they charge you an arm and a leg for it?
CRAPPLE. Let me go get an Apple product only to be able to use a subpar built-in camera and lens, then do the process on a 6-inch screen. Garbage of convenience! Bottom line: there is no way to get a good scan for cheap!!! If you can live with good enough, then go for it!!! Ooh, let's do a rock, easy, very arbitrary.