Hi Jonathan, it looks like you have complete control over this domain, so I would like to ask you for a bit of help. Given a single photo (though I have 2 more from different positions), and knowing with good accuracy the dimensions of the objects (car, person, house) in that scene, what tools could I try in order to find the relative distance of object X (whose dimensions I know) from the other objects? I know the focal length of the camera, the aperture size, and the sensor size. Intuition tells me that, given enough control points, it should be possible to compute the coordinate transform between the 3-D world and my 2-D image. I would be very grateful if you could point me in the right direction. Cheers
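For anyone wondering the same thing: this is the classic camera-resectioning / Perspective-n-Point problem, and OpenCV's `solvePnP` handles it directly (the aperture doesn't enter into the geometry; only focal length, sensor size, and image resolution do). A minimal sketch, assuming you've measured a handful of non-coplanar 3-D control points in some world frame and clicked their pixel positions; every numeric value below is a placeholder for your own measurements:

```python
import numpy as np
import cv2

# Known 3-D control points in a world frame of your choosing (metres).
# Placeholder values -- substitute your measured car/person/house points.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [4.5, 0.0, 0.0],   # e.g. along the car's length
    [4.5, 1.8, 0.0],
    [0.0, 1.8, 0.0],
    [2.0, 0.0, 3.0],   # a couple of points off the ground plane
    [2.0, 1.5, 3.0],
], dtype=np.float64)

# The same points clicked in the photo (pixels).
image_points = np.array([
    [1024.0, 1400.0],
    [2300.0, 1420.0],
    [2310.0, 1100.0],
    [1010.0, 1080.0],
    [1600.0,  900.0],
    [1620.0,  700.0],
], dtype=np.float64)

# Intrinsics from focal length and sensor size, assuming square pixels:
# fx (px) = focal_mm * image_width_px / sensor_width_mm
focal_mm, sensor_w_mm, img_w, img_h = 35.0, 36.0, 4000, 3000
fx = focal_mm * img_w / sensor_w_mm
K = np.array([[fx, 0.0, img_w / 2],
              [0.0, fx, img_h / 2],
              [0.0, 0.0, 1.0]], dtype=np.float64)

# Solve for the camera pose (world -> camera transform).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)

# Camera position back in world coordinates.
cam_pos = -R.T @ tvec
print("camera position (world):", cam_pos.ravel())
```

Once the pose is solved, every control point already lives in the same world frame, so the distance between object X and any other object is just `np.linalg.norm` of the difference of their world coordinates; the extra two photos you have would let you triangulate points you haven't measured directly.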
So I had an idea. I'm not sure how much you know about Stable Diffusion, but I found this video after having this idea while generating images of Rivendell with Stable Diffusion, and many of them came out amazing. This got me thinking: could technology like this build on some of those really cool results to make 3-D environments for D&D-like activities, storytelling, and video games?

Would it be possible to use something like this on an image of a street generated by Stable Diffusion, go into the somewhat "gimmicky" 3-D model you generated in this video, change the angle ever so slightly from the photo's original viewpoint so that it doesn't look too wonky, take an image from that angle, put it BACK through Stable Diffusion img2img with the original text prompt to clean things up, and then add THAT result to the list of "photos" for photogrammetry all over again? It would be a circular cycle: on each iteration you pick the Stable Diffusion results that look best / most like the original environment, essentially fabricating additional images to improve the photogrammetry, which then gives you more photos from slightly new angles, and so on, until you have enough "images" of the scene for traditional photogrammetry software to map out what looks like a legitimate 3-D model of your original prompt's result.
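That loop is concrete enough to sketch. A minimal outline in Python pseudocode, where every function name is hypothetical -- stand-ins for a photogrammetry package, a renderer, and an img2img call, none of which share an actual API:

```python
# Hypothetical pipeline sketch: none of these helper functions exist as-is;
# each stands in for a manual step or an external tool that would need to
# be wired together by hand.

def expand_scene(prompt, seed_image, iterations=10):
    photos = [seed_image]
    for _ in range(iterations):
        # Rebuild the rough model from everything accepted so far
        # (e.g. Meshroom or RealityCapture under the hood).
        model = run_photogrammetry(photos)
        # Nudge the camera slightly away from any existing viewpoint.
        novel_view = render_from_nearby_angle(model)
        # Low denoising strength keeps img2img close to the render.
        cleaned = stable_diffusion_img2img(novel_view, prompt, strength=0.35)
        # The human curation step: keep only results that match the scene.
        if looks_consistent(cleaned, photos):
            photos.append(cleaned)
    return run_photogrammetry(photos)
```

The hard part in practice would be multi-view consistency: img2img has no notion of the underlying geometry, so each pass can drift the scene's details, which is exactly why the curation step in the loop matters.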
I *_really_* like that phrase "reality capture". I'll be using it wherever I can. Thanks! #FeedTheAlgorithm.
It's the name of the software tho
This can be used to create blocking or reflection planes in CG workflows. The geometry will be good, and we can build on top of it to add more detail.
Hi, I'm new here from Max Novak's channel. I will learn Blender from you, bro!
Cool video idea. Have a great weekend :)
With fSpy and Blender you can take the original photo and turn it into a model (with the same quality as the initial animation) pretty quickly.
How?
@jrM5492 An example of using fSpy to match camera parameters and then modelling in Blender: ruclips.net/video/704NgSWO5fk/видео.html
Interesting ... 👍🍻
no module named numpy T.T
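That error just means NumPy isn't installed in whichever Python interpreter is running the script. A quick fix that targets the exact interpreter in use (which matters when the script runs inside Blender's bundled Python rather than your system Python):

```python
import sys
import subprocess

# Install NumPy into the interpreter executing this script.
# Depending on your setup you may need "--user" or elevated permissions.
subprocess.check_call([sys.executable, "-m", "pip", "install", "numpy"])
```

Or, from a terminal, the equivalent `python -m pip install numpy` with the right interpreter on your PATH.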