This is my last depth rant
- Published: 6 Feb 2025
- Head to squarespace.co... to save 10% off your first purchase of a website or domain using code CGMATTER
💝 ➟ www.cgmatter.c...
post also available on patreon, but i prefer the website anyhow:
www.patreon.co...
We need to try this on some 360 video mapped to sphere...
I had the same idea! Though I think those AI depth map algorithms could have a hard time deciphering depth from a 360 video, since it's a very wacky projection and they weren't trained on that. But I need to see it nonetheless
@@wozniakowski1217 Because I cbf looking through the literature: I remember seeing a couple of approaches. One involves figuring out what kind of 360 camera people use so the distortion can be corrected; the other tries to convert the segmenting model (which passes over the image and is used to guess depth) to handle a curved plane instead of a flat one. So yeah, definitely an area of research!
Okay I will train one on 3d video sometime
@@AirNeat random question. Any good panoptic datasets, outside of cityscapes? Like, with people and stuff in it?
@@harry1010 Maybe SUN360, Pano3D, or 3D60
Very in-depth video. I'll see myself out.
👏👏👏👏👏👏👏🎬
well done
nono, that was good come back
now you can exit *3-dimensionally*
It depthends, I think it was good too, hold the door.
A couple of small, but VERY important additions:
1. Any photo is a perspective projection, not an isometric one. So, instead of extruding along an axis, you should extrude towards a single point (the camera), which sits at some height directly above the center of the frame. The height itself can be found by eye, looking at the extruded mesh.
2. You should also undistort the image beforehand and re-apply the distortion after the render.
These two things together can make the 2.5D mesh MUCH more representative of the geometry of the real scene (and thus give more correct normals, shadows, VFX integration, etc.).
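A minimal numpy sketch of the extrude-towards-the-camera idea above, assuming a simple pinhole model; the function name and the FOV default are illustrative, not from the video:

```python
import numpy as np

def unproject_depth(depth, fov_x_deg=60.0):
    """Displace pixels along their camera rays instead of straight along
    the view axis. `depth` is an (h, w) float array of distances; the
    horizontal FOV is an assumed value you would match to your footage."""
    h, w = depth.shape
    fx = (w / 2) / np.tan(np.radians(fov_x_deg) / 2)  # focal length in pixels
    # Pixel coordinates centered on the principal point (frame center).
    xs, ys = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    # Standard pinhole unprojection: lateral offsets grow with depth,
    # so the mesh fans out from a single camera point.
    return np.stack([xs * depth / fx, ys * depth / fx, depth], axis=-1)
```

Because the x/y offsets scale with depth, a receding wall stays straight instead of shearing the way it does with a purely axial extrusion.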
yep i was just about to comment the projection thing. change the camera height by moving it until an object appears to have the same width at different depth levels
There's actually a very nice piece of free software for that, which will tell you the camera position and lens once you define two or three lines in your image. It's called fSpy
How would I extrude towards a point instead of an axis?
@@khaim0919 did you find a solution?
This is like a Corridor Crew video that gets straight to the point. 🎉
I actually really like just the depth map look without the original video tied in.
I can see this being a nice, easy way to do a Kitty Pryde effect with little to no masking. Just have your actor run through a door and hide the doorway with a plane that blends into or replaces the wall.
I had a similar thought. Also: "Oh, in some weeks/months we are going to get a Corridor video on this."
Current developments in computer graphics and AI are insane. I am currently generating photorealistic images on my MacBook Air M1 using Stable Diffusion to make a dumb meme. Hell, inpainting using AI has replaced Photoshop (GIMP really, I don't actually have PS) in a lot of applications for me.
The deal breaker for this method is whether you can actually get non-shrinking depth for a room; if you try it on a room in perspective, it kind of goes into a curved trapezoid shape
I'm not sure, but it should be possible to apply some exponential correction to the depth. Usually in CG, when we want to save linear depth, we compress it to a smaller range with a logarithmic conversion, so objects closer to the camera get more depth information compared to objects further away. So to get linear depth back, we need to apply the reverse conversion
@@xabblll exactly. Most monocular depth techniques are pretty good for close objects, but when it comes to perspective they break down quite quickly
@@khalatelomara So you mean make the doorframe the same relative size as the objects near the camera? Well if you know the parameters of the capturing camera, you could easily apply the inverse perspective transform on the depth map, and retrieve the original scale. You would need to know the aspect ratio and FOV, and if you wanted real-world scale you'd have to measure something in the real world scene and scale the resulting geometry appropriately. It might take some faff because the near and far clip planes generated by the ML algorithm are arbitrary (well I presume they are, or at least are rough guesses).
@@merseyviking Most monocular depth estimators are trained to output relative depth, but a few can do 'absolute' metric depth estimations. Also, the most bleeding-edge model I know of simultaneously calculates camera intrinsics.
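Following on from that thread: a sketch of fitting the relative output to metric depth from a couple of real-world measurements, assuming (as the comment above suggests) the model is accurate up to a scale and shift in inverse-depth space; the function name and sample format are made up for illustration:

```python
import numpy as np

def fit_metric_depth(rel, samples):
    """`rel` is the model's relative depth map; `samples` is a list of
    ((row, col), true_depth_in_meters) pairs for points measured in the
    real scene (at least two). Fits inverse_depth = a * rel + b by
    least squares, then inverts back to metric depth."""
    d = np.array([rel[px] for px, _ in samples])      # model values
    inv_z = np.array([1.0 / z for _, z in samples])   # true inverse depths
    a, b = np.polyfit(d, inv_z, 1)                    # slope, intercept
    return 1.0 / np.maximum(a * rel + b, 1e-6)        # metric depth map
```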
in the end, you could de-light your scene using Ian Hubert's trick, to make it absolutely de-lightful!
What trick?
@@Dude_Blender Dividing projected textures by the light values of an HDRI that was taken in the same place lets you flatten the image and remove shadows and highlights. InLightVFX had a good video called "How Ian Hubert Hacked VFX (and you can too!)" that goes over the whole process. It's REALLY cool.
@@omgbutterbee7978 thanks mate!
@@omgbutterbee7978 You can fake the surrounding lights as well if you don't have an HDRI, but an HDRI is simpler, if you can make one.
Somehow Ian was doing that 18 years ago, before HDRIs were even a thing. The man is a wizard.
combine this with delighting and you have crazy possibilities for dynamically lighting a scene
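Since this thread sketches the whole de-lighting trick in words, here is a rough numpy version of the divide step, assuming both inputs are float images in linear color space (not sRGB) with matching shapes; the names are illustrative:

```python
import numpy as np

def delight(image, light, eps=1e-4):
    """Divide footage by a projection/render of the scene lighting
    (e.g. derived from an HDRI captured on location). Shadows and
    highlights cancel out, leaving a flat, albedo-like texture that
    can then be re-lit with CG lights."""
    return np.clip(image / np.maximum(light, eps), 0.0, None)  # eps avoids /0
```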
FYI you can open a command prompt for a specific folder by clicking into the path at the top of Explorer, typing cmd, and pressing Enter.
Yep! Also if you have Windows Terminal installed, you can right click on the window and select "open in terminal"
How did I go decades of using Windows without knowing this. Thank you!
brilliant! thanks :)
THANK YOU
I'VE BEEN USING LINUX SO LONG THAT IT FEELS WRONG TO NOT HAVE AN "open in terminal" BUTTON, I DIDN'T KNOW LOL
Shift + right click in explorer also gives an option to "Open PowerShell window here" (or Command Prompt in earlier versions)
Dude that's wild. You're right that the applications are plenty.
I'm the kind of person that can't think of any when people say that. Can you give us some examples?
I don't think this really has any applications
this video should have MILLIONS of views!!! This could CHANGE youtube VFX forever, and it's so accessible that ANYONE with a decent PC rig can take advantage of it. I'll definitely be looking into how to use this for my own projects going forward
Anyone else remember Photoshop having a depth map filter since the extinction of the dinosaurs, only for it to be removed with the introduction of AI? 🤔
My dude I've been looking for how to make detailed 3D objects from depth maps like this for a project totally unrelated to yours, you just saved my blender-inept ass, thanks a lot.
Ian Hubert will have fun with these tools 100%
Exactly what i'm thinking
If you're already using Resolve to deflicker, you can also generate your depth map and relight within resolve while you're at it
Anyone else remembering the Doctor Who episode where they showed off Time Lord artwork which was a single moment of time captured into art. A full 3D model of that moment in time and space. This feels like a step towards that.
3blue1brown recently made an amazing video about holograms; that's exactly what you're describing.
"If this can be done with a photo, why not do depth on a video?" As far as I know DaVinci Resolve can do this quite easily (at least in the Studio version), so I would assume some other video editing software can do this as well...?
Would you be surprised if Adobe didn't? 😢
Only workaround I've found is exporting footage as a PNG sequence and then running a batch in Photoshop using the blur neural filter with depth output checked. Is it janky? Yes. Is it accurate? No. Does it work? Barely. Am I an idiot for still using Adobe? Absolutely.
@@tomcattermole1844 No, why should I be surprised...? Never thought Adobe was the ne plus ultra. Maybe there is other software out there than just Adobe? I don't know, I'm not using all video software that exists... I just said DaVinci Resolve can do depth on a video and it probably isn't the only software.
@@gordonbrinkmann unfortunately Blackmagic knows exactly what their customers want and puts in the effort to implement it. Most other software doesn't have customers who want the features, or the vendors don't want to put in the effort to implement them.
IIRC, this is a somewhat common thing to do in Davinci Resolve so you can use it as a mask for various adjustments
Is it possible in Premiere Pro? I've spent a good few hours over the last few days tediously masking and often getting it wrong
@@this_is_mac I would imagine you could use a very similar technique to this and import the video to premiere pro to use as a mask
0:45 You can just click into the address bar in Explorer and type cmd and press enter and it will open the command prompt from that folder.
or even just right click -> open terminal
or use Linux
This is SO sick!! I really wish you showed that last clip's full render in the video
DaVinci also has another useful feature, the color stabilizer. I don't know if it's in the free version or not, but it can fix exposure shifts like this; I've used it many times when asked to use videos shot without professional equipment
A horror/exploration game with a mechanic where your vision only works well at a certain distance would go hard!
1:59 he started talking like that and i subscribed.
I think I just saw my favorite video of .. the month... the semester...
Awesome thanks
dude. this was maybe the second or third best tutorial video I've watched. I don't remember the first two, so this is the first one now.
Gotta say - as a non-native speaker I was surprised I managed to get 100% of what you said despite the blazing fast speed, and I believe it's largely due to the fact you're pretty darn clear with your pronunciation, so kudos for that ;)
If only we'd pressed on with light field cameras; they could seriously elevate things
Dude just dropped a nuclear bomb💀
it's like every time you reappear there's some fun shit on blender to do
Fun fact, if you want to open command prompt in a specific folder, you can type cmd into the file explorer path, and it'll open cmd in the current folder
Your profile picture evokes so many great childhood memories (:
@@InterPixelRUclips same, do you by any chance remember the name of that game
@@slavsit7600 It's called "Cut the Rope".
@@uusfiyeyh thx
Distorting/projecting the z extrusion to the camera frustum would be a good addition to this workflow. I used this back in the day to convert 2D to 3D stereoscopic
I was using a similar method to this for image-to-video for a while now. Essentially doing small camera pans and dollies into the scene to make it look a little fancier than just a scrolling 2D image. I was always wondering how stable the technique would be when done on a video instead, and I have to say, it doesn't look too shabby. I think if you're trying to relight the scene the artifacts will definitely be the biggest problem, but other than that it could be quite handy for some quick and dirty VFX, or enhancing a video shot on a tripod with some subtle realistic 3D camera shake.
You can use face tracking and an application to create simulated 3D videos, so you can look around the computer screen and the 3D model tilts. It's weird but also kinda cool
Movies are gonna be crazy with this one.
Just FYI, you can type "cmd" into the address bar in explorer and it will open a command prompt at that location
i like the bit where you did the thing with the thing
I was just experimenting with marigold and was thinking about trying out other models to see if they work better with video!
Great video!
I can't wait till computers are good enough to do this live
this is neat, but it looks like the depth texture you get out is not linear. It may need a logarithmic or some other remap curve so that things like your office wall appear flat when displaced
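If the model outputs normalized inverse depth (disparity), as many monocular estimators do, then a remap along these lines recovers something closer to linear depth; this is a sketch, and the near/far distances are placeholder guesses you would tune per shot:

```python
import numpy as np

def disparity_to_linear_depth(d, z_near=0.5, z_far=50.0):
    """`d` is a normalized inverse-depth map in [0, 1] (1 = closest).
    Interpolate in disparity space between assumed near/far planes,
    then invert to get linear depth, which should flatten planar
    surfaces like walls when used for displacement."""
    inv_near, inv_far = 1.0 / z_near, 1.0 / z_far
    return 1.0 / (inv_far + d * (inv_near - inv_far))
```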
this guy’s brain is not normal
You meshed yourself! That is so cool!
3:50 Well, the reason I guess it does this is that the AI just makes the closest depth pure white and the furthest depth pure black. So if you want to make the scale consistent, maybe you could pick two stationary points and scale the values so that they stay the same.
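A sketch of that two-point idea, assuming the depth frames are float arrays and you have hand-picked the (row, col) coordinates of two points that stay static in the scene; real footage would also need the points tracked and the degenerate case of equal samples handled:

```python
import numpy as np

def stabilize_depth(frames, p1, p2):
    """Rescale every frame so the two reference pixels keep the values
    they had in the first frame, cancelling the per-frame white/black
    renormalization that causes depth flicker."""
    ref_a, ref_b = frames[0][p1], frames[0][p2]
    out = []
    for f in frames:
        a, b = f[p1], f[p2]
        scale = (ref_a - ref_b) / (a - b)  # match the spread between points
        offset = ref_a - scale * a         # match the absolute level
        out.append(scale * f + offset)
    return np.stack(out)
```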
I love that you're on the Davinci Resolve/Studio Train
Segment Anything gives you high-resolution outlines. You could dice based on camera distance with adaptive subdivision... I actually 'started' a plugin about 2 years ago to do all this...
huh??? THE LAST??? I NEED MOREE
Isn't the top edge caused by the image being wrapped, so it's actually interpolating the bottom row of pixels?
Shame you missed the opportunity to show the power of this tool in combination with Ian Hubert's de-light. Would love to see it in a future video!
Since the z-depth texture is greyscale, you don't need it at 2K resolution. It can be 2-4 times smaller than the original resolution and you won't notice.
Totally, I think the best would be to match the depth map resolution to the mesh resolution
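For example, a tiny Pillow sketch of that downscale; the filenames and the 4x factor are arbitrary assumptions:

```python
from PIL import Image

# Depth is greyscale and the displaced mesh is the real resolution limit,
# so shrink the depth map to roughly the mesh's vertex grid first.
depth = Image.open("depth_0001.png").convert("F")  # 32-bit float greyscale
small = depth.resize((depth.width // 4, depth.height // 4), Image.BILINEAR)
small.save("depth_0001_small.tif")                 # TIFF keeps float data
```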
Woah, i finally understood what depth is and how i can use it
now let's see it rendered
Shows like 3 mind blowing things.
Says: "That's all I got for yah"
Crazy good video.
imagine using this with asynchronous timewarp and motion tracking to make videos feel smooth. interpolation will still be smoother I think, but it doesn't scale with the monitor's refresh rate
Can you separate moving objects from static ones? As is, your body creates a constant shadow, since it blocks all light to the right of the cube until you "walk" past it.
And that's how Minority Report videos got started.
my thought exactly. amazing that we are witnessing that future materialize
Thank you so much this is Epic, I’ve been looking for a way to make 3D titles in Videos more realistic. I will give this a try!
Not sure I understood a whole lot of that but god damn was it fascinating to watch.
can you consider camera coordinates and do perspective warping?
This guy is like the Vincent Van Gogh of 3d art. He's way ahead of other Blender tutors. No one is noticing. Many years from now they'll get it and sing his praises.
Can you do a video on how to do a parallax effect on art pieces and old photographs and stuff using this technique? Normally people cut out parts of the image one by one and then have the camera kind of move through the scene. I think that might sometimes work better than this specific technique, but it would probably involve separating and clipping certain regions to single planes as separate objects, instead of the single plane you have here... No idea how to do it...
Results might be better if you project the depth towards the camera, rather than on a plane? I would be curious how that would affect the lighting and reflections.
YESS!! This will be so useful!
Sync multiple videos from different angles, solidify, and boolean-intersect to let you move the camera behind stuff?
Lots of people mentioning the cmd in the address bar trick, but did you know you can open a Powershell terminal via shift + right click context menu on any folder? Cmd is old and busted, Powershell is the new hotness.
So much input my head is dizzy lol
Crazy Video man!
I think that the depth is being interpreted as linear while it's being saved as sRGB. Or something like that. It would explain why the background is so flat.
2:10 Hobbyist tip for you, you can just set your texture to "extend" instead of repeat in the image texture node.
Very cool experiment, would love to see it in its true glorious fidelity
this is cool, but the problem with the geonodes setup is that it does not account for the perspective of the video. The geometry should get larger as it gets further away.
Is this how they did the blue cube in the minecraft movie trailer?
This is awesome. Thank you!
Are depth maps logarithmic? It seems like the closer something is to the camera, the bigger the difference in z-coordinate for an equal distance change.
Can you make a stereo video from a regular video?
How difficult would multiple image sources be (i.e. another camera in the hallway)?
this is sooo fucking cool. for games you can use a post-processing injector called ReShade that uses a depth buffer as well, this is so cool. y'all are going to get close to a real-time AI game filter. that would be awesome to see.
I wonder if this will be good enough to get rid of keying and/or rotoing....
No, it looks horrendous and complicated. He just spams semi-professional videos about topics no one else really makes videos about. But at conventions this year (FMX for example) I saw more impressive uses of AI and code for depth and modelling. So this won't be useful unless it's packed inside a program, which he probably can't do.
Wait, is this how they were post-converting 2D movies into 3D?
I'm not sure why nobody has done this exact same thing with a 360 Kinect?
it's been done, but the resolution of the Kinect's depth sensor isn't very good
Ian Hubert is going to get some great use out of this trick no doubt
didnt think of these applications. Omega cool brah
so you can turn a video into a 3D video model? or a 3D model video? What... my mind just broke... next episode??? interacting with objects like the square I put in the scene? turning the scene into a mesh and running simulations???
have you tried Apple's Depth Pro model? been happy with some of the results
What if we combine depth-analysed videos with image-completion ones and use them to create 3D videos for VR
I have a question: why don't you use the depth map generator included in DaVinci directly? Is there any practical reason? very good video, thanks and best regards
You sir are brilliant. Thank you for your brain.
How did you download "Depth Anything V2"? I am currently struggling to download it correctly, because I don't know what program I am supposed to use to run it!!
I tried to follow this video not being overly experienced with Blender, however when I got to around 2:06 in the video, my image isn't showing any colour and is just the greyscale model. Am I missing something? I feel like it's probably just a button I've accidentally tapped or something lmao
I'm having the same problem right now: my image that should be in colour is B&W, and connecting the depth map to the Combine XYZ and then to the position offset doesn't seem to do anything. Were you able to figure out what you needed to do?
@@artie_greene no unfortunately not, it seems to be an issue with my blender settings or something
found a fix?
@@vs22q no
YO! Finland hailing you - thanks bro!
finally some in-depth video
Confirmed: video was not flat.
That's not really a rant, it's a tutorial. There are always so many cool tools being released with no GUIs, just a command line interface through Conda or some other Python shell. You don't even have to mess up your local Python install. The way to be 1-2 months ahead of the game is to use these raw models. If you can afford a GPU with 48GB VRAM (or two Titan RTXs with an NVLink), then you can practically do HD (or 2K). Otherwise you may have to upscale your output.
Depth flicker? Well, yeah. Just give it a few weeks and they'll probably add averaging to the depth generator model itself, and it'll all be fine. But the easier they make it, the more people will use it, and you'll have no first-adopter advantage. UYVsoft made a depth generator almost 15 years ago that could do video processing, but their tools were astronomically expensive. Finally, depth gens are available for the masses.
OFlow in Nuke has been able to do this for a couple of years, but there the passes also flicker and need the Resolve treatment. What comes out is inferior to modern AI models.
Does anyone know if IMF supports the ability to store depth data as an image sequence as a CPL composition with other assets used for video and audio?
Could this be used to make a green screen 3D studio backdrop? 😎
I can hear the Adobe Podcast
just change Linear to Cubic in the image texture node to remove pixelation
I had wondered if this would work but didn't know how to do video depth. cool
this is so niche and genius
this can be a great weapon for 3d artist
I like your channel a lot
Could you create CG fog in the background of scenes with this?
You can simply use DaVinci Resolve's depth map filter to avoid the quirky workflow of downloading cryptic packages that are driven via the command prompt and use tons of RAM.