3D Programming Fundamentals [Z-Buffer] Tutorial 11
- Published: 28 Sep 2024
- This series teaches the fundamentals of 3D graphics theory. In this video we implement a Z-buffer (aka depth buffer).
Tutorial wiki page:
wiki.planetchil...
Support my work!
/ planetchili
www.planetchili...
i saw this, scrolling through youtube, instantly, my mouth watered, i have been waiting for this episode
The wait is over.
ChiliTomatoNoodle not to be impatient or anything, but when are we gonna get to using all the shit dx gives us? And when we do, will we start a new framework or anything, since most of our code becomes useless once the gpu does it for us?
When this is finished brah. And when we do, we will be starting with an empty Visual Studio solution.
again, awesome video. also it's great to find devs making tutorials from the fundamentals all the way to advanced topics. Great channel.
If you like this stuff, you'll like my new series which launches in a week or 2 covering hardware-accelerated 3D :)
Did I hear right? Hardware 3D series??!?! OMG I am so looking forward to it! Keep up the good work! :P
It's gonna be a good time.
just found this ... and the f-bombs really make this tutorial worth watching.
Can't imagine a world without Chili. Excellent tutorial btw!
this may be the easiest topic in this series lol. Thanks Chili!!!
The real question is... what tutorials does Chili watch?
Other tutorials use chili as reference.
I love these tutorials
Great tutorials chili :) may I ask why you store the z-buffer per pipeline? shouldn't there be one z-buffer, similar to the context we putpixel to? that way it's shared between pipelines
Noice!
The weird depth values that DX/GL store in the depth buffer are not 'exactly' the "reciprocals" of the depth; they are also clipped by the near and far clipping planes, so it's just the depth spanned within the view frustum. It's the same value used before the perspective divide for clipping against the near/far clipping plane(s). When rasterizing, you can also "raster-clip" pixels along the Z axis, meaning get rid of pixels whose depth is too far away (behind the far clipping plane). In DX that z-span of the depth space ends up being mapped to 0->1, while in GL it's -1->1. In both cases you can just "clear" the depth buffer by setting it all to 1 (instead of infinity). And the non-linearity of that space is a "feature", not a "bug": even with 32 bits you can often get noticeable Z-fighting if you just store the depth linearly in the depth buffer.
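To make that concrete, here's a minimal C++ sketch of the DX-style mapping from view-space z to [0,1] depth (the function name and exact form are my own illustration, not code from the video; it's the standard projection formula depth = far*(z-near) / (z*(far-near))):

```cpp
#include <cassert>
#include <cmath>

// Maps view-space z in [nearZ,farZ] to DX-style NDC depth in [0,1].
// Note the division by z: the mapping is non-linear, packing most of
// the depth precision close to the near plane.
float NdcDepth( float z,float nearZ,float farZ )
{
	return farZ * (z - nearZ) / (z * (farZ - nearZ));
}
```

With near = 0.5 and far = 100, the geometric middle of the frustum already maps above 0.99, which is exactly the non-linearity described above.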
have you done all of this once in the past or are you figuring all of this out as the series goes?
Somewhere inbetween those. I generally do some R&D coding a month or so in advance to make sure everything clicks and to come up with something that will fit with the overall arc of the lessons.
Just wanted to ask, how do you resolve the issue of vertices reaching a depth of 0? When I tried making my 3D engine, things went crazy during the perspective divide when depth values reached 0.f or close to it. An example would be a terrain: values near the camera are going to be positive, negative, and maybe sometimes 0 if you walk over a vertex. Which means at least one of the three vertices is going to be at a depth of 0 and the other two are going to be positive, negative, or both.
Near and far plane clipping eliminates that issue.
ChiliTomatoNoodle are you going to cover that as well ?
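For anyone hitting the same divide-by-zero: the idea of near-plane clipping is to cut triangle edges against the plane z = nearZ *before* the perspective divide, so no vertex with z <= 0 ever gets divided. A minimal C++ sketch (the Vec3 struct and function name are hypothetical, not the video's clipper):

```cpp
#include <cassert>

struct Vec3 { float x,y,z; };

// Intersects segment (a,b) with the near plane z = nearZ.
// Caller guarantees the edge actually crosses the plane (a.z != b.z).
Vec3 ClipToNearPlane( const Vec3& a,const Vec3& b,float nearZ )
{
	const float t = (nearZ - a.z) / (b.z - a.z); // parameter along the edge
	return {
		a.x + t * (b.x - a.x),
		a.y + t * (b.y - a.y),
		nearZ // clipped vertex sits exactly on the near plane
	};
}
```

A full clipper also has to rebuild the triangle (one or two triangles depending on how many vertices are in front of the plane), but the edge intersection above is the core of it.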
i love you - patrick star
great video
yes... but how do I get the z coordinate of every pixel of the triangle to add it to the z buffer?
I think it's the depth of the triangle face (that the pixel was generated from) from the camera, so like distance calculation between camera and average vertex location of that face
He already did that in the previous videos.
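Small correction to the guess above: the per-pixel z is not the face average; it's interpolated from the three vertex depths at each pixel. For perspective correctness you interpolate 1/z linearly in screen space and then invert. A hedged C++ sketch using barycentric weights (w0+w1+w2 == 1; names are mine, not the video's exact code):

```cpp
#include <cassert>
#include <cmath>

// Perspective-correct depth at a pixel inside a triangle:
// 1/z interpolates linearly in screen space, so blend the
// reciprocals with the barycentric weights, then invert.
float InterpolateDepth( float z0,float z1,float z2,
                        float w0,float w1,float w2 )
{
	const float invZ = w0 / z0 + w1 / z1 + w2 / z2;
	return 1.0f / invZ;
}
```

At a vertex (one weight = 1) this returns that vertex's z exactly; in between it gives the correct hyperbolic falloff rather than a flat average.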
Great Video! I know this is a bit far off, but if there were meshes with transparent materials, how will the z buffer be used there? or is a different algorithm used?
I ran into this at my internship a while back. If you're still wondering, turns out there's something called depth peeling that you have to do for transparency. I don't know how it works, and I didn't end up needing to, but I think that's the algorithm you're looking for.
I don't know what to fucking comment, so I'm just gonna do it to help the video.
you're a fucking legend
for my pp high precision floating point computation is not needed :(
Oh the woes...
This was motherfucking easy to understand! Thank you.
how do you actually get the depth values? as in, when you pass the vertices to the draw call, aren't they cube-space vertices and so don't have "depth"?
student of cs, came here stoned, having great time
how to interpolate z values per pixel ?
Yes! New installment of my favorite series, thank you Chili!
You are welcome broseph.
This was my first video of yours and I have to say this was one of the best educational videos I’ve seen on here
Exactly how I imagined it as of 56 minutes ago
Awesome, well explained
loved it
Trying to understand zbuffer.
Am I right that the depth buffer stores X, Y, and Z coordinates for every single pixel in the view frustum?
So it is not like a simple 2d texture, but a texture with Z coordinates?
From what I understand, the depth buffer stores only Z, because Z means the depth. If you create a buffer that stores XYZ, I believe it would be called a world position buffer. Also, I believe it should be possible to extract the world position from the Z-buffer.
@guilhermecampos8313 thx for the reply! Think you're right
Potatoes.