I would definitely love to see more 3dfx-related content, especially a Glide programming video. Great stuff.
I love Voodoo gfx. Something just magical about them.
I love that old Tomb Raider 3 box. It had been a long time since I'd seen one. Memories.
Keep in mind the Tomb Raider engine, as used in the first four games, has its origins on the Saturn and the PlayStation, and had to keep targeting the PlayStation for all of those games. Neither of those systems has a Z-buffer. Levels in this engine are built on a grid of square tiles, and the renderer simply selects tiles starting from the furthest ones that don't exceed the fog distance and works inwards in back-to-front order, flood-fill style. If there are several entities in a tile besides the level geometry, those are simply sorted among each other. Each tile's walls and ceiling/floor form a convex shape, so you can draw them in arbitrary order before drawing the entities in the tile. Complex entities are built as collections of convex chunks, i.e. you only need to sort the convex chunks, and it doesn't matter what order you draw the individual polygons in within a chunk: all you need to draw a convex object correctly is back-face culling (rejecting polygons based on surface normal or winding order). So the whole sorting effort never becomes a big overhead; it's really just a handful of comparisons here and there. The sorting algorithm isn't very smart, so without a Z-buffer things sometimes pop through each other, but the level designers mostly avoided things which are long and horizontal, so it doesn't break too badly. There is no individual triangle sorting at all.
People always overcomplicate things, don't they? But simple hardware invites simple solutions. The best the PlayStation hardware design can do to help you is let you represent a render queue as a set of buckets (the ordering table), which amounts to a radix sort, but it's still faster if you don't actually have to sort much at all, let alone several thousand of something.
Oh, also, thanks to the grid they didn't really have to playtest the game: it was always obvious to the level designer what sort of jump you could make and how. It's a low-budget British game, OK.
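As an aside, the "all you need for a convex object is back-face culling" point above is easy to make concrete. A minimal sketch in C of a winding-order test on projected screen-space vertices; the names, struct and sign convention are mine for illustration, not from the actual Tomb Raider code:

```c
#include <stdbool.h>

/* Hypothetical screen-space vertex; x/y are post-projection pixel coordinates. */
typedef struct { float x, y; } Vertex2D;

/* Returns true if the triangle should be skipped (facing away from the camera).
 * Twice the signed area of the projected triangle tells us the winding order;
 * with a clockwise front-face convention, a counter-clockwise triangle is a
 * back face.  The sign to reject depends on your screen-Y direction and
 * front-face convention, so flip it if your setup differs. */
bool is_back_face(Vertex2D a, Vertex2D b, Vertex2D c)
{
    float doubled_area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    return doubled_area <= 0.0f;
}
```

Because every face of a convex chunk either fully faces the camera or fully faces away, culling the back faces leaves a set of polygons that cannot overlap each other on screen, so their draw order doesn't matter.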
Very interesting information. I hadn't realised that it was released for Sega Saturn and Playstation at the same time and that this would influence how the game engine was designed so heavily. Thanks for filling in so many details. That was a very interesting read!
@@PCRetroTech Oh, it was designed for the Saturn, so the first title is 100% free of triangles and UV mapping: quad-only, and every texture spans a whole quad! All the other targets are just there because it wasn't too expensive to port to them. At some point triangles and UV mapping happened, but I don't remember when.
@@SianaGearz You should add this information to Wikipedia, in the technical section of the Tomb Raider article. Otherwise it will get lost someday.
@@OpenGL4ever This information doesn't meet Wikipedia's guidelines for proper sourcing.
@@SianaGearz TIP: To include information in Wikipedia, all you have to do is cite any book as the source. It doesn't have to be in the book; nobody checks it anyway. It's enough if the book title you choose sounds like it would be in there.
EDIT:
Therefore, any book about 3D engines or game programming will do.
However, sometimes it is accepted even if no source is given.
If no one mentioned this previously, the texture mapping unit is pronounced "T-Rex" (like Tyrannosaurus Rex) rather than "trex". Other than this minor thing, great video! 🍺
My first 3D card. I couldn't believe when I played GL Quake with this.
Same here, I got one as soon as they were available. I remember being in awe the first time I played GLQuake, Tomb Raider 1 (I think) and later Unreal. It was mind-blowing.
Found your channel while searching through some Compaq 386 videos and now found this Voodoo feature set video, just what I was looking for! I started computing after 2010, so I missed out on these exciting times of computer innovation and discovery, but channels like yours are a godsend to me to learn the history and relive a past that I never had. Thank you for sharing your knowledge and keep up the good work.
Now this is retro tech... I owned this back in the day, when graphics memory was measured in MB.
I'm a simple guy, I see classic Tomb Raider, I click!
My first GPU.😘
I've experienced three 'wow' moments in my gaming lifetime; moments where you find yourself inexplicably smiling as you witness something breathtakingly new and excitedly think of where it will lead.
1) Playing X-Wing on my 1st PC (DX2/66). Going from my Amiga's predominantly 2D/isometric games to Gouraud-shaded 3D polygons was mind-blowing to teenage me.
2) Getting my Voodoo 1, and experiencing smooth FPS gaming at VGA resolutions!
3) Playing my first VR games on my Oculus Rift (First Contact, and Robo Recall)
I wonder what the next will be?
AGI? AR? BCI?
I never got into VR and probably never will. There's been many iterations of it and none of them really went massively mainstream the way computing itself did.
I suppose we might get to the point where we don't need to write new games because an AI is capable of just generating new worlds and game mechanics to explore. I guess that'll be the end of game companies.
What about Rebel Assault?
The first good interactive CD-ROM game with astonishing on-rails graphics. Just think of the flyby over the Star Destroyer.
@@PCRetroTech I think VR needs higher resolutions and a price tag below 200 dollars to conquer the market.
I'd love to see your take on some arcade hardware of that era. What Sega was doing with their model 2 and 3 was amazing!
I don't have any arcade hardware unfortunately. But yeah, I'm sure it would be a very interesting investigation.
With depth buffering off, the game is trying to sort the triangles in software, but isn't going to get per-pixel correct results, so you'll see popping like the triangle you saw above the door.
And absolutely do some programming on this. The Glide API is one I've not experienced myself!
I think it would be fun to see the programming too! I found an old Glide dev kit years ago and I can compile their sample code in VC++ 6 - and it works - but it's really simple examples like initialize card, clear screen, fill with specified color. I'd love to see the code for something a bit more interesting!
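For anyone curious what that "initialize card, clear screen" level of sample looks like, it really is only a handful of calls. A rough sketch from memory in the Glide 2.x style; treat the exact constants and signatures as approximate and check them against the SDK headers:

```c
#include <glide.h>   /* 3dfx Glide 2.x SDK header */

int main(void)
{
    int frame;

    grGlideInit();                      /* start up the Glide library    */
    grSstSelect(0);                     /* select the first Voodoo board */

    /* Take over the display: 640x480 at 60 Hz, ARGB colour, origin at the
     * top left, 2 colour buffers (double buffering), no aux buffer.
     * This is also the moment the pass-through relay switches the monitor
     * from the 2D card's signal to the Voodoo's output. */
    if (!grSstWinOpen(0, GR_RESOLUTION_640x480, GR_REFRESH_60Hz,
                      GR_COLORFORMAT_ARGB, GR_ORIGIN_UPPER_LEFT, 2, 0))
        return 1;

    /* Clear to a solid colour and show it for a couple of seconds. */
    for (frame = 0; frame < 120; frame++) {
        grBufferClear(0x000000FF, 0, 0xFFFF);   /* colour, alpha, depth */
        grBufferSwap(1);                        /* swap on vertical retrace */
    }

    grGlideShutdown();
    return 0;
}
```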
I have started a series on this on the channel. There will definitely be another video in that series in the not too distant future as well.
How could you not do any programming with this card? This is the meat of your channel and why I'm here.
Ha ha! Yes, I will have to do some I think.
@@PCRetroTech Certainly all the ones I played with back in the day were 'tetchy' especially on the subject of clipping. The card will do some clipping, but pushing it too far has a tendency to crash the card hardware and that would lock up the CPU in some horrible way that meant a power-cycle. Every test-run was a nailbiting adventure...
@@katielucas3178 I wonder if there was some kind of internal buffer overflowing. The docs for the 3Dfx list all the things that can be done simultaneously, but I bet there are ways to make it fail to keep up.
There were 386 likes on this video and it is such a nice number. But then, I remembered that 387 is also a nice number that is quite relevant, so I clicked like wholeheartedly.
Since 487 is not a thing, I wonder who will be the one breaking the 486 barrier.
Well every like counts. But thanks for being the 387th!
We need another 86 more to reach 586 now.
I don't know if somebody already said it, but if you're still curious what happens when the Z-buffer is disabled, here I am to help. The Z-buffer is a per-pixel depth buffer, which is used to sort pixels by depth. But you can still sort by depth per object and per triangle. So when the Z-buffer is disabled, the game starts to sort triangles by depth rather than pixels. Occlusion still works, but not always. Originally a lot of 3D games sorted by triangle depth, especially on hardware that doesn't have a Z-buffer, like the Sony PlayStation 1.
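To make the per-pixel vs. per-triangle distinction concrete, the depth-buffered case boils down to a test like this in the innermost pixel loop (illustrative C only, nothing to do with the actual Voodoo internals):

```c
#include <stdint.h>

/* Hypothetical per-pixel depth test: the Z-buffer holds the depth of the
 * nearest thing drawn so far at each pixel, so visibility is resolved pixel
 * by pixel regardless of the order the triangles arrive in. */
void plot_pixel(uint16_t *color_buf, uint16_t *z_buf, int pitch,
                int x, int y, uint16_t color, uint16_t depth)
{
    int i = y * pitch + x;
    if (depth < z_buf[i]) {       /* nearer than what's already there? */
        z_buf[i]     = depth;     /* remember the new nearest depth    */
        color_buf[i] = color;     /* and write the pixel               */
    }
}
```

With per-triangle sorting there is no z_buf at all: whole triangles are ordered once and later writes simply overwrite earlier ones, which is why interpenetrating or near-coplanar geometry can pop.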
Thanks. Subsequent to making the video I figured this out. I've even started programming the Voodoo card on the channel.
@@PCRetroTech That's really cool stuff. I'm interested in Glide programming myself, but I never had a Voodoo-based PC. I'm curious how it's better or worse than OpenGL 1.0. I do know that Glide uses pretty much the same API structure as OpenGL; no surprise knowing that the people working on the Voodoo came from SGI. Anyway, looking forward to those videos, and I'll leave my comments there. Thank you.
@@homersimpson8955 It's definitely more primitive than OpenGL which is more of a scene renderer. Glide is more focused on the hardware itself and you have to build your own 3D scene infrastructure on top of it.
I have that same pc case
Did I hear Tseng Labs ET 6000 2D card? I had this one with Diamond Monster Voodoo 1 for a while.
2:58 Yepp, I did hear it right :)
Yep, that's the one. A little hard to come by these days unfortunately.
One possible reason why turning Z buffering off didn't make that much difference is that the software is doing depth sorting anyway. This sorts the polygons (here triangles) from nearest to furthest, and uses that to avoid drawing polygons at all if they're completely hidden by other polygons, which can make a big difference to performance. The problem with depth-sorting is that each polygon covers a range of depths, so for a pair of overlapping polygons neither one is unambiguously deepest. This is obviously a problem when two polygons cut through each other, so each is nearest for certain pixels, but it also affects polygons that don't cut through each other, because the way a single numeric depth is assigned to each polygon (I don't know what they did, but let's say e.g. using the mid-point of each triangle) turns out to be a bad choice. Z buffering is kind of a back-up occlusion method that works per-pixel rather than per-whole-polygon, and fixes (some of) the details that depth-sorting got wrong. If depth sorting happens to do a perfect job, Z buffering is left with nothing to do.
Of course Z buffering can also do the whole job, especially if some arrangement has been made to minimise overdraw even without depth sorting - things like using binary space partitioning on the level map so that overdraw only really occurs where there's transparency or movable objects anyway. I wouldn't like to guess whether depth-sorting is done only when Z buffering is disabled, or only done at object-level precision for movable objects such as characters being placed in the scene or whatever.
Often in programming you use one algorithm at a time to solve one problem - using two different algorithms to solve the same problem at the same time is generally just unnecessary complexity that slows things down. 3D graphics is complex enough to make that guideline just plain wrong. In addition to the use of spatial data structures, depth-sorting and Z-buffering there's also backface-culling (not trying to draw polygons that face away from you), and probably more that I don't know about, all targeting the problem of only drawing what's visible.
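A minimal sketch of the per-polygon depth sort being described, using the triangle centroid as the single representative depth (one plausible choice among several, as noted above; the structures and names are made up for illustration):

```c
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 v[3]; } Triangle;   /* already transformed to view space */

/* Representative depth for the whole triangle, assuming larger z means
 * further from the camera in view space. */
static float centroid_depth(const Triangle *t)
{
    return (t->v[0].z + t->v[1].z + t->v[2].z) / 3.0f;
}

/* Sort furthest-first so the painter's algorithm can draw back to front.
 * This is where the ambiguity lives: two triangles that overlap in depth
 * can end up in the "wrong" order no matter which representative you pick. */
static int by_depth_desc(const void *pa, const void *pb)
{
    float za = centroid_depth((const Triangle *)pa);
    float zb = centroid_depth((const Triangle *)pb);
    return (za < zb) - (za > zb);    /* descending: larger z (further) first */
}

void sort_for_painter(Triangle *tris, size_t count)
{
    qsort(tris, count, sizeof(Triangle), by_depth_desc);
}
```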
That's an entirely feasible explanation. Binary space partitioning is pretty complex to implement, so I'd be surprised if the software was doing that given the existence of a hardware Z-buffer, but you could be right that the Painter's Algorithm is used to sort for depth otherwise. That would leave the Z-buffer with nothing much to do. I do some programming of this card in a later video by the way.
@@PCRetroTech That's not quite what I meant, though it turns out what I said was a bit confused - oops. Depth sorting to use with the painters algorithm means you draw furthest-first, then paint nearer stuff over the top. That works, but it's very inefficient because of all pixels drawn just to be drawn over again (overdraw). Discovering a pixel is occluded by checking the Z buffer is great for exact occlusion, but doesn't save much time as most of the work to rasterize that pixel was already done. I was thinking more that, once you've got the sorted polygons, you can do a scan from nearest to furthest through the list, eliminating polygons that will be entirely behind something else. On second thought, that seems unlikely - keeping track of the area occluded by all nearer polygons so far would either be a slightly awkward geometry job or effectively a simplified Z buffer (without the distance information) anyway.
I still suspect some kind of spatial data structure for visible triangles, and BSP may be tricky to implement, but it was pretty common following on from the Doom example, and was described by Michael Abrash in his Black Book and IIRC in his series in Dr. Dobb's Journal. The first several Tomb Raider games all used the same custom engine, and Core Design had written earlier 3D games but chose to write this more powerful engine, so it seems like they took that seriously. That said, looking at map editor screenshots (bottom of core-design.com/article98.html ) suggests a grid-based model that could have avoided the need for BSP or octrees or whatever. I remember parts of the games being very obviously grid-based, but I didn't think that was pervasive (like some areas in Doom have right-angled corners, even though the engine doesn't require that) - now I'm not so sure.
@@stevehorne5536 I thought a lot of those games used raycasting, but I'm by no means an expert on how early games worked.
@@PCRetroTech I've been thinking about that - it's basically what the AFAICT grid-based map editor made me think of - though I forgot the name raycasting. The trouble is that raycasting makes me think of Wolfenstein 3D. IIRC the major reason why Doom needed BSP trees where Wolfenstein 3D didn't was because Doom maps had floors, ceilings, and the floors and ceilings could be at different heights. In addition Wolfenstein 3D was based on a grid where Doom wasn't, but that in itself could probably have been handled with a more complex ray-casting implementation. The thing is that Tomb Raider was a platform game - it was more "genuinely 3D" and vertically complex than Doom because of all the platforms. It also had sloped surfaces, which I don't think Doom had. Although the Doom implementation of BSP trees is basically 2D, the idea certainly extends to 3D, whereas the obvious more complex extension of raycasting certainly exists but it's called raytracing and is computationally expensive even now.
That said, I can think of in-between approaches that identify relevant map cells in a 3D grid (like overcomplex voxels) to account for, with each cell counted as either fully occluding or not fully occluding. It wouldn't identify exactly one polygon to render for each ray cast, but perfection isn't necessary.
I'm no expert either, and a fair amount of what I've learned was from YouTube videos - I just like trying to work things out. Sadly I don't think the Tomb Raider engine source code was ever released, and I don't know whether alternative engines such as OpenTomb tried to reverse engineer internals of rasterization or used different methods exploiting more modern hardware and only worried about getting the level maps and game mechanics right.
@@stevehorne5536 Yes, I had wondered what the difference in complexity between Wolfenstein 3D and Doom was. It had vaguely occurred to me that raycasting would be a bit complex in the case of all the different heights. But not being an expert in these things I didn't take the thought further.
I'd also not heard of OpenTomb, so thanks for that reference.
It is weird about the video passthrough. You would think that would be horribly complex. It must have something like a genlock, an ADC, and a standard re-encoding DAC. Also it would have to keep adjusting the 3D window to track the source video from time to time. Oh, just thinking: if it always ran full screen it would be a lot simpler, it would just switch to its own output. Maybe that was more like what was going on.
I think this was a pretty simple setup. I don't know for sure, but I don't think it converted it back to digital. I think it just lined the two signals up and switched between the two. It could run in a window I believe.
@@PCRetroTech It's just a switched passthrough. Until initialised, the card sends the VGA signal straight through. Start it up and it switches in its own output. You can run it on a second monitor -- Glide apps usually just open a full-screen blank Windows window to get the input messages from.
Yep, it was just a switch since there was no compositing, Voodoo1 and Voodoo2 only supported full screen.
Voodoo1 and some Voodoo2 cards even had an electromechanical relay that you'd hear click when it switched to the Voodoo card output :)
I'd love to see some 3dfx programming. Some great memories from the 90s include playing the Tomb Raider series. I believe Tomb Raider 2 and 3 allowed accelerator use, and I had a Voodoo 2 Banshee card at the time after replacing a crappy Trident 3DImage accelerator (I believe the Trident worked with Tomb Raider 2, I don't recall). Later I upgraded to the 3.
I still plan to do some 3dfx programming, but I'm currently doing quite a few other programming series on the sister channel PCRetroProgrammer, so it might be a little while off yet. So much cool hardware to program!
Yeah, I would like to see some Glide coding, even just a few triangles in 3D.
You can turn off the Z-buffer in TR3 and run the game in 800x600 on a Voodoo1 card.
Interesting. I guess the Z-buffer is memory intensive rather than a performance hog, and so I can see how this would work.
@@PCRetroTech It is both. Many later games in the N64 life cycle turned off Z buffering wherever possible to save on bandwidth.
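Back-of-the-envelope, assuming the common 2 MB frame-buffer / 2 MB texture Voodoo1 and ignoring any tiling or alignment overhead: at 640x480 in 16 bpp each buffer is 640 x 480 x 2 = 600 KB, so front + back + Z is about 1.8 MB and just squeezes into 2 MB. At 800x600 each buffer is 800 x 600 x 2 ≈ 938 KB, so the two colour buffers alone already take roughly 1.8 MB and there is no room left for a Z-buffer, but dropping it lets the higher resolution fit.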
Thank you Fritzchens Fritz! :) Haha!!!
I had one. I wish I kept it.
I wished too...
Thank you very much
To me the performance looks worse when you turn the Z-buffer off. Probably because it is now done in software?
Yeah I thought the same, but wasn't really sure enough to say so in the video. It seems very subjective.
There is no z buffering, the geometry is just sorted front to back, like it is on the PS1. The CPU is the limit here. If you were to use some really fast CPU to eliminate bottlenecks, z buffer off would actually improve performance.
Many later N64 games either used an optimized z buffering, where the background wasn't z buffered, or no z buffering. Rendering a pixel with Z buffering can use up to 4x more bandwidth.
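Rough arithmetic behind that figure, assuming 16-bit colour and 16-bit depth: without a Z-buffer a plain opaque pixel costs one 2-byte colour write; with one it costs a 2-byte Z read, a 2-byte Z write and the 2-byte colour write, i.e. about 3x the frame-buffer traffic, and once alpha blending also has to read the colour buffer you get to roughly 4x.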
Tomb Raider 1-5 are capped at 30 FPS. Also, the first three titles are not really demanding and run well on anything faster than a 133 MHz CPU and most 3D cards with at least 4 MB of RAM, so a Voodoo1 is actually overkill for the first three titles. TR4 and TR5 are a bit more demanding on the 3D side, especially if you want to run 1024+ resolutions, so a V3 or TNT2 is recommended.
Neat. Thanks for the info. I'll have to give TR4 and TR5 a try.
Tomb Raider 1 is playable even on a 486 DX4, which is comparable to a Pentium at 33 MHz in many 3D gaming applications.
3dfx uses the same technology as the Sega Dreamcast. A Z-buffer is normally not needed; it's rendering with a span buffer like the Quake engine.
The Voodoo1 can play UNREAL and QUAKE2/HEXEN2 etc., but you have to use 512x384 and tweak the autoexec.bat.
Imo, Half-Life is also kinda playable at 512x384 too. Better than the Dreamcast prototype, at least.
those games are very CPU dependent. With a fast CPU like a P2 400 or 450 MHz those games run very well with the Voodoo 1 even in 640x480 but especially in 512x384.
16-bit ARGB 8-3-3-2. Now that's a weird format. I wonder if that was ever used in real games.
Sounds a bit limited
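If anyone is curious what 8-3-3-2 actually buys you: a full 8 bits of alpha but only 3/3/2 bits of colour, so it's really an alpha-heavy texel format. A quick illustrative pack/unpack in C; the exact bit layout (alpha in the top byte) is my assumption, not something checked against the Voodoo docs:

```c
#include <stdint.h>

/* Pack 8-bit channels into a hypothetical ARGB 8-3-3-2 texel:
 * bits [15:8] alpha, [7:5] red, [4:2] green, [1:0] blue.
 * Only the top 3/3/2 bits of each colour channel survive. */
uint16_t pack_argb8332(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)((a << 8) | ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6));
}

/* Expand back to 8 bits per channel by rescaling the stored bits. */
void unpack_argb8332(uint16_t t, uint8_t *a, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *a = (uint8_t)(t >> 8);
    *r = (uint8_t)(((t >> 5) & 0x7) * 255 / 7);
    *g = (uint8_t)(((t >> 2) & 0x7) * 255 / 7);
    *b = (uint8_t)(( t       & 0x3) * 255 / 3);
}
```

Three bits per colour channel is only 8 levels of red and green, and 4 of blue, so "a bit limited" is putting it politely.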
21:23
>Be PS1
>Exist
>Don't have Z-Buffer
>PC Retrotech wondering how can 3D rendering work without Z buffer
>Still have 3D graphics
This might be a shocker to you, but you don't need Z buffering. In fact, turning it off will boost performance if you're bandwidth or even memory limited. Alternatively you can use it for moving objects only and render the background without a Z buffer. On the N64, for example, turning off Z buffering can boost performance significantly. The guy behind Portal 64 made a megatexture-like engine on the N64 and it runs at non-slideshow framerates even in 640x480 mode while having textures on par with the PS2. It doesn't use Z buffering.
The reason you're not seeing any performance difference in Tomb Raider is because of a CPU bottleneck. When the GPU is bandwidth and fillrate constrained, that's when you see the difference in performance with Z buffering and without Z buffering.
Likewise, some Factor 5 and Boss Game Studios games on the N64 also don't use Z buffering and can achieve higher quality graphics than is usually possible. World Driver Championship looks like an early PS2 or Dreamcast game and it can run in high res mode without the expansion pack due to not using Z Buffering.
What's the catch? You need to sort everything manually and might run into a lot of overdraw. But it works. Indeed, the PS1 has 3D graphics without a Z buffer. IIRC the same goes for the Sega Saturn.
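The "sort everything manually" part is cheaper than it sounds thanks to the bucket trick mentioned earlier in the thread: quantise each primitive's depth into one of N buckets and walk the buckets back to front, which is effectively a one-pass radix sort. A rough sketch in plain C (names and structures are illustrative, not actual PS1 code):

```c
#include <stddef.h>

#define OT_BUCKETS 1024   /* depth quantised into this many bins */

typedef struct Prim {
    struct Prim *next;    /* intrusive singly-linked list */
    /* ... vertex / texture data would live here ... */
} Prim;

typedef struct {
    Prim *bucket[OT_BUCKETS];
} OrderingTable;

/* Insert a primitive into the bucket for its quantised depth.  This replaces
 * a full sort: primitives landing in the same bucket keep an arbitrary
 * relative order, which is usually good enough. */
void ot_insert(OrderingTable *ot, Prim *p, float depth, float max_depth)
{
    int i = (int)(depth / max_depth * (OT_BUCKETS - 1));
    if (i < 0) i = 0;
    if (i >= OT_BUCKETS) i = OT_BUCKETS - 1;
    p->next = ot->bucket[i];
    ot->bucket[i] = p;
}

/* Walk far-to-near and hand each primitive to the rasteriser. */
void ot_draw(const OrderingTable *ot, void (*draw)(const Prim *))
{
    for (int i = OT_BUCKETS - 1; i >= 0; i--)
        for (const Prim *p = ot->bucket[i]; p; p = p->next)
            draw(p);
}
```

Insertion is O(1) per primitive and the back-to-front walk costs nothing extra, which is why you never need a full comparison sort even with a few thousand primitives per frame.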
Whatever you do, turning off hardware Z-buffering results in an extra load on the CPU. There's a variety of algorithms you can use. Obviously the one used here is faulty. Maybe that's how they keep performance high, e.g. use the Painter's algorithm, which is not foolproof.
@@PCRetroTech Some PS1 games even presorted the geometry to render at the correct order. I believe Crash Bandicoot games did that.
@@fungo6631 Ah yeah, I suppose you can cheat like that. I imagine they only need to store the cases where painters fails or something, otherwise it seems like too much data.
What the hell does SST stand for?
No one knows, but SST-1 was their internal codename for the Voodoo 1 chipset. The guess is Scott-Sellers-Tarolli, after the three founders of 3dfx.
@@PCRetroTech Thanks!)
Dude, I haven't seen that for years... Hahahahaha. (I was working on the TMU hardware.) The 4-bit RGB color mapping was taken out at the next revision. It was not really worth it for the space.
That would have been fun!
@@PCRetroTech It was. My co-worker was doing the FBI, I was working on the TMU in a small room. (There were 4 front-end people, except one person made the script for multiplication.) There were a few bugs that needed to be fixed, and that took some time. To be honest, I don't remember what layout program we used.
@@PCRetroTech The FIFOs were bought in from an outside contractor. Seriously, there were a lot of them. I remember that the software was called Cascade. I added the clock structure as a script. And yes, there were 3 clocks on that.
@@andyanderson3301 I guess 3Dfx was a small company at the time. I think some sources say there were only about 12 employees in those early days. You must have been one of the first engineers (is that what they called you?) that they had. I guess you worked mainly with Scott Sellers? I think he was VP of engineering or something, from what I read online.
@@PCRetroTech Yes, there weren't many people in the group. I was the third or fourth hardware engineer (my colleague started about the same date). Basically it was Scott (hardware) and Gary (software). But it was odd at the beginning. Software was doing a demo early on, on an Onyx. And they did some random things (an arcade joystick, I believe). Hardware took a long time... (I started in early 95.)
18:25
2 FBI and 6 TREX in Voodoo?
Ultimate Voodoo 1?
He shouldn't be making videos on this without doing research or at least googling how older systems worked without a z buffer...
Which part of the video are you referring to? I'm aware of multiple ways of doing 3D without a Z-buffer.
This was the first generation of 3D gaming cards on the PC, so I'm not sure what you mean by "how older systems worked without a z buffer". If you didn't have a graphics card to assist with 3D, you did it yourself with the CPU. There were multiple techniques....
If you check out my channel you'll find dozens of videos where I actually write such code myself.
@@PCRetroTech I believe 21:23.
The trilinear filtering with mipmapping is clearly the game changer.