Thanks for still trying to explore new topics and making deep, informative content! I guess it requires a lot of work, and it seems the motivation isn't always there (well, maybe it is again). Don't give up on this; there are a lot of people who would enjoy your content! Don't focus on the numbers or how it spreads, but on the quality of what you create! It will always bring you positive feedback, from the community but also from yourself! Great video btw, always simple enough for people to understand, but with enough depth for us to crave more!
I just stumbled on this video. It's brilliant. I always wondered how the math for the graphics was done in games like Doom and Wolf3D, and this video confirms some of my suspicions. Some effects like anti-aliasing don't impress me very much, but some effects like the holes in the wall in the game "Portal" are so amazing I have absolutely no idea how they implement that. I mean, you can sometimes see multiple copies of your universe when you look through a hole in Portal.
I don't think it's that complicated; you just need render-to-texture. You already know the position of the player, so you know the camera position. You then transform this position into the camera position relative to the entry portal and apply that relative position to the exit portal. Now that you have the new camera position and the direction you are looking, you can create your view transform. After that you can build the top, bottom, left, and right of the frustum from the dimensions of the entry portal. You can create a bit mask for the non-visible pixels and then render the image, then add the portal glow effect on top. And you do that recursively for every portal visible through the portal, up to a certain depth (you can change this depth in the settings of Portal).
@@MrDavibu You said "and do that recursively." This is part of what I don't understand. Recursion goes to infinity. How does the rendering engine know when to stop? How does it know to reduce resolution on a reserved image? So many questions.
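The relative-transform step described in this thread can be sketched with matrices. This is a minimal toy illustration of the idea, not Portal's actual code; the function names and the 4x4 numpy representation are my own:

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def portal_camera(cam, entry, exit_, flip=None):
    """Virtual camera used to render the view seen through a portal.

    cam, entry and exit_ are 4x4 world transforms. Express the camera
    pose relative to the entry portal, rotate it 180 degrees so we look
    *out* of the exit portal, then re-attach that pose to the exit
    portal's frame.
    """
    if flip is None:
        flip = np.diag([-1.0, 1.0, -1.0, 1.0])  # 180 degrees about the Y axis
    return exit_ @ flip @ np.linalg.inv(entry) @ cam
```

As for when the recursion stops: the renderer simply caps the depth (a `max_depth` counter passed down each recursive call), drawing a flat fallback once the limit is reached, so it never actually recurses to infinity.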
Dude I just remembered this channel and wanted more videos like the terraria and portal one and decided to look up the video again. Low and behold there was another video like those.
Imagine the reality we live in needs to refresh and reload like video games 😂😂😂 Grandparents: You're fortunate to be alive in this time my grandchild. When I was your age we had to wait much longer for the world to refresh or reload in order to say hi to our friends across the road.
10:47 UV maps in fact have nothing to do with reflections in rasterization. The only parameters needed to sample a cubemap as a reflection are the eye vector (pointing from the pixel to the camera) and the normal vector (pointing out from the surface). Using those, you can calculate the reflection vector by which the eye vector reflects off the surface with said normal. This reflection vector is then used to sample the cubemap.

I also want to note something about 8-bit palettes. It's not true that 8-bit computers and consoles necessarily output 8-bit color. A lot of them were much more limited. The NES for example has 64 colors available, of which only around 55 are unique (the exact number is up for discussion). Additionally, the NES has 4 palettes for backgrounds and 4 palettes for sprites, limited to 4 colors each, of which one color is shared between the background palettes and one color is always transparent on the sprite layer, making for a total maximum of 25 colors on-screen at a time. Each palette is applied to a 16x16 pixel area on the background layer and per sprite on the sprite layer (each sprite being either 8x8 or 8x16 pixels in size depending on the game).

Another thing I'd like to add as a contributing factor to why graphics are as realistic as they are today is the fact that render pipelines, as well as certain stages of rendering, are completely programmable on modern graphics cards. Early 3D rendering hardware had more static feature sets and limited customization. Programmability gives graphics programmers a lot of power to customize how rendering is performed by writing programs that run on the GPU, known as shaders. This programmability is the sole reason why screen-space effects such as SSR and SSAO even exist, as well as the plethora of anti-aliasing techniques, most of which rely on image processing.
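To make the cubemap-sampling point above concrete, here's a tiny sketch (names are my own; the incident vector here points from the camera toward the surface, so flip signs if your convention differs):

```python
import numpy as np

def reflect(incident, normal):
    """Mirror the incident view vector about the surface normal.
    The result is the direction used to sample the cubemap."""
    i = np.asarray(incident, dtype=float)
    n = np.asarray(normal, dtype=float)
    return i - 2.0 * np.dot(i, n) * n

def cubemap_face(v):
    """Pick which of the six cubemap faces a direction vector hits:
    the dominant axis and its sign."""
    v = np.asarray(v, dtype=float)
    ax = int(np.argmax(np.abs(v)))
    return ('+' if v[ax] > 0 else '-') + 'xyz'[ax]
```

No UVs involved: a ray straight down onto a floor (`incident = (0, -1, 0)`, `normal = (0, 1, 0)`) reflects straight up and samples the `+y` face.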
This channel is a gem. Suggestion for the next video: scriptable rendering pipelines (deferred, forward, clustered, etc.), and also how micropolygon rendering works. I know geometry renderers have complexity, but what about the new "geometry/primitive shaders" that next-gen rendering libraries offer, and why don't developers use screen-space micropolygon mapping? As far as I know, each engine's engineers use different methods (e.g. GI systems have many different capabilities), but they are all trying to achieve the same goal: realistic rendering. These new techniques use the same principles as offline renderers (i.e. in movies and pre-rendered cinematics), but with different implementations to optimize performance. However, I think after ray-tracing technology (and similar micropolygon mapping), new trends like adaptive path tracing and voxel rendering techniques are being used to achieve much better quality (like the texture quality in Dreams and LittleBigPlanet). You mentioned Doom uses ray marching, which is the same terminology used for the rendering equation of voxel engines (aka volumetric rendering), but I don't understand how shading works in voxels; it shouldn't be able to trace the final color, as they don't have direct information about the texture projection unless it comes from the G-buffer (deferred), or "fake Phong shading". Correct me if I'm wrong: forward rendering is about geometry rendering, aka vertex points, while deferred is texture projection into the G-buffer (hence it doesn't represent a true 3D world); instead, hybrid approaches are used, like the scriptable rendering pipelines in Unity and, most recently, Amazon's game engine. There are some other techniques like forward+ and clustered (voxelized GI), etc., but these aren't new, just hybrid rendering for certain tasks and stages of the image output. I would like to dig deeper into how engines handle scriptable pipelines.
Guys who would've thought these old games had such extremely smart stuff behind them! It was really great to even write hello world back in those years, and these gods coded tedious algorithms. Pretty smart for that era. Oof
0:09 to be fair, Asteroids looked a lot better back then than what's shown here. It used vector graphics instead of pixels, so this low-res capture doesn't do it justice!
Truth++
True. That game actually looks really really good on a vector monitor.
It looked especially good on the horizontal twin-tabletop setup that you could find in some cafes. The CRT monitor probably smoothed out a lot and enriched the look.
@@hpt08 again, the CRT didn't have to smooth out anything. It didn't have pixels but instead, the electron beam drew the shapes directly on the phosphor layer. It was basically infinite resolution!
@@Noel-yh4xp I grew up playing an old hand-me-down Vectrex. It uses an oscilloscope-style display and doesn't use polygons or pixels; instead it uses vector-based graphics.
Hey, props for taking such a broad topic with a long history and condensing it in into a concise overview easy for anyone to understand.
Also: the “cel” in “cel shading” is not named because the quantized shades form “cells”, but after the cartoon shading it was meant to emulate. The shading was painted, by hand, onto the celluloid sheets or “cels” used in cartoon production.
Cels are the transparent sheets used in traditional 2D animation. The individual pieces of 2D art are painted on cels and then multiple cels are layered on top of each other to form a single image. Cel shaders are named this way, because they emulate the look of hand drawn toons.
One significant change that was omitted was when fixed-function pipelines went away and we started writing our own shader code. A lot of cool graphical effects came about because of this change. I think it was around 2005, but that's just from memory.
8:35 Cel shading comes from traditional animation. From Wikipedia: "The name comes from cels (short for celluloid), clear sheets of acetate which were painted on for use in traditional 2D animation."
Yes, this is one of many strange assumptions and false statements in this video.
It's an "ok" video and a good general summary, but the research is flaky, especially in the terminology and several technical details that are mentioned in various comments here.
@@elecblush Agreed.
As a fan of Cel animation I too was a bit bewildered by his strange use of the term.
@@elecblush Cel shading is just a slightly different term in 3D graphics... It's not a false statement or anything. You're making way too big a deal about this.
Besides you could use the same logic to justify the name for the original style of animation techniques on hand drawn 2D animation.
@@HerbaMachina no. It's named that because of the shading technique's resemblance to animation cels. It's not an industry-specific use of the term to refer to units in a color gradient.
The original pixel art of 2D sprites isn't "cel shading" because, while colored and animated like original 2D animation, it doesn't _resemble_ it, which is what cel shading is named for: resembling old-school animation cels.
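Whatever the name's origin, the technique itself is simple: take the usual Lambert diffuse term and quantize it into a few flat bands. A rough sketch (band count and function names are my own):

```python
import math

def toon_shade(normal, light_dir, bands=3):
    """Quantize the Lambert diffuse term into flat bands, giving the
    stepped look of hand-painted animation cels."""
    nlen = math.sqrt(sum(c * c for c in normal))
    llen = math.sqrt(sum(c * c for c in light_dir))
    ndotl = sum(a * b for a, b in zip(normal, light_dir)) / (nlen * llen)
    ndotl = max(0.0, ndotl)                  # surfaces facing away get the darkest band
    return math.ceil(ndotl * bands) / bands  # one of a few flat shades
```

A surface facing the light gets the full bright band, a grazing angle drops to a middle band, and anything facing away is flat dark, instead of the smooth gradient regular shading would give.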
It's a great video, but there is some misinformation. 3D graphics cards weren't the thing that enabled true 3D graphics. Quake was released before those, and it was a true 3D game with texture maps. A lot of other games used polygons before that as well: some were fully 3D but flat-shaded, others were a mix of 2D backgrounds and 3D characters. The game Outcast was released a few years after the first 3D graphics cards and didn't utilize them at all, yet it was one of the most beautiful games of its time. It used voxels for the terrain and textured polygons for the characters, buildings, etc. It also had things like anti-aliasing, depth of field, bump maps, and character shadows. It was a game ahead of its time. So no, 3D graphics cards (or 3D accelerators, if we want to be accurate) didn't enable true 3D; they just made it a lot faster.
It's also a bit ironic that the game shown on screen is Virtua Fighter. That game had a PC release in 1995 and it didn't use 3D cards.
Sorry, but in fact it's not "true" 3D; it's always a 2D screen in the end ^^
HE’S ALIVE. THE MAN IS ALIVE. It’s 4am when I got the notification, but this video is more important than sleep
Please go back to sleep :')
@@DigiDigger hold up, halfway done watching the video. Learning how to Ray Cast
Edit: well I’m done the video, see you in another 9 months!
4am for me too, I got very excited when I saw the notification
will wait until morning to watch though
@@Ashona I have started to work part-time in order to make more videos so I will upload more regularly now! :) (in my description I have explained now why I have been unable to post a lot of videos)
I love when you upload. I always learn something from these videos!
I hope you'll at least get a bit wiser!
09:20 - Missed opportunity talk about baked vs real time rendered shadows. Two really important phases in the history of game rendering.
Baked shadows were shadow maps applied to the textures beforehand, to avoid having to do this in real time.
Would have loved to see more info on shaders, and some of the more advanced renderpasses we see in modern games as well (some of the techniques that have progressed graphics from the N64 age of graphics to the graphics we see today)
The video is an ok sweep through a few rendering methods and concepts, but you are missing some important milestones and you deep-dive into things a bit randomly. It would have been better if you had made a firm commitment to either go deep on the technical structure in each of the parts, or made it more of a history sweep with links to other deep dives. Also, as you can see from the comments, you have quite a few details and terminology wrong.
Please don't take my criticism the wrong way. I liked the video, but felt there was room for improvement, especially since the idea of doing such a video (a history of rendering techniques) is great.
As a computer science student studying 3D graphics, if only it were this simple as what is in this video. There's so much to learn, so so much more.
"It's just a bunch of triangles"
*showing game footage where the rendering is based on quads, not triangles*
This is the comment I was looking for.
@Coder Husk The early sega VR games (as shown in the footage) used quads instead of triangles.
@Coder Husk The game shown as illustration (Virtua Fighter) is one of the earliest 'fully' 3D games, and both in its original arcade version and its PC ports it used an unusual custom 3D accelerator chip design that was based on quads. One of those quirky things that happen when a market is new and everyone is trying everything. It's just funny that of all the 3D accelerated games, DigiDigger picked one of the few that _don't_ use triangles 🙂
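Side note for anyone wondering what the quads-vs-triangles difference amounts to in practice: most hardware wants triangles, so quads get split in two. A trivial sketch (the function name is my own):

```python
def quad_to_triangles(quad):
    """Fan-split a quad (4 vertices in consistent winding order) into
    the two triangles most GPUs rasterize; the Saturn and Virtua
    Fighter's hardware instead drew the quad as a single primitive."""
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]
```

This is also why Saturn ports to triangle hardware (and vice versa) were awkward: a quad split this way can show a visible diagonal seam in texturing that the original quad primitive never had.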
No. Confusingly enough "8-bit art style" refers to the "8-bit systems" art style, mainly the NES, that had 16 colors, and could only use 4 of them in each 16x16 tile. Meanwhile the "16-bit art style" is *actually* 8-bit because the "16-bit systems" like the SNES could usually display 256 different colors (or close to) on the screen.
The Sega Saturn game you showed while explaining triangles actually used quads, not triangles.
It’s always a party when you upload. Awesome
stuff, keep it up!
Your videos are a fantastic introduction to understand how game mechanics are implemented! I've been trying to understand these mechanics for years, but your videos are the first that really allowed me to understand what's going on. Thank you!
This channel is pure gold! Glad to see that you still upload!
I like this video, but watching it as someone who's made 2D game engines is painful. Most older games used paletted graphics, where a sprite would have 1 byte per pixel that was an index into a table of colors (usually 3 bytes each). Sprites would also have a palette index pointing to which palette to use. Some early polygonal engines like Quake also used this, but it mostly stopped being used in the mid-to-late '90s.
This is how I programmed all my own crappy games. The best of this technique was the ability to set the whole palette by calling an interrupt. You did not have to change the pixels to get animation. If used in clever ways you could change the palette and make for fast changes.
I used this mostly to get lightning effects or even to control 7 segment displays on screen. I loved that era. It was easy to work with.
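The indexed-color scheme this thread describes can be sketched in a few lines. This is a toy illustration (the palette contents and names are invented), but it shows why swapping a palette recolors a sprite without touching its pixel data:

```python
# Hypothetical 4-color palettes: RGB triples live here, not in the sprite.
PALETTES = [
    [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)],  # palette 0
    [(0, 0, 0), (255, 255, 0), (0, 255, 0), (128, 64, 0)],   # palette 1
]

# A sprite stores one small index per pixel plus a palette number.
sprite = {"palette": 0, "pixels": [[0, 1], [2, 3]]}

def resolve(spr, palettes):
    """Turn index data into RGB only at display time."""
    pal = palettes[spr["palette"]]
    return [[pal[i] for i in row] for row in spr["pixels"]]

# "Animating" by palette swap: repoint the sprite at another palette and
# every pixel changes color while the pixel data stays untouched.
sprite["palette"] = 1
```

A lightning flash is exactly this trick: briefly overwrite the whole palette with bright entries and the entire screen flashes without redrawing a single pixel.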
Nice to see you're back!
Nice to see you here💩
wtf Peter, you here?!!?!?!?
Peter is simply the best 😉
*your
@@henheya9735 🅱️ruh
One of the best things about texturing could be parameterized materials, man. What you can make with shading through dynamic textures is mind-boggling.
you make all these difficult details look so easy to understand 👍
its amazing to see you posting again! your videos always get me so excited!
the fact that doom uses a tree structure for level data/rendering is really cool!
thanks for going into so much detail in your videos, they are always such a joy to watch~
Even though I already know this stuff, I still love watching this kind of videos!
LADS! HES BACK! The legend has returned to reunite us for a few precious minutes, but will soon return to the void again for an indefinite amount of time.
This video is mind-blowingly good. I'm a very visual learner so I heavily appreciate all the visual examples and diagrams
Binary Space Partitioning is not even close to what you've shown in the video. It's about compressing surfaces into a binary tree depending on their intersections, thus saving only equation parameters of the surface equation instead of actual vertex positions. What you show is more similar to an octree partitioning, but in 2d.
I was so confused by that part. I really thought I remembered BSPs wrongly.
But I think it's called quadtree in 2D.
@@MrDavibu it's fine, since I didn't remember it quite correctly either. To be more precise, it's the way planes are sorted depending on which side they lie on relative to the normal of the chosen plane. That's why it's called "space partitioning".
This video is gold. This will remain in our heart forever
excellent video, gave me a lot of insights into how the game industry have developed over the years. props to you.
I don't understand how you can have so little subscribers with the quality of your explanations, I'm sharing 🙏
You explained this very well. I knew nothing about this and now I feel like I understand the basics!
Respect the game developers 🙏 bringing more and more realistic environments to explore. Thank you.
In Doom, the BSP tree was used to solve the sorting problem (back-to-front, front-to-back) fast. Sectors were used to determine where the player is and also the adjacent sectors. Sectors were also used for determining the floor and ceiling heights.
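The sort-solving property is easy to see in code. Here's a toy back-to-front traversal (the node layout and names are my own, not Doom's actual structures):

```python
class Node:
    """Hypothetical BSP node: a splitter plane plus front/back children."""
    def __init__(self, plane, front=None, back=None, polys=()):
        self.plane = plane           # (normal, d): points x with dot(normal, x) = d
        self.front, self.back = front, back
        self.polys = list(polys)     # geometry lying on this splitter

def side(plane, point):
    """Signed distance of point from the plane: >= 0 means front half-space."""
    normal, d = plane
    return sum(n * p for n, p in zip(normal, point)) - d

def back_to_front(node, eye, out):
    """Painter's order: far half-space first, then this node's polygons,
    then the near half-space. No per-polygon sorting needed."""
    if node is None:
        return
    if side(node.plane, eye) >= 0:   # eye is in the front half-space
        back_to_front(node.back, eye, out)
        out.extend(node.polys)
        back_to_front(node.front, eye, out)
    else:
        back_to_front(node.front, eye, out)
        out.extend(node.polys)
        back_to_front(node.back, eye, out)
```

Doom actually walks the tree front-to-back and clips away what's already covered, which is even faster, but the ordering guarantee the tree provides is the same.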
Ray-casting and ray-tracing are not the same thing. The main thing separating them is that Wolfenstein uses ray-casting for the sole purpose of determining where a ray hits the map (consider this one bounce, as the first cast is the final result), whilst ray-tracing aims to determine the path of a light ray as it bounces off surfaces before reaching the camera (usually taking several ray-casts to achieve, because light rays tend to bounce off multiple surfaces, each with its own unique qualities). Other than that, it's a great summary, considering you managed to recount 40-odd years of game render tech development in the span of 13 minutes!
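The ray-casting half of that distinction fits in a few lines. A toy Wolfenstein-style cast on a tile map (the map and step size are made up; the real engine jumps cell boundary to cell boundary with a DDA instead of fixed stepping):

```python
import math

# Toy tile map: 1 = wall, 0 = empty.
GRID = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]

def cast_ray(px, py, angle, step=0.01, max_dist=10.0):
    """March a single ray until it enters a wall tile. One cast, one
    hit, no bounces: the hit distance directly gives the height of the
    wall column drawn on screen."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == 1:
            return dist
        dist += step
    return None
```

One such cast per screen column is the whole of Wolfenstein's "3D"; ray-tracing would keep going after the hit, spawning new rays toward lights and along reflections.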
I noticed that you are not a frequent video uploader. I just want to tell you that what you sharing is pure gold. Looking for more of your quality videos.
yup, his example is based on quake/half-life/source engine BSP
When I was a kid I always wanted to be a game developer. Thank God I didn't go down that path; I can't wrap my head around even this information. Respect for those developers and everyone who works behind a game to make it so good.
The explanation is so technically elegant and concise. I had to sub immediately. Well done!
Surely the term “Cel shading” is a reference to it looking like cel painting/animation.
There are a few errors in the video. Most notable: the 8-bit era didn't have 8-bit graphics. It had only 2-bit graphics partitioned into sprites, out of a palette of usually 16, sometimes 32 colors.
Even the 16-bit era mostly didn't have 8-bit graphics, but mostly 4-bit graphics out of a palette of typically 16.
It's also worth saying that most older consoles, and also some computers in that era, had a 2D graphics chip driving the graphics. In most cases there was no CPU involvement in showing the graphics.
It was really mostly the PC that lacked that ability... sort of. On the PC, the CPU rendered pixel by pixel. This pushed PC CPUs toward more performance, partly to offset this disadvantage. This was why 3D exploded on the PC.
Cartoon rendering: a cel is NOT a "color cel". "Cel" comes from animation. Early animation was drawn on paper and then inked and painted on celluloid sheets for photographing. These sheets were called "cels" for short. The colors used in cartoons were just "paint". Cels were transparent (five was the practical stacking limit), so you could stack cels, allowing production to only animate and photograph the parts that moved.
This info brought to you by a guy that made commercial 3D cartoon rendering engines in the 1990s.
Never noticed that data structure and tree data sorting would be this useful! Glad i learned them in college
you explained the binary space partitioning in a way that was so easy to understand
This is the type of channel we live to subscribe to
Thank you very much for the detailed vid. I suck at graphics programming, but understanding the system and theory has really helped me
Raycasting was such a great hack!
So beautifully explained! :)
I took a timeout from studies, and here I am studying data structures and computer graphics. Life's got a sense of humor.
Thank you so so much for sharing your knowledge and research in a way that everyone can understand! It would be great to have another video of Game Plunge.
This video ends waayy too soon.
You gave a great explanation of the order things were drawn in Doom or Quake, but not in full 3D worlds (and the different approaches there, like forward vs. deferred rendering).
Also how different texture types changed over time, from simple photo-like images to the various PBR maps like albedo/roughness/metallic.
Yes, the concept of the video is really solid, but it falls a bit flat on missing research and bungled terminology.
Better than my whole semester on computer graphics. Cheers
Just new to your channel but really loving the informational videos. Took me only a few seconds to subscribe and like because I could already tell it was quality content. Keep up the good work!
Very good video! You provided a great overview of the different aspects of realtime rendering in a digestible format.
Hope this series continues.. It is really insightful
Finally you are back.
Yes, graphics are very important. Visuals are among the top 3 most important things in games.
You should see game-jam submissions. The graphics they make in so little time are something else.
One of the best videos on the history of gaming graphics
Some feedback: there is static background noise. You sound like you're in a huge room with echo (google fixes — recording in a closet is a funny and partially crazy solution I've heard). There are also a lot of snappy sounds that could be fixed with a shield thingy (no clue what it's called in English).
Interesting video still! But sadly not something I like having on in the background.
im a sound designer, if i were him id get some acoustic panels to treat his room
@@TheIndieGamesNL anything he can do to reduce the static? Also asking for myself
6:41 I thought this game used quads instead of triangles, so it isn't a good example of the first triangle-based 3D graphics
i knew most of this but still enjoyed watching this. subbed!
i learned something new and that is fascinating, as i want to be a game programmer. It really takes hard work and patience to create the different parts of a game.
2:20 Ah yes, my childhood. The Legend of Zelda: A Link to the Past. Thanks for the reminder!
waow, you are still alive
Yooooooooo, you're alive! So glad you uploaded :D
The video is amazing, as expected :)
Thanks for still trying to explore new topics and making deep and informative content! I guess it requires a lot of work, and it seems motivation isn't always there (well, maybe it is again). Don't give up on this, there are a lot of ppl who would enjoy your content! Don't focus on the numbers or how it spreads, but on the quality of what you create! It will always bring you positive feedback, from the community but also from yourself! Great video btw, always simple enough for ppl to understand, but with enough depth for us to crave more!
Thank you for this concise yet helpful video regarding rendering in general. I feel like a render expert now 😂
I just stumbled on this video. It's brilliant. I always wondered how the math for the graphics was done in games like Doom and Wolf3D. This video confirms some of my suspicions. Some effects like anti-aliasing don't impress me very much, but some effects like the holes in the wall in the game "Portal" are so amazing I have absolutely no idea how they implement that. I mean, you can sometimes see multiple copies of your universe when you look through a hole in Portal.
I don't think it's that complicated: you just need render-to-texture. You already know the position of the player, so you know the camera position. You then transform this position into a position relative to the entry portal and apply that relative position to the exit portal. Now, with the new camera position and the direction you are looking in, you can create your view transform.
After that you can create the top/bottom/left/right planes of the frustum from the dimensions of the entry portal. You can create a bit mask for the non-visible pixels and then render the image, and then add the portal glow effect on top. And you do that recursively for every portal visible inside the portal, up to a certain depth (you can change this depth in Portal's settings).
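That depth-limited recursion can be sketched roughly like this. This is a hedged toy model, not Portal's actual engine code: the portal representation (entry position, exit position) and the `passes` log are hypothetical simplifications, and real engines would build full view matrices and frustums rather than just offsetting a point.

```python
# Toy sketch of depth-limited recursive portal rendering.
MAX_PORTAL_DEPTH = 2  # the recursion limit exposed in Portal's settings


def render_view(camera_pos, portals, depth=0, passes=None):
    """Render one view; the deepest portal views get rendered to texture first."""
    if passes is None:
        passes = []
    if depth >= MAX_PORTAL_DEPTH:
        return passes  # recursion stops here: deeper portals show a fallback
    for entry_pos, exit_pos in portals:
        # Camera position relative to the entry portal, re-applied at the exit.
        relative = tuple(c - e for c, e in zip(camera_pos, entry_pos))
        virtual_cam = tuple(r + x for r, x in zip(relative, exit_pos))
        render_view(virtual_cam, portals, depth + 1, passes)
    passes.append((depth, camera_pos))  # render-to-texture pass for this view
    return passes


# One portal pair: entry at (1, 0, 0), exit at (5, 0, 0).
print(render_view((0, 0, 0), [((1, 0, 0), (5, 0, 0))]))
```

Note the ordering that falls out naturally: the innermost (deepest) views are appended first, which is exactly what render-to-texture needs — each outer view samples the texture its inner view already produced.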
@@MrDavibu You said "and do that recursively." This is part of what I don't understand. Recursion goes to infinity. How does the rendering engine know when to stop? How does it know to reduce resolution on a reserved image? So many questions.
Dude, I just remembered this channel and wanted more videos like the Terraria and Portal ones, and decided to look up the video again. Lo and behold, there was another video like those.
Love the video! Thanks for uploading again
This is a fantastic video. Thanks so much for uploading and sharing your knowledge.
thanks man, these videos are very educational and insightful
Awesome video! Waiting for the next one on Ray Tracing!
i absolutely loved it ! thanks for making this video 😁😁
I like these explained in basics educational materials.
YES! He is back!
This video got me subscribed. Love your style. I learned a lot with this video. Been gaming since 1986 :)
You made me understand shading and cel shading in less than 2 minutes lmao, thanks
OMG, long time no see, this channel is pure gold, I wish to patreon you in order for you to create more content... it is extremely good
Love this video. So good and easy explanation of such complex topic.
Now. this type of content what I am looking for.
Yay! So glad this series is still going!
I'll have to watch this more than once.
Fantastic overview! Definitely subscribing :)
Congrats on the algorithmic success!!!
I really expected you to play "Sweet Victory" after this 6:24
Imagine the reality we live in needs to refresh and reload like video games 😂😂😂
Grandparents: You're fortunate to be alive in this time my grandchild. When I was your age we had to wait much longer for the world to refresh or reload in order to say hi to our friends across the road.
OMG!!! Finally! Love your vids.
Incredibly good video, thanks!
cool to see the evolution. Thanks for sharing it!
10:47 UV maps in fact have nothing to do with reflections in rasterization. The only parameters needed to sample a cubemap as a reflection are the eye vector (pointing from the pixel to the camera) and the normal vector (pointing out from the surface). Using those, you can calculate the reflection vector, i.e. the direction in which the eye vector reflects off a surface with that normal. This reflection vector is then used to sample the cubemap.
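A minimal sketch of that reflection calculation in plain Python (illustrative names; note the convention here is the opposite of the comment's: `incident` points from the camera toward the surface, which is how GLSL's built-in `reflect` expects it):

```python
def reflect(incident, normal):
    # R = I - 2*(N . I)*N, with `incident` pointing camera -> surface
    # and `normal` a unit-length surface normal.
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))


# Example: looking straight down (0, -1, 0) at a floor with an upward
# normal (0, 1, 0) — the reflection points straight back up.
print(reflect((0, -1, 0), (0, 1, 0)))  # (0, 1, 0)
```

The resulting vector is what you would feed into the cubemap lookup in place of a texture coordinate.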
I also want to note something about 8-bit palettes. It's not true that 8-bit computers and consoles necessarily output 8-bit color. A lot of them were much more limited. The NES, for example, has 64 colors available, of which only around 55 are unique (the exact number is up for debate). Additionally, the NES has 4 palettes for backgrounds and 4 palettes for sprites, limited to 4 colors each, of which one color is shared across the background palettes and one color is always transparent on the sprite layer, making for a maximum of 25 colors on-screen at a time. Each palette is applied to a 16x16-pixel area on the background layer and per sprite on the sprite layer (each sprite being either 8x8 or 8x16 pixels, depending on the game).
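The 25-color budget above works out as a quick arithmetic check (the constants are the NES values described in the comment):

```python
# NES on-screen color budget, per the limits described above.
BG_PALETTES = 4         # background palettes
SPR_PALETTES = 4        # sprite palettes
COLORS_PER_PALETTE = 4

# Background palettes share one common backdrop color, so each
# contributes 3 unique entries plus the single shared color.
bg_colors = BG_PALETTES * (COLORS_PER_PALETTE - 1) + 1   # 13

# Sprite palettes lose one entry each to transparency.
spr_colors = SPR_PALETTES * (COLORS_PER_PALETTE - 1)     # 12

print(bg_colors + spr_colors)  # 25
```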
Another thing I'd like to add as a contributing factor to why graphics are as realistic as they are today is the fact that render pipelines, as well as certain stages of rendering, are completely programmable on modern graphics cards. Early 3D rendering hardware had more static feature sets and limited customization. Programmability gives graphics programmers a lot of power to customize how rendering is performed by writing programs that run on the GPU, known as shaders. This programmability is the sole reason why screen-space effects such as SSR and SSAO even exist, as well as the plethora of anti-aliasing techniques, most of which rely on image processing.
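As a toy illustration of what programmability buys: a shader is just a small program run per vertex or per pixel. A fixed-function pipeline only offered built-in lighting formulas; a programmable one lets you swap in your own, such as the band-quantized look of cel shading. (Plain Python stands in for shader code here — real shaders would be GLSL/HLSL running on the GPU.)

```python
def lambert(n_dot_l):
    # Fixed-function-style diffuse term: clamp N.L to zero.
    return max(0.0, n_dot_l)


def cel(n_dot_l, bands=3):
    # A custom "shader": quantize the diffuse term into discrete bands,
    # giving the flat cartoon-style shading the video discusses.
    lit = max(0.0, n_dot_l)
    return round(lit * (bands - 1)) / (bands - 1)


print(lambert(0.7))  # 0.7  (smooth gradient)
print(cel(0.7))      # 0.5  (snapped to the middle of three bands)
```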
Damn such a good informative video, totally love it
This channel is a gem. Suggestion for the next video:
scriptable rendering pipelines (deferred, forward, clustered, etc.)
Also how micropolygon rendering works (I know geometry renderers have complexity), and what these new "geometry/primitive shaders" in the next-gen rendering libraries have to offer — and why developers didn't use screen-space micropolygon mapping before.
I know each engine engineer uses different methods, e.g. GI systems have so many different capabilities, but they're all trying to achieve the same goal: realistic rendering. These new techniques use the same principles as offline renderers (i.e. in movies & pre-rendered cinematics), but with multiple implementations to optimize performance. However, I think after ray tracing technology (and similar micropolygon mapping), new trends like adaptive path tracing and voxel rendering techniques are used to achieve much better quality (like the texture quality in Dreams & LittleBigPlanet).
You mentioned Doom uses ray marching, which is the same terminology used for the rendering equation of voxel engines (aka volumetric rendering).
But I don't understand how shading works in voxels; it shouldn't trace the final color, as they don't have direct information about the texture projection unless it comes from the G-buffer (deferred), or "fake Phong shading".
Correct me if I'm wrong:
forward rendering is about geometry rendering, aka vertex points, while deferred is texture projection into the G-buffer (hence it doesn't represent a true 3D world); instead, hybrid approaches are used, like the scriptable rendering pipelines in Unity and, most recently, Amazon's game engine.
There are some other techniques like Forward+ and clustered (voxelized GI) etc., but these aren't new — just hybrid rendering for certain tasks & stages of the image output.
I would like to dig deeper into how engines handle the scriptable pipelines.
Skipping Gouraud vertex shading is a rather big omission, especially since it was the standard for quite a long time 😉
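For context, the core idea of Gouraud shading is tiny: compute lighting once per vertex, then linearly interpolate the results across the triangle instead of lighting every pixel. A hedged Python sketch (illustrative names, barycentric weights as the interpolation scheme):

```python
def gouraud_interpolate(vertex_intensities, barycentric):
    """Interpolate per-vertex light intensities at a point in the triangle.

    `vertex_intensities` holds one lighting result per vertex (e.g. from a
    Lambert N.L term); `barycentric` holds the point's barycentric weights,
    which sum to 1.
    """
    return sum(i * w for i, w in zip(vertex_intensities, barycentric))


# Vertex lighting computed once per vertex: bright, half-lit, dark.
intensities = (1.0, 0.5, 0.0)
# At the triangle's centroid, all barycentric weights are 1/3.
print(gouraud_interpolate(intensities, (1 / 3, 1 / 3, 1 / 3)))
```

This per-vertex approach is far cheaper than per-pixel (Phong) shading, which is why it dominated fixed-function hardware, at the cost of missing highlights that fall between vertices.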
This taught me a lot, thank you! :)
A really amazing and informative video! Thanks a lot man!
Amazing vid! YT recommendations are Goldmines most times 🎉 Subbed and will be waiting for ray tracing video
Guys, who would've thought these old games had such extremely smart stuff behind them! Back then it was an achievement just to write hello world, and these gods coded tedious algorithms. Pretty smart for that era. Oof
Would be cool to see a follow up with deferred rendering, and global illumination models
“How do games render their scenes?”
That’s it. Roach!
Really cool — as a 3D artist, this knowledge is essential
this gives the "what did the triangle say to the circle" joke another meaning
Hey this is a really good and well put together video
I am surprised by the fact that 3D graphics worked so well without dedicated hardware!