James Grimwood Pythagoras' Theorem has become somewhat of a meme in my maths class, with great effort going into using it as much as possible; the first suggestion for solving every problem is usually Pythagoras.
@youtubeShadowBan it is not triggering me at all, but your wording and the racist videos just speak for themselves and tell me a lot about you as a person. Have a nice day.
I know that math is important, but that is why some people developed libraries that, with some easy functions, can automatically do the raycasting or the raytracing 🧐
As advanced as Wolfenstein's engine is, I find it just makes Doom more impressive. Doom's engine is so much more sophisticated than Wolfenstein's; it's actually insane to think only one and a half years passed between Wolfenstein 3D's release and Doom's.
@@robertforster8984 Wolfenstein's code is 16-bit; id was targeting 286s. As a side note, it was going to be a 16-color EGA game until late in its development.
This just makes me appreciate how powerful even a 286 is. For a person to calculate one frame of Wolfenstein by hand with pen and paper might take a whole week, and that little ancient CPU did it in real time. Just wow.
@@sdsdfdu4437 Wow. You're quite dumb, aren't you? It's still 128 pixels across. So you have to do all of the calculations in this video in about one second. 😂 And that's just the first step, ignoring the vertical textures.
Mahj thanks for your feedback. Definitely will use bigger graphics in future. Sadly YouTube won't let me edit a video, only replace it... hopefully you can find a way to zoom to see these.
@Panguin > "An Enemy, People like you discourage new creators and old ones alike from making fresh and innovative content. I hope you're proud of yourself." I assume from context that "An Enemy" is a screen name for a previous comment, to which "Panguin"'s comment is a reply. But what was that comment? I only see 3 replies to "Mahj"'s constructive comment about the video only using 30% of the screen. Was it deleted, did YT delete it on its own, or was it sorted into God-knows-where in the comment stream/blizzard/whatever-the-heck-comment-"discussion"-forums-these-days-are-called? I'm confused..
Panguin People like them rather help new and old creators improve the quality of their content, helping them to communicate information easily to their audience, increasing their retention. So they should be proud of themselves
If a Wolf3D map has the edges exposed, you can look and go outside the level. In the north and south directions, the wall data repeats. In the west and east directions, it is from other areas of memory, which results in a bizarre world of mixed up blocks, doors, and empty spaces.
I'm actually going to try and implement something like this inside a more primitive game engine. Maybe even inside a command line using ASCII. Thanks for the detailed explanation!
Look up raycasting tutorials. I wrote a raycasting engine in Java last year following one of those. I even added a custom feature to render animated GIF textures. It's super cool and crazy easy to build a simple game engine.
This video hints at the genius of the Wolf3D code, but actually implementing a ray tracer really drives the point home. Even on modern hardware, it's not trivial to get the same performance that id software managed to squeeze out of a 286!
I implemented something very similar to this recently, without even having heard of raycasting before. I did it using ASCII in the command line, and then moved on to actual pixels. It was really laggy, however: I was simply moving my rays thousands of very small steps in the direction of the walls. It also had that strange fisheye effect you mentioned. Nonetheless, it was quite fun to make!
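For anyone curious, the brute-force approach the comment describes looks roughly like this. A minimal sketch (the map, step size, and distances are made up for illustration); note it returns the straight-line distance, which is exactly what causes the fisheye:

```python
import math

# A tiny grid map: '#' is a wall, '.' is empty space.
MAP = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def naive_cast(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray forward in tiny fixed steps until it enters a wall cell.

    This is the brute-force method described above: easy to write, but
    each ray needs hundreds or thousands of iterations, which is why it
    lags. Returns the straight-line (Euclidean) distance to the hit,
    which is also what produces the fisheye look.
    """
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

# Looking along +x from (1.5, 1.5), the wall at x = 4 is about 2.5 away.
d = naive_cast(1.5, 1.5, 0.0)
```

Grid-stepping (as in Wolf3D) replaces those thousands of tiny steps with one jump per grid line crossed, which is where the real speedup comes from.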
There is an effort to port Wolf3D to the Commander X16, an 8-bit 65C02-based machine. Casting all of these rays is time consuming, and complicated by the lack of assembly-language instructions for multiplication on the 65C02. I found a way to significantly reduce the number of rays cast.

Rather than casting rays one column at a time from one side of the screen to the other, the screen is first broken up into groups of pixel columns. There are 304 columns on the screen, so rays are cast to column 0, column 256, column 288, and column 304 (note this last column is just off the right side of the screen and not rendered). This breaks the screen up into a group of 255 uncalculated columns, a group of 31, and a group of 15. For each of the rays cast, several values are saved in RAM: the map block hit, the face of that block (N, S, E, or W), the X-intercept or Y-intercept (whichever applies for that face), and the distance in the direction the player is facing. For all the so-far-uncalculated pixel columns, the high byte of that distance is set to 255.

For the rest of the columns we do not immediately cast a ray. Instead, we first check whether the high byte of the distance is still 255. If it isn't, we skip the column and move to the next one on the list. However, if it is 255, we look to the left and the right of this column to find the closest column on each side that has already been calculated. The order we test the pixel columns, the left pointer, and the right pointer can be stored in three precalculated arrays. The first array is the column to test: {0, 256, 288, 304, 128, 64, 192, 32, 96, 160, 224, 16, 48, 80, 112, 144, 176, 208, 240, 272, 8, 24, 40, 56...}. The second array is the column already tested to the left: {65535, 65535, 65535, 65535, 0, 0, 128, 0, 64, 128, 192, 0, 32, 64, 96, 128...}, and the third array is the column already tested to the right: {65535, 65535, 65535, 65535, 256, 128, 256, 64, 128, 192, 256, 32, 64, 96, 128, 160...}. After the first four entries, the left-side entries are the largest number lower than the current column that appears before it in the first list, and the right-side entries are the lowest number higher than the current entry in the first list. The column to test next is just the midpoint of the largest remaining untested range.

So the next column to test is 128. To the left we've already calculated column 0, and to the right we've already calculated column 256. Here's where the magic happens: do the column to the left and the column to the right both hit the same map block on the same side? If they do, then we don't need to cast the rays between them. We can just do a linear interpolation on the x-intercept/y-intercept value and on the distance value, as every column of pixels between two that hit the same block and face will be evenly spaced between them. So we subtract the intercept value on the left from the intercept value on the right to get an intercept range constant, and subtract the distance on the left from the distance on the right to get a distance range constant. Then we add the left value to some fraction times that constant, a different fraction for each column of pixels. Since the difference between the right column number and the left column number is always a power of two, we're always interpolating 1, 3, 7, 15, 31, 63, 127, or 255 columns of pixels.

It's trivial to keep track of which interpolation routine to use, and the number of calculations in interpolation (two subtractions once for the entire range, plus two multiplication subroutine calls and two additions for each column in the range) is about the same as just the final step of raycasting: converting delta x, delta y, and beta into a distance. On average, this reduces the number of rays that need to be cast to approximately 305/(log(305)/log(2)), or about 37 rays; roughly an 87% reduction in rays cast on average. If you're standing close to and facing a wall, only the first four rays need to be cast and all the rest can be interpolated.
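The core idea above can be restated recursively, which may be easier to follow than the precalculated arrays (the arrays exist because recursion is expensive on a 65C02). This is a sketch of the same scheme, not the port's actual code; `cast` and the tuple layout are invented for illustration:

```python
def render_span(cast, left, right, out):
    """Fill out[left..right] with hit info while casting as few rays as possible.

    `cast(col)` performs a full raycast for one screen column and returns
    (block, face, intercept, distance). If both ends of a span hit the
    same block on the same face, every column between them is linearly
    interpolated; otherwise the span is split in half and each half is
    handled the same way.
    """
    if out[left] is None:
        out[left] = cast(left)
    if out[right] is None:
        out[right] = cast(right)
    if right - left < 2:
        return
    lb, lf, li, ld = out[left]
    rb, rf, ri, rd = out[right]
    if (lb, lf) == (rb, rf):
        span = right - left
        for col in range(left + 1, right):
            t = (col - left) / span
            out[col] = (lb, lf, li + t * (ri - li), ld + t * (rd - ld))
    else:
        mid = (left + right) // 2
        render_span(cast, left, mid, out)
        render_span(cast, mid, right, out)

# Demo: every column sees the same wall face, so only the two end rays
# are actually cast, and the seven columns between them are interpolated.
calls = []
def fake_cast(col):
    calls.append(col)
    return ("blockA", "N", float(col), 10.0 + col)

columns = [None] * 9
render_span(fake_cast, 0, 8, columns)
```

The best case matches the comment's observation: facing a flat wall, the end rays agree and everything in between is interpolation.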
This is precisely the kind of video I was hoping existed. I've been dissecting old FPS games. Thank you so much for making this. Know that it is greatly appreciated.
Every time I see Godbolt I know I'm in for a great time. It's insane how topical everything you post/dev is to me at the time. When I was writing an emulator, you did a talk at GOTO. When I was writing a compiler, you made godbolt. And now I'm tinkering with primitive 3D graphics and here we are. Thanks for everything, Matt; I really appreciate everything you do. A lot of what made me want to go into a CS degree was stuff you've done. Thanks! PS shout-out to Chicago
TheVopi wow, thanks for the wonderful support! Very much appreciate your kind words. It's just lucky coincidence I've done things that are helpful to you :-). Great to hear though. And indeed, shouts to Chicago. You should check out the Chicago Open Source Open Mic Meetup, or the Chicago C++ user group if you're into such things!
I don't know anything about advanced math, but I was always curious how they made 3D without actual 3D objects. It's really genius, especially since developers at that time had to overcome so many limitations; it's mind-blowing. Thanks for the explanation!
Thanks for this video, life saver. I was programming an FPS game in Python using sockets and pygame for fun at school. I kept wondering why I got the fisheye effect until I realised I was not calculating rays with respect to the perpendicular 'p', i.e. 'd'. Great video and really well explained.
Back in 1984, I was 18, and it was the first year (83-84) that my school was offering a Computer Studies O Level, so I did it along with A Levels in my last year of 6th form. Never one to be sensible, I decided one of my three projects (the other two were a version of Missile Command in BASIC (heh) and an "address book") would be a program to draw 3D graphics. It's only looking back now that I realise it's amazing I achieved anything at all. It drew wireframes, and there was no internet and no books on computer graphics available, so I had to work it all out myself; I remember long hours with pencil and notebook trying to work out how to do it. Your mention of learning trigonometry with "SOHCAHTOA" brought back memories of that. We never learned the acronym though. In the end my program did work. It could draw simple wireframes. My maths led me to a strange implementation where I had something like a vanishing point, but at a fixed distance, and objects with negative coordinates appeared beyond it, getting bigger again the further away they were. Needless to say, I don't have the source code now, so precisely what I did I don't know. I feel quite proud to have got that far though. The machines were the standard British school models: Research Machines 380Z and 480Z, with a networked floppy disk pack of 4 drives, which felt like something from Star Wars. At home, I had a ZX81. 3D graphics of any kind were a long way beyond it...
6:14 and 6:31 no, when stepping to the next cell you have to add +1 or -1 to the initial computation dx/tan(θ) or dy/tan(θ), as you correctly describe later based on source code.
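The stepping this comment is describing can be sketched as follows: only the first intercept needs the player's fractional offset; every step after that is a constant addition. The axis conventions and the up-right assumption here are mine, for brevity:

```python
import math

def horizontal_intercepts(px, py, theta, n=3):
    """First few intersections of a ray with the horizontal grid lines.

    Only the FIRST intercept needs the player's fractional offset; after
    that, each step to the next cell just adds a constant: +1 in y, and
    a fixed 1/tan(theta) in x. Assumes the ray points up and to the
    right (0 < theta < pi/2).
    """
    dy = math.ceil(py) - py            # fractional distance to the next grid line
    x = px + dy / math.tan(theta)      # the one-off initial computation
    y = math.ceil(py)
    x_step = 1.0 / math.tan(theta)     # constant per-cell increment
    points = []
    for _ in range(n):
        points.append((x, y))
        x += x_step                    # just add the constant...
        y += 1                         # ...and +1 to step to the next cell
    return points
```

At 45 degrees the increments are 1 in both axes, so the intercepts all land on the diagonal, which makes it an easy sanity check.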
When I was a kid I looked at these games and thought how crazy intelligent it all looked, and that there were people who understood it. And now that I can understand it, it impresses me even more than it did before.
id Software took this several steps further with the follow-up game Doom, featuring much richer 3D map rendering, while still not requiring any 3D hardware or math coprocessor, although it needed more CPU power.
@@SweetHyunho Today's engines are very efficient; the whole area of realtime rendering is advancing all the time. What makes you think they aren't optimised?
It’s interesting to finally learn how similar & different my own attempt at raycasting back-in-the-day was. I don’t think I ever realized the proper way to eliminate the fish-eye effect. I think I ended up with something much more complex. I also remember that I adapted the Bresenham algorithm for drawing lines for the scaling routine.
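Adapting Bresenham to scaling, as described above, means using an integer error accumulator to decide when to advance through the texture, so the inner loop needs no multiply or divide. This is a plausible reconstruction of the trick, not the commenter's actual code:

```python
def scale_column(texels, height):
    """Stretch or shrink a texture column to `height` screen pixels with
    no multiply or divide inside the loop, Bresenham-style.

    An integer error accumulator decides when to advance the source
    index, just as the line algorithm decides when to step sideways.
    """
    out = []
    src = 0
    err = 0
    for _ in range(height):
        out.append(texels[src])
        err += len(texels)
        while err >= height:     # time to move to the next source texel?
            err -= height
            src += 1
    return out
```

The same loop handles both magnification (a texel repeats across several pixels) and minification (texels get skipped), which is exactly what a wall-column scaler needs.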
There is a way of eliminating fish-eye; Doom uses a specific technique to do it. Buggered if I can remember how!

Doom is actually really similar to Wolf3D. One secret of Doom's speed is that all walls are dead vertical. No slanting walls at all, just straight up 'n' down. You probably wouldn't notice that unless you knew to look for it. There's a few little genius shortcuts like that, that give the illusion of a world with much more freedom in the level design than there actually is.

Doom is drawn as vertical strips from top to bottom. Starts at the ceiling, down to any wall, then maybe some more ceiling and wall. Then eventually starts hitting floor and wall partway down each column. Each point on the map in Doom has one floor height and one ceiling height. No floors above other floors. No bridges, no multi-storey buildings. Though there's some bits that look that way, particularly in Doom II.

The real genius (well, some of it, there's a lot of genius) in Doom is setting up compromises where the machine could generate each frame fast enough, while having more freedom of level design than the simple flat grid of Wolf3D. The second part is designing levels that hide those compromises, so it looks like there are fewer limits than there really are.
This shows how proficient Carmack was as a programmer only about 3 years into professional game programming, starting from a couple of Ultima spin-offs (Shadowforge and Wraith for the Apple ][) in 1989. Jeez, that guy is a beast, and he probably programmed non-stop 12 hours a day, 7 days a week.
Gets a bit too trigonometric a bit too quick. It might be easier if you actually drew in the triangles when you talk about SOHCAHTOA. People are much better at understanding maths visually, usually, as shapes; human brains work better with concrete examples than abstract theory. I'm sure to you it's all the same thing cos you learned it so long ago you don't even think about it any more, but an important part of teaching something is knowing where the learner's brain is at, and bridging the gap between that and understanding.
Depends what you want to achieve. Abstract material is transferable while reified material is not. As an example, a common caveat in teaching fractions is using cakes and pies. It's very visual, and most students can understand that taking two of four parts is the same as taking one of two parts. Going from cake to pie and then to enumerable objects is a real struggle. But with practice, students end up building a simple model... which doesn't transfer well! That shows when students are asked to convert a fraction into a real number. The struggle comes back, as they didn't internalize that it is a division in the first place and that they are simply being asked to "do the division". It's also why most adults struggle with ratios. (Problems like "It takes 100 hours for 100 people to build 100 houses. How long does it take for one person to build one house?")

When taught using abstract concepts, it takes much more time for students to grasp. But once they do, they can solve any problem involving cakes, pies, and jugs, and even convert fractions into reals. It is not taught that way because people do not need to have profound knowledge, and it's considered OK that most people struggle with ratios all their life. It's the practical side which won. But it also limits the discoveries we are making.

The same has been observed with reading. People lack vocabulary and are not proficient at reading because syllabic reading has been abandoned for the global method. The global method works through repetition: through a lot of reading, students remember words and even expressions. The issue is that it obfuscates patterns within words. Again, pragmatism won. People seldom use words like "hydrophobic", so it's fine if they have to look them up. Schools "dumbed down" because there are not enough resources, but demand increased. Even 30 years ago, you could end up in a one-to-one session or in a very small group with the teacher.
It was also quite normal to have hours of preparation work between two lessons and to be interrogated from the get-go! What you can do, and that's what I did in the past, is explore and illustrate things by yourself. It's very different, because your anchor is the abstract subject and not the other way around. But again, starting with abstraction makes everything seem way heavier. But that's the weight of real knowledge!
@@ct275 I know that; I was specifically referring to calculations. Automation and abstractions reduce the amount of calculations the developers need to make themselves. For example, if you want to find the sum of two integers, you perform a calculation: a+b. You want to find a quotient, you perform a calculation: a/b, where b =/= 0. Soon, you find that you need the sum and quotient so often that you define functions: sum(a, b) and division(a, b). Later, you need to find averages, so you use your functions: division(sum(a, b), 2). You find that you need averages quite often, so you define a function for it: average(a, b). The more you automate and abstract your operations, the fewer calculations you need to perform yourself. The computer still performs every single calculation, but the video shows how developers THEMSELVES needed to perform these calculations in order to make it happen. Nowadays, you download an engine full of APIs and plugins; you barely need to perform any of these calculations yourself. That's the point I'm making.
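A minimal sketch of that progression in Python (names adjusted slightly, since `sum` is a Python built-in):

```python
# Each layer of abstraction removes a calculation the programmer
# writes by hand; only composition remains at the top.

def total(a, b):
    return a + b

def quotient(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    return a / b

def average(a, b):
    # Built entirely from the earlier abstractions: no new arithmetic
    # is written here, only composition of existing functions.
    return quotient(total(a, b), 2)
```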
If you haven't looked into it, you have no idea, haha. Even something simple like a shader is incredibly complex. You have lighting (multiple light sources, mind you), texture mapping, even various techniques to pick from, PBR for example. So you need to calculate metallic, glossiness, etc.: diffusion, specular, Fresnel. Just to draw one dot on the screen.
Knowing how to optimize code is still very important. Even a game with relatively low graphical fidelity can bring down the FPS if someone doesn't know what they're doing. Even high fidelity modern games can vary a lot in performance due to someone really understanding GPUs and CPUs at low level in their code. A problem is we rely on frameworks and advancing hardware to take up the slack. But software has not advanced anywhere near as much as hardware has. We still rely on a lot of frameworks and libraries with code written decades ago. There's also a lot of room for software optimization in machine learning ("AI").
I'm pretty sure that either when or very shortly after that game came out I had a 386SX 33MHz with a separate external FPU I got a few weeks later, 2-4 MB RAM and a 40MB (lol) hard drive. And I felt boss because of it. I am more certain that I had that same system for playing Doom. No question, these things were like literal magic at the time. id/Carmack are legends imo; I'm sure these things were ripe for discovery and implementation, but he/they were the first to do it in a way that reached millions. I used to do modem-to-modem deathmatches with my uncle. But imho Quake was THE defining moment in history where everything really came together for first-person 3D multiplayer gaming.
Don't forget Ken Silverman. Cloned Wolf3D as a teenager to make Ken's Labyrinth, then went on to make the BUILD engine that powered Duke Nukem 3D, Shadow Warrior, and Blood plus a number of less well-known games in the mid-1990s, and which got a revival for Ion Fury in 2019.
I'm sorry but I can't follow this, can you please explain why yIntercept is equal to ( y + dy + dx/tan(θ) ) ? I thought since the space between the Vertical Intercepts is [-dx * tan(θ)], shouldn't it be equal to ( y + dy - dx*tan(θ) ) ?
I clearly remember the night I downloaded it off of CompuServe. While standing in the cell, I hit the cursor key to spin around and thought "We're not in Kansas anymore". Some of the other tricks you noticed straight away were using flat images for the "rewards" and other items you collected during play. The movable walls were another great trick. Thanks for the "maths" explanation.
Absolutely fascinating stuff; not just the code tricks, but it gives a kind of insight into the brain of John Carmack too! ... That guy's brain has got so many grooves it has its own Hausdorff dimension! I would love to see a JavaScript simulation of these algorithms working, especially that cunning square-root hack!
IndiiSkies - And I’ve studied Engineering at Degree level you little snot nosed spod, but that doesn’t mean I can design a bridge before I’ve even drunk my coffee.
Just search up "raycasting fisheye effect." It just looks like the player is looking through a fish bowl. When facing and far from a wall, the lines (top and bottom) slowly converge at the ends of the screen; this is more apparent when the FOV is relatively wide.
@@callsignseth7679 The middle section of Daniel's stream is about the 3D representation of ray casting and the fisheye effect you get when you don't take these geometrical adjustments into consideration.
This is an older video and I don't know if there was already a comment on the following issue... At the time stamp 04:55, you are showing that "dx" is the distance of the starting point from the LEFT side of the square field. But at the time stamp 06:20, "dx" is suddenly the distance from the RIGHT side of the square field. Which "dx" is used in the final equation? The left- or right-side "dx"?
Wolfenstein was so awesome. I was so addicted to it. I moved on to Doom, then Heretic, then Hexen. Hexen was my favorite. Aw, so many wonderful memories. Thanks for the video. (liked)
It's actually not that hard when you think about it, but coming up with it in 1992 when nobody had done it before is the actual achievement. I was 15 at that time and it took me years to figure out how it works. There was no Stack Overflow to consult back then.
The use of the square grid to leverage CPU power was pretty genius on id's part. I wonder how the trig was handled without floating point arithmetic?
It was built into the compiler; they didn't even need to think about it. But of course, it is always better to avoid floating point, as doing floating-point math without an FPU is expensive.
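For what it's worth, engines of that era typically sidestepped runtime floating point entirely with precomputed fixed-point lookup tables (Wolf3D's released source contains tables along these lines). A minimal sketch; the table size and the 16.16 format here are illustrative, not the game's actual layout:

```python
import math

FRACBITS = 16                  # 16.16 fixed point: 16 integer, 16 fraction bits
SCALE = 1 << FRACBITS
N = 1024                       # table entries per full turn (illustrative)

# Built once at startup (or even at asset-build time), so no floating
# point is ever needed per frame; only integer ops remain.
SINE = [round(math.sin(2 * math.pi * i / N) * SCALE) for i in range(N)]

def fixed_sin(angle_index):
    """Sine of (angle_index / N) turns, as a 16.16 fixed-point integer."""
    return SINE[angle_index % N]

def fixed_mul(a, b):
    """Multiply two 16.16 fixed-point numbers using integer math only."""
    return (a * b) >> FRACBITS
```

Representing angles as table indices rather than radians also makes wrapping around a full turn a simple modulo (or, with a power-of-two table, a bitmask).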
Now I see why Carmack was so resistant to adding "push-walls" when John and Adrian pestered him to. That breaks the grid stepping. I wonder how he went about it in the end?
It's funny to stumble upon a video by the guy who made the Compiler Explorer. Thanks for giving the tools and ideas for tinkering! =) And thanks for the notes on self-modifying and self-generating code: made me really want to dig into Wolf3D's internals.

Why does the fisheye effect occur with the naive hit distance calculation? Geometric intuition suggests that everything should be OK if you choose the field of view correctly.
I obviously didn't explain myself well about the fisheye. Field of view doesn't come into it. Imagine a wall directly 10 feet from you. If you were to take a photo, the wall would appear in the photo as a rectangle, right? No fish eye. However, if you measured the distance from your eye to the wall directly in front of you, you'd get 10 feet. But it's further to the left and right edges of the wall, maybe 14 feet away (depending on how long the wall is). If you scaled the left edge as 14 feet away, the middle as 10 feet away and the right as 14 feet, you'd draw something like a fish-eye wall. So the takeaway is: the scaling distance can't be the straight-line distance from the eye to the object, but rather the perpendicular distance from a plane at the screen. I hope that's (maybe) a bit clearer! I've seen engines use the straight-line distance and then "fudge" it to undo the fisheye effect instead of calculating it properly as Wolf does. Changing the field of view only changes the angles between each ray cast out; it doesn't create or remove the fish-eye effect (although if you have a very large field of view you see something similar).
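The explanation above boils down to one cosine: scale each wall slice by the perpendicular distance, which is the straight-line distance times the cosine of the ray's angle from the view direction. A small sketch (the screen height and scaling constant are arbitrary, not Wolf3D's values):

```python
import math

def wall_column_height(euclid_dist, ray_angle, view_angle, screen_h=200):
    """Height of a wall slice, scaled by the perpendicular distance.

    Multiplying the straight-line (Euclidean) hit distance by
    cos(ray_angle - view_angle) projects the hit point onto the plane
    the player faces, which is what removes the fisheye.
    """
    perp = euclid_dist * math.cos(ray_angle - view_angle)
    return screen_h / perp

# A flat wall 5 units directly ahead: a ray 30 degrees off-centre travels
# further (5 / cos 30) to reach the same wall, yet both columns come out
# the same height, so the wall draws as a rectangle rather than a bulge.
center = wall_column_height(5.0, 0.0, 0.0)
off = wall_column_height(5.0 / math.cos(math.radians(30)), math.radians(30), 0.0)
```

Dividing by the raw `euclid_dist` instead is exactly the naive approach that bows the wall outward.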
@@MattGodbolt thanks, got it! But what if I aim not to simulate a flat, non-distorted picture, but to make the screen "transparent"? In this setup the source of the rays matches the viewer's position, and the virtual screen FOV matches the real one. Would the straightforward approach be more appropriate then? And why isn't it used in game engines? Too narrow an FOV? What I want to overcome by this approach is the unnatural view stretching. It's noticeable when you rotate the camera: everything looks way smaller in the center of the screen than on its periphery (yay, I know the reason for this warping now).
Wolf3D came out in 1992. Doom came out in 1993. My family got a 486DX/33 with 8MB of RAM in 1993. You could just barely run Quake on it (if you call getting single-digit fps with the sound breaking up "running"). It cost AUD$4000, which is about double that in today's money. Hardware was expensive. Hardly surprising that id targeted older stuff like the 286 for Wolf3D and the 386 for Doom.

3D accelerated graphics cards were a mid-1990s thing. The first id Software game to get support for them was Quake 1 with the GLQuake update, but Quake 1 started off with software rendering only. Quake 2 still came with a software renderer, and it wasn't until Quake 3 in 1999 that they were common enough that id dropped software rendering entirely. The original Unreal from 1998 also includes a software renderer (and a pretty impressive one at that, as it dithers the environment textures to fake bilinear filtering, which I've never seen in any other engine), but later games using the engine don't bother to include it.
@11:05, if I am not mistaken, One Lone Coder performs this (calculating p) an easier way: first initialize the player position at the midpoint of the map. For example, if the map is 16x16, we initialize the player position as float playerX = 8.0f; float playerY = 8.0f; so p is 8.0.
I disagree with calling a 286 "state of the art" for 1992. Commonly available, and a good target system to develop for? Yes, but 386s and 486s had already been out for a few years by then.
Agree. The 486 was introduced in 1989; it was actually a bit old by 1992 - the 486DX2-66 was top of the line in 1992. The Pentium was released in 1993. 386s were still around but fading away. 286s were pretty rare, and while Wolf3D may have run on one, it didn't run particularly well.
If you are interested in more information, I recommend Fabien Sanglard's book; he describes the whole magic of Wolf3D in very descriptive language. The whole engine, not only the rendering. Loved reading it and looking forward to his next book about Doom. Love your awesome vid about the rendering of Wolf3D too, @Matt! A picture's worth a thousand words; a video is worth a thousand pictures. Looking forward to your further videos!
Personally, I always thought that the fisheye effect was much more desirable than the "corrected" version that has become standard, for the simple practical reason that you can play with a higher FOV with the simple distance renderer and not have the central focal point in the middle of the screen be so small and the edges be so big. I really wish more renderers kept the simpler distance render for that reason alone. It makes it easier to see when playing.
It should be noted; Ken Silverman's Build Engine could actually do room-over-room tracing, even though it was primarily a raycasting engine. I'm not exactly sure how it worked, except that doorways and windows that you could pass through worked by teleporting the player to different places in the map.
@@navithefairy It's the same trick used by Rise Of The Triad: treat the view like it's really tall and slide the screen up or down, then render the bit that'll actually appear. Unlike a proper perspective-correct 3D rendering, it will keep the walls dead-vertical when you look up and down, whereas a proper rendering will have them turn into diagonals.
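The sliding trick described above (often called "y-shearing") is easy to sketch: render into a buffer taller than the screen, then choose which screen-sized window of rows to display. The buffer sizes here are made up for illustration:

```python
def visible_rows(full_height, screen_height, pitch):
    """Pick the window of an extra-tall render buffer to display.

    pitch 0 centres the window; positive pitch slides it toward the top
    of the buffer (looking up). Walls stay dead-vertical because columns
    are only shifted, never re-projected.
    """
    top = (full_height - screen_height) // 2 - pitch
    top = max(0, min(top, full_height - screen_height))  # clamp to the buffer
    return range(top, top + screen_height)
```

Because the projection never changes, this costs almost nothing per frame, which is why it suited engines like ROTT and Build; the price is the diagonal-free "fake" look-up/down the comment describes.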
Thanks for the great video, very interesting. Have you seen the Game Engine Black Book: Wolf 3D by Fabien Sanglard? it goes into a lot of this detail, it's a great read.
A 486 might well have been out 3 years before Wolf3D, but they were super expensive and only really affordable for business use. A 286 was a realistic "home PC", but yeah, not state of the art. Mine was made by ICL and had a whole megabyte of RAM and a 40MB hdd. It was a massive upgrade from my Atari ST - it did VGA! ;-)
I knew many people who still used the 286 in 1992. In fact some even used older PCs for accounting and such. It wasn't a paperweight unless you were a hardcore gamer. Edit: Atari ST
We got our first 486 in 1993. It wasn't cheap as such, but it was considerably cheaper than the 286 we got in 1990. That does give you some idea of when the 486 started to become common. I mean, there's sometimes a big gap between something first being available and it being common. There were computers with a recognisably modern GUI in the mid-'70s, but it wasn't until the mid-'90s that they were common enough to be the norm, and not something of a niche thing...
I'm sorry, but this has become sort of a pet peeve of mine (especially with all the videos about real-time raytracing around these days): Raycasting is simply the process of "firing" (or casting) a ray from one position in one direction into a scene and checking what it hits. That's all there is to it. It's still in use in almost every single game you play today, for example to check what enemy a bullet hits. In itself, it has about as much to do with rendering as a hammer has to do with building a house. Raytracing generally describes algorithms that use raycasting to render an image. Everything beyond that is not included in the terms on their own.

Two other things I noticed: 1. You're only using about a quarter of the screen. I imagine that's a bit annoying for people watching this on a smaller device. 2. The step done around 12:35 might have been easier to follow if you circled the terms replaced by delta X and delta Y in the equation for p.

Still, a very nice and to-the-point explanation of the algorithm.
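To make the distinction concrete, here is raycasting in that bare sense, with no rendering anywhere in sight: fire one ray and report the first thing it hits (the bullet test from the comment). The target shapes and names are invented for illustration, and the direction is assumed to be a unit vector:

```python
import math

def raycast(origin, direction, targets):
    """Return the name of the nearest target the ray passes through, or None.

    `targets` are (name, (cx, cy), radius) circles. `direction` must be
    a unit vector, so `t` is the distance along the ray to the closest
    approach to each circle's centre.
    """
    ox, oy = origin
    dx, dy = direction
    best_t, best_name = math.inf, None
    for name, (cx, cy), r in targets:
        t = (cx - ox) * dx + (cy - oy) * dy   # project centre onto the ray
        if t < 0:
            continue                          # target is behind the origin
        px, py = ox + dx * t, oy + dy * t     # closest point on the ray
        if (px - cx) ** 2 + (py - cy) ** 2 <= r * r and t < best_t:
            best_t, best_name = t, name
    return best_name
```

Nothing here knows about pixels or screens; a renderer is just one possible caller of an operation like this.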
Thanks for the comments. You're not the first to point out my mistakes here :). If I were to redo things, I'd use more of the screenspace and make clearer the definition of what ray casting is.
@@barrybathwater4877 Well... yeah... they are definitely used while building houses, but using a hammer does not mean you're building a house... which was the point.
There's a better way to calculate how large the column of pixels for the wall should be than calculating the normal distance. Calculate the square of the normal distance, and use that directly.
There's a mistake in the definitions of raycasting and raytracing. In raycasting, you fire a ray for every pixel on the screen and stop when the ray intersects an object. In raytracing, you fire a ray for every pixel on the screen, and when the ray intersects an object you fire another ray from the point of intersection in the direction in which your ray would be reflected by the object, and you repeat this until your ray intersects the light source, thus tracing a ray of light from the camera to the light source. In raycasting, the color of a pixel on the screen is determined by the color of the object at the point of intersection between it and the corresponding ray you cast, and maybe the distance. In raytracing, the color of the pixel is determined by the color of an object as it is lit.

The renderer in Wolfenstein 3D does even less than a normal raycasting algorithm would, because it is only capable of rendering a certain kind of scene with a certain kind of camera: you can't look up or down, and the floor and ceiling are always the same color, so the renderer only needs to cast one ray for every column of pixels, as the video says, to determine how far a wall is. And since the only other type of object in the scene is a billboard/sprite that always faces the player, raycasting is only needed to render the wall textures.
Your definitions aren't correct either. Raycasting traces a single ray PER COLUMN (or row, in theory) of pixels, and determines the point at which it intersects a surface. You then use this to determine what column of a texture should be drawn, and the distance to determine the length of the line (and, in more advanced raycasting engines, some kind of lighting parameters). That's raycasting.

Raytracing also doesn't do what you think it does. It traces a ray from each pixel and looks for an intersection with an object. If it's a fully reflective surface, it then traces a line based on the angle of incidence and repeats the process until it hits the bounce limit set by the renderer. It doesn't trace until it hits a light source; rather, when it hits a surface, it traces a line to each light source, determines the angle of that light source to the surface at the point of intersection, and uses that to determine the lighting contribution.

What you're calling raytracing is a process called path tracing. What you're calling raycasting is, to my knowledge, a non-existent algorithm. What Wolfenstein does is how raycasting is described in every book I've ever read on the subject.
@@KuraIthys Tbh I would argue that there's no real difference between raycasting and raytracing. In all these ray-based algorithms, you take a ray, trace/cast/fire/whatever it in some direction until it hits something, and then do something with the result. We need better names to differentiate between all the different rendering algorithms based on rays.
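The per-column technique the thread is debating can be made concrete. Below is a minimal sketch (the map, player position, field of view, and step size are all invented for illustration) of what the video describes: one ray per screen column, marched until it enters a wall cell, with the hit distance determining how tall that column's wall slice would be drawn.

```python
import math

# A tiny grid map: '#' is a wall cell, '.' is empty floor.
MAP = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, max_dist=100.0, step=0.01):
    """March the ray in small steps until it enters a wall ('#') cell."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

def render_distances(px, py, facing, fov=math.pi / 3, columns=8):
    """One ray per screen column, fanned evenly across the field of view."""
    half = fov / 2
    return [
        cast_ray(px, py, facing - half + fov * c / (columns - 1))
        for c in range(columns)
    ]

# Player in the middle of the room, looking toward the east wall.
dists = render_distances(2.5, 1.5, facing=0.0)
```

Each distance then sets the height of one vertical wall slice (taller when closer). The small-step march here is deliberately naive; Wolf3D instead jumps from cell boundary to cell boundary, as the video explains.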
I had a 386DX 40MHz then, and I clearly remember that it didn't run well at full screen even on that machine. I was quite jealous of the 486DX2 66MHz because Wolf3D ran perfectly on that, but not on the 386... On a 12MHz 286 there is no way to talk about well-running Wolf3D at all if it's close to full screen :D Not to mention Doom, which was awfully slow at full resolution on my 386; serious compromises were needed to play. With a 486DX2 it was much more playable...
I did a project on explaining this exact thing about a month before this was made. Research for it was extremely difficult. This would have been "very useful"
We are really spoiled as programmers these days. We can simply buy or download an engine like Unreal or Unity, tell things where to draw and be done with it. These guys had to do absolutely everything from scratch.
They didn't have to do most things from scratch. If you want to talk about doing things from scratch, we can discuss how for instance Roller Coaster Tycoon 1 was programmed in Assembly. Now that is doing it from scratch.
>There's no need to reinvent the wheel. You're wrong about this. Many great game designs came from reinventing the wheel, screwing up, and discovering a new mechanic. I actually feel a little sorry for you young guys that you don't get elbow-deep in the Primal Act of Creation that becomes emergent behavior by accident. That's why every game is more like a sequel or clone of another game, instead of being a whole new wild idea.
@@johnraptis9749 I'd say that's more than a little because mobile marketplace is such a huge area, and absolute complete cancer at the same time. There is nothing original there at all.
Good but unstructured code. It works, but it is insane. It applies 16-bit arithmetic to 32-bit words. It is not portable because it requires an old C compiler that defaults to unsigned comparisons. There should be a disclaimer.
@@kentlofgren I think a lot of people will try to implement this craziness on a different compiler. Of course without knowing that (-dx) means static_cast(static_cast((1L
Definitely one of the better explanations I've heard, nice job! This is a little unrelated, but are you the guy that made the Godbolt compiler explorer?
Wolfenstein 3D has always fascinated me. For the longest time I have wanted to understand how it was made, and after this video... I still don't. That is not the video's fault; I was never good at math. It's a start though, and thanks for that. I feel like there could have been some more information on screen once the explanations started. The drawings were great, but maybe some supporting text too. Just a thought.
I'm pretty curious about the self-modifying code of the ray casting loop. How was it even done? And was this common practice when writing tight loops? It's seriously cool to think about, but in hindsight, I can see how much of a pain in the ass it could be for someone who doesn't know about it.
I'm not sure how common it was, but Wolfenstein did it here: github.com/id-Software/wolf3d/blob/05167784ef009d0d0daefe8d012b027f39dc8541/WOLFSRC/WL_DR_A.ASM#L235 patched the `jge` etc :)
Wow, that was way simpler to understand than I thought it would be. Guess I'll be reading the Wolfenstein source code this weekend. Amazing video as usual!
The technique is fairly common in assembly code, which forces a fair level of spaghettification anyway. The speed critical bits like 3D rendering would be where you invested in assembly coding, which is far slower to develop.
Great video! I have a question: at 5:09 we can already find the coordinate of the first horizontal intersection. But in raycasters we need to check both horizontal and vertical intersections. The question is: in that case, how do we find the x,y coordinate of the first vertical intersection?
At 6:19 I show the first vertical intersection. I wasn't clear: there's also a dispatch on which of the four quadrants the ray is being cast on, which sets the scene for which direction to check first.
Yes, there is a class of programmers which simplify their formulas, finding interesting shortcuts in complex math algorithms. And then there is your business software colleague, the one who converts a number to a string to check if it is positive.
This example, written in Javascript, roughly shows the concept. On the left you can see the players POV and on the right you can see the top-down view in relation. wesmantooth.jeremyheminger.com/wip/pseudo3D/index.php Use your arrow keys to move the player.
Were floating points more expensive to compute back then? And if so, did they use integers for all those calculations? Thanks for a great video btw.
Igor Mitrovic absolutely! There was no hardware floating-point unit. Software routines for floating point had to be written, and software FP is at least 100x slower than hardware. Floating-point numbers have to be unpacked, added or multiplied, renormalized, and then repacked. Or something like that. These are all integer operations and so can be written using just normal integer instructions; hardware is far more efficient at this, though. Googling "software floating point" gets you a bunch of results if you want to look at code, else the Wikipedia article on floating point explains some of the algorithms. You can see how you might build the routines, but also that they're considerably more complex than just integer addition.
Fun fact: today's x86 int (long) divisions are about 10 times *slower* than float (double) divisions (possibly other operations too, but I did not check; for div it's about 10 vs. 100 cycles), making fixed-point algorithms slower than simply using floats! Blew my mind when we stumbled upon that while trying to figure out why our fixed-point port of an algorithm was way slower on x86 than the original.
If anyone is interested, there are great lectures on YouTube by Alexander Stepanov (the guy who wrote the STL in C++ and invented concepts) about many programming topics. In one of the lectures he talks about the cost of fixed- and floating-point operations used on different data structures. You will find that lecture here: ruclips.net/video/3K2LmnaLLF8/видео.html Basically, it looks like floating point has pretty much caught up to integers today, if you look at the speed of computation.
I am honestly not surprised, since there were ports to the SNES and other consoles of that era, and I think the 286 could outperform the CPU in the SNES.
Something like this: instead of having 360 degrees in a circle, have 256 of them. Prepare (for example) a table of sin() by having sin(0/256), sin(1/256), sin(2/256)... etc. Now you can get sin(angle) by looking at entry "angle" in the sin table. For the entries: we know that sin() ranges between -1 and +1. So we use a signed number between -128 and 127 to mean -1 to +1: we store (sin(angle) * 128) in our tables, as an integer. So now we have a 256-byte table of sin. To multiply by sin(angle), we take a number like "150" and multiply it directly with the sin value. Then we shift it down by 7 (to account for the 128-scaled sin()). That way we get an answer in the right domain. Later you can note that there's a close relationship between sin() and cos(), and can reuse parts of the table. tan is a little trickier, as is its inverse. But hopefully this gives a little taste!
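As a sketch of the table just described (assuming the 256-angle, 128-scaled convention the comment uses):

```python
import math

# 256 "degrees" per circle; sin values scaled by 128 and stored as integers.
# A multiply plus a shift right by 7 then replaces floating-point sin entirely.
SIN_TABLE = [round(math.sin(2 * math.pi * a / 256) * 128) for a in range(256)]

def fixed_sin_mul(value, angle):
    """Integer approximation of value * sin(angle), angle in 0..255 units."""
    return (value * SIN_TABLE[angle & 0xFF]) >> 7

def fixed_cos_mul(value, angle):
    """cos(a) is just sin(a + 64) in 256-angle units (a quarter turn ahead)."""
    return fixed_sin_mul(value, angle + 64)
```

So `fixed_sin_mul(150, 64)` gives exactly 150 (sin of a quarter turn is 1), and off-axis angles are accurate to within a percent or so — plenty for a 320-pixel-wide screen.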
Floating point hardware existed but wasn't common until the Pentium. It was optional with 486s - the DX had floating point, the SX did not. Prior to that it was a separate chip (math-coprocessor) available all the way back to the 8087. See also the 287 and 387 chips - the math coprocessor counterparts to the 286 and 386. Many motherboards had space for the math coprocessor but it was left empty. Very little software took advantage of floating point hardware back then - CAD and spreadsheets come to mind, but both had software floating point algorithms to fall back on if the hardware wasn't present.
Badtanman They had floats in 1992, but integer math was faster before the Pentium. On the Pentium you could start a floating-point operation, do a bunch of integer math, and then read the float result X clock cycles later thanks to the dual U and V execution pipes. This is how Quake was able to get ~8-16 texels for free with its software perspective-correct texture mapper on the Pentium. Nowadays integer math is _slower_ than floats, so everyone uses float32 or float64 for the increased precision, speed, and convenience.
asdf asdf haha, more a mixup in my understanding of school systems. I went to school in the UK and so it would have been "fourth year" of senior school or something like that...
I always think that the American system is pretty simple: In total there are 12 grades, 1-5 is Grade School, 6-8 is Middle School, 9-12 is high school. Anything after is college of some kind...
Henry H. I was hoping I covered that when I said about the scaled renderers. I didn't go into too much detail I guess. There were "JIT" compiled routines to scale all possible output sizes of a vertical slice of texture. And the textures were stored sideways to make that more efficient. Sorry I didn't get into details there!
Ultima Underworld had somewhat more 3D than Wolfenstein 3D (both games appeared at roughly the same time, according to the wiki). Was the former intended for beefy computers, or was it not as dynamic as Wolfenstein and therefore didn't need as much computational power?
Ultima Underworld had sluggish controls, a lower framerate, and a smaller rendered window. It also wasn't much more 3d: IIRC it still enforced a coarse grid structure on the map and vision range was very restricted.
@@danpowell806 I do believe that it had different floor and ceiling heights, though, and maybe even floor slopes? 'Course, the renderer as a whole was a lot less efficient than what Doom would crank out later, so it was definitely a case where it only really worked because the game was slower paced than Wolf or Doom.
Okay, I see how this can work for steps and angled walls (like standing in the middle of a hexagonal room), but how would you add distinct textures for different ceilings and floors? Wouldn't Wolfenstein 3D always be limited to the "sky" and "ground" being one colour or one unmoving texture?
You could add in a textured sky or ground using this method, with a little bit of cheating. Let's say the sky for the whole level is a large image. You then have to figure out where on that image the player is, which you could use their x,y position in the map for. Then you need to rotate the image to match the player's orientation, and then render it with scaling before drawing the walls. A pretty basic demo effect. The problem is that doing it this way, you're writing the pixels where the walls are twice, which will slow things down. You could use a z-buffer, drawing the walls first and then only doing the ceiling where you haven't drawn anything yet, but it's still going to be relatively slow, particularly for the machines Wolf3D was written for.
@@deanolium But isn't rotating an image extremely computationally hard? What is 3D if it isn't scaling and rotating many many different images all assembled together into polygonal models? Wouldn't a huge level therefore need a single absurdly high resolution image as each part of it is viewed from fairly close?
@@Treblaine Nah, rotating an image on the z-axis is relatively easy. Essentially you just need to figure out the x,y displacements for each pixel when you move across one x or y position in screen space, which can be done with a little trig, ideally using look up tables, and then repeated. Of course, you need to create the rotation across the x axis (after it's been rotated) so that it looks like a ceiling or floor, but you can cheat to mimic that by doing some scaling on each row of the screen. As for the image needing to be large, in reality you just tile the image, which computationally is very, very cheap as you can just modulo the x and y points.
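A toy version of the "tile with modulo" idea from the reply above (texture size, horizon row, and projection scale are all invented for the example): project a screen row below the horizon back to a world position, then wrap that position into a small repeating texture with modulo.

```python
import math

# A 4x4 checkerboard stands in for a floor texture.
TEX_W = TEX_H = 4
TEXTURE = [[(x + y) % 2 for x in range(TEX_W)] for y in range(TEX_H)]

def floor_texel(px, py, angle, screen_row, horizon=100, scale=60.0):
    """Texture value at the floor point visible on a given screen row."""
    # Rows just below the horizon are far away; rows further down are close
    # to the player (classic perspective floor-casting).
    row_dist = scale / (screen_row - horizon)
    wx = px + math.cos(angle) * row_dist
    wy = py + math.sin(angle) * row_dist
    # Modulo wraps any world position into the small repeating texture,
    # so one tiny image covers an arbitrarily large level.
    return TEXTURE[int(wy) % TEX_H][int(wx) % TEX_W]
```

This is why the image never needs to be huge: the modulo makes tiling essentially free, exactly as described.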
Another high quality video with excellent explanation, as usual with your content. Thank you! One small note though: the graphics were perhaps a bit too small, the lines thin and faint, and the annotations a bit tiny. On mobile, I could see them fairly okay, but someone with a smaller screen and/or less good vision could have a hard time. And especially thanks for adding captions (to whoever created them, if it was submitted, and not made by you), on top of your already clear and understandable speech!
Thank you for the feedback! I'm still learning how to best record these, and getting the screen zoomed up appropriately is something I just worked out how to do! Sorry the diagrams suffered from being too small here!
I'm pretty new to programming, but wouldn't it be much easier to cast every ray from a different point on a line perpendicular to the player (facing the player at a 90° angle; imagine the letter "T": the player is the leg "|" and the rays are cast from points along the "-" part)? Could that lead to an orthographic perspective, and if so, would it even matter? Or cast the rays from the same point and then shorten them toward the middle (lines closer to the middle get shortened by a larger value) to compensate for the fisheye effect? Please correct me if I'm missing something.
@@MattGodbolt First of all, I would have the player face the wall at a 90° angle, so all the rays should ideally be the same length, which they never will be because of the fisheye effect. I would measure the lengths of all the rays, and then try to find out whether the progression from the shortest to the longest ray can be represented by some mathematical function (like a parabola or something), and then apply a reversed version of that function, which would potentially cancel the fisheye effect. But it's late at night in this part of the world as I'm writing this, so maybe I'm talking nonsense.
@@daifee9174 if you're casting every ray from a different point on the perpendicular...what angle are you casting? It doesn't seem to matter if you cast them from the player's position, or a point on the plane. Then the "reversed version of the mathematical function" is distance dependent, and it needs to be calculated. There are engines out there that do this, but the "reverse transform" is more expensive than avoiding it in the first place :)
@@MattGodbolt If I were to cast rays from different points on the line, they would have to be perpendicular to that line. I'm not sure my poor English can accurately describe what I mean, so I'll show it with letters again. It would be like "•E": the player is the "•" and the perpendicular line is the vertical part of the letter E, with the rays cast from that line like the three horizontal parts of the E.
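For what it's worth, the standard fisheye fix doesn't need a fitted function: multiplying each ray's Euclidean length by the cosine of its angle relative to the view direction projects it onto the view axis, which cancels the distortion exactly. A small sketch (the wall distance and angle below are invented):

```python
import math

def corrected_distance(ray_length, ray_angle, facing):
    """Project a ray's Euclidean length onto the player's view direction."""
    return ray_length * math.cos(ray_angle - facing)

# Player faces a flat wall 1 unit away (facing = 0). A ray cast 30 degrees
# off-centre reaches that wall only after 1/cos(30°) ≈ 1.155 units...
off = math.radians(30)
raw = 1.0 / math.cos(off)
# ...but the corrected distance is 1.0 again, so the wall renders flat.
flat = corrected_distance(raw, off, 0.0)
```

Rays toward the screen edges travel further to reach a flat wall, and the cosine cancels exactly that excess, which is why no per-engine curve fitting is needed.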
See kids, this is why you need to learn Pythagoras and trig at school. Those normal vectors won't calculate themselves ;-)
James Grimwood Pythagoras' Theorem has become somewhat of a meme in my maths class, with great effort going into using it as much as possible, and the first suggestion for solving every problem is usually Pythagoras.
@youtubeShadowBan Lets hope if you ever get a job, they don't look at your social media
@youtubeShadowBan it is not triggering me at all, but your wording and the racist videos just speak for themselves and tell me a lot about you as a person. Have a nice day.
Ah, good quote..
I know that math is important, but that is why some people developed libraries that, with some easy functions, can automatically do the raycasting or the raytracing 🧐
As advanced as Wolfenstein's engine is, I find it just makes Doom more impressive. Doom's engine is so much more sophisticated than Wolfenstein's, it's actually insane to think only one and a half years passed between Wolfenstein 3D's release and Doom's release.
@etresevo It helped that they were targeting a much higher system specification.
@@daishi5571 Not really. i486 instead of a i386.
@@robertforster8984 Well a 486dx2 is certainly a lot faster than a 386dx40
@@robertforster8984 Wolfenstein's code is 16-bit; id was targeting 286s.
As a side note, it was going to be a 16-color EGA game until late in its development.
It’s also a lot harder to go from 0 to 1 than it is to make a feature reach version 2+
...I just posted my own raycaster video and fun tutorial! I love how you made your video :)
this just makes me appreciate how powerful even a 286 is. For a person to calculate one frame of Wolfenstein by hand with pen and paper might take a whole week, and that little ancient CPU did it in real time. Just wow.
A whole week is an overestimate; it might take a couple of minutes. But yeah, your point is still valid. Computers are crazy.
@@sdsdfdu4437 You think it would take you "a couple of minutes" to compute an entire frame?
Are you sure about that?
@@gorgolyt Yes, we're talking about wolfenstein3d here.
@@sdsdfdu4437 Wow. You're quite dumb, aren't you? It's still 128 pixels across. So you have to do all of the calculations in this video in about one second. 😂
And that's just the first step, ignoring the vertical textures.
@@gorgolyt Jesus fine, so it'll take an hour or two.
My algebra teacher would be proud
this is trigonometry
lol
Great video, but please make your graphics a bit bigger. You’re barely using 30% of the available space. On a cell phone, this does not help.
Mahj thanks for your feedback. Definitely will use bigger graphics in future. Sadly YouTube won't let me edit a video, only replace it... hopefully you can find a way to zoom to see these.
An Enemy thanks for your constructive comment. I'd rather not replace. I'll consider adding another with a more zoomed view
An Enemy, People like you discourage new creators and old ones alike from making fresh and innovative content. I hope you're proud of yourself.
@Panguin > "An Enemy, People like you discourage new creators and old ones alike from making fresh and innovative content. I hope you're proud of yourself."
I assume from context that "An Enemy" is the screen name of a previous commenter, to whom "Panguin"'s comment is a reply. But what was that comment? I only see 3 replies to "Mahj"'s constructive comment about the video only using 30%. Was it deleted, did YT delete it on its own, or was it sorted into God-knows-where in the comment "stream/blizzard/whatever-the-heck-comment-"discussion"-forums-these-days-are-called"? I'm confused..
Panguin People like them actually help new and old creators improve the quality of their content, helping them communicate information easily to their audience and increasing retention. So they should be proud of themselves.
I can't believe there was no floating point at this time. Really respect them.
If a Wolf3D map has the edges exposed, you can look and go outside the level.
In the north and south directions, the wall data repeats. In the west and east directions, it is from other areas of memory, which results in a bizarre world of mixed up blocks, doors, and empty spaces.
This was easy to understand, that is until 2 minutes into the video when my brain failed me.
agreed, I'm fried. I wish I'd paid more attention in math at school! DAMMIT!
Same here
I'm actually going to try and implement something like this inside a more primitive game engine. Maybe even inside a command line using ASCII. Thanks for the detailed explanation!
If you do decide to do this, please remember to post an update here. Would be cool to see, Thanks!
Somebody took it even further and created a raycasting engine in Factorio ruclips.net/video/7lVAFcDX4eM/видео.html
To be fair, even in the 90s, game engines did exist. But they were often incredibly specific to one project because you can't waste any resources.
Look up raycasting tutorials. I wrote a raycasting engine in Java last year following one of those. I even added a custom feature to render animated GIF textures. It's super cool and crazy easy to build a simple game engine.
This video hints at the genius of the Wolf3D code, but actually implementing a raycaster really drives the point home. Even on modern hardware, it's not trivial to get the same performance that id Software managed to squeeze out of a 286!
I implemented something very similar to this recently, without ever having heard of raycasting before. I did it using ASCII in the command line, and then moved on to actual pixels. It was really laggy, however: I was simply moving my rays through thousands of very small steps in the direction of the walls. It also had that strange fisheye effect you mentioned. Nonetheless, it was quite fun to make!
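The lag described above usually comes from marching the ray in thousands of tiny steps. The standard alternative, a grid-traversal (DDA) approach sketched here with an invented map, jumps straight from one grid-line crossing to the next, so each ray needs at most a handful of iterations:

```python
import math

GRID = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def dda_cast(px, py, angle, grid, max_steps=64):
    """Return the distance along the ray to the first wall ('#') cell."""
    dx, dy = math.cos(angle), math.sin(angle)
    map_x, map_y = int(px), int(py)
    # Ray length between two successive vertical / horizontal grid lines.
    delta_x = abs(1.0 / dx) if dx else math.inf
    delta_y = abs(1.0 / dy) if dy else math.inf
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # Ray length to the first vertical / horizontal grid line.
    side_x = ((map_x + 1 - px) if dx > 0 else (px - map_x)) * delta_x
    side_y = ((map_y + 1 - py) if dy > 0 else (py - map_y)) * delta_y
    for _ in range(max_steps):
        if side_x < side_y:          # next crossing is a vertical grid line
            dist, side_x, map_x = side_x, side_x + delta_x, map_x + step_x
        else:                        # next crossing is a horizontal grid line
            dist, side_y, map_y = side_y, side_y + delta_y, map_y + step_y
        if grid[map_y][map_x] == "#":
            return dist
    return math.inf
```

From (1.5, 1.5) looking east, the ray enters the wall column at x=4 after exactly 2.5 units, using three iterations instead of hundreds of fractional steps.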
There is an effort to port Wolf3D to the Commander X-16, an 8-bit 65c02-based machine. Casting all of these rays is time-consuming, and complicated by the 65c02's lack of a hardware multiply instruction. I found a way to significantly reduce the number of rays cast.
Rather than casting rays one column at a time from one side of the screen to another, the screen is first broken up into groups of columns of pixels. There are 304 columns on the screen, so rays are cast to column 0, column 256, column 288, and column 304 (note this column is just off the right side of the screen and not rendered). This breaks up the screen into a group of 255 uncalculated columns, a group of 31, and a group of 15. For each of the rays cast, there are several values saved in RAM: the map block hit, the face of that block (N,S,E or W), the X-intercept or Y-intercept (whichever applies for that face), and the distance in the direction the player is facing. For all the so-far-uncalculated pixel columns, the high byte of that distance is set to 255.
For the rest of the columns we do not immediately cast a ray. Instead, we first check to see if the high byte of the distance is still 255. If it isn't, then we skip the column and move to the next one on the list. However, if it is 255 then we look to the left and the right of this column, to find the closest column to the left which has already been calculated and the closest column to the right which has already been calculated. The order we test the pixel columns, the left pointer, and the right pointer can be stored in three precalculated arrays. The first array is the column to test {0,256,288,304,128,64,192,32,96,160, 224, 16, 48, 80,112,144,176, 208, 240, 272, 8, 24, 40, 56...} . The second array is the column already tested to the left {65535,65535,65535,65535,0,0,128, 0, 64, 128, 192, 0, 32, 64, 96, 128...} and the third array is the column already tested to the right {65535,65535,65535,65535, 256, 128, 256, 64, 128, 192, 256, 32, 64, 96, 128, 160...}. After the first four entries, the left-side entries are the largest number lower than the current column before it in the first list, and the right-side entries are the lowest number higher than the current entry in the first list. The column to test next is just the midway point of the largest remaining untested range, from left to right.
So the next column to test is 128. To the left we've already calculated column 0 and to the right we've already calculated column 256. Here's where the magic happens: do the column to the left and the column to the right both hit the same map block on the same side? If they do, then we don't need to cast the rays between them. We can just do a linear interpolation on the x-intercept/y-intercept value and on the distance value, as every column of pixels between two that hit the same block and face will be evenly spaced between them.
So we just subtract the value for intercept on the left from the value for the intercept on the right to get an intercept range constant, and subtract the value for the distance on the left from the distance on the right to get a distance range constant. Then we add the left value to some fraction times that constant, a different fraction for each column of pixels. Since the difference between the column number of the right one and the left one is always a power of two, we're always interpolating 1, 3, 7, 15, 31, 63, 127, or 255 columns of pixels. It's trivial to keep track of which interpolation routine to use, and the number of calculations in interpolation (two subtractions once for the entire range, plus two multiplication subroutine calls and two additions for each column in the range) is about the same as just the final step of raycasting: converting delta x, delta y, and beta into a distance.
On average, this reduces the number of rays that need to be cast down to approximately 305/(log(305)/log(2)) or 37 rays, more or less; about 87% reduction in rays cast on average. If you're standing close to and facing a wall, only the first four rays need to be cast and all the rest can be interpolated.
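The subdivide-and-interpolate scheme described above can be sketched in a few lines. Here `cast` is a stand-in for a real per-column raycast returning (block_id, face, distance), and recursion replaces the precalculated column/left/right arrays of the real port:

```python
def fill_span(cast, left, right, out):
    """Fill columns between two already-cast columns, casting only on demand."""
    lb, lf, ld = out[left]
    rb, rf, rd = out[right]
    if right - left <= 1:
        return
    if (lb, lf) == (rb, rf):
        # Both ends hit the same face of the same block: every column in
        # between is evenly spaced, so interpolate instead of casting.
        for c in range(left + 1, right):
            t = (c - left) / (right - left)
            out[c] = (lb, lf, ld + t * (rd - ld))
        return
    # Different faces: cast the midpoint and subdivide both halves.
    mid = (left + right) // 2
    out[mid] = cast(mid)
    fill_span(cast, left, mid, out)
    fill_span(cast, mid, right, out)

def render(cast, columns):
    out = {0: cast(0), columns - 1: cast(columns - 1)}
    fill_span(cast, 0, columns - 1, out)
    return [out[c] for c in range(columns)]
```

If the whole view is one flat wall face, only the two edge rays are ever cast and everything else is interpolation, matching the best case described above.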
This is precisely the kind of video I was hoping existed. I've been dissecting old FPS games. Thank you so much for making this. Know that it is greatly appreciated.
Every time I see Godbolt I know I'm in for a great time. It's insane how topical everything you post/dev is to me at the time. When I was writing an emulator you did a talk at GOTO. When I was writing a compiler you made godbolt. And now I'm tinkering with primitive 3D graphics and here we are. Thanks for everything Matt, I really appreciate everything you do. A lot of what made me want to go into a CS degree was stuff you've done. Thanks! PS shout-out to Chicago
TheVopi wow, thanks for the wonderful support! Very much appreciate your kind words. It's just lucky coincidence I've done things that are helpful to you :-). Great to hear though. And indeed, shouts to Chicago. You should check out the Chicago Open Source Open Mic Meetup, or the Chicago C++ user group if you're into such things!
Matt Godbolt I will! Thank you!
How do you get a degree in Counter-strike?
I don't know anything about advanced math, but I was always curious how they made 3D without actual 3D objects. It's really genius, especially since developers at that time had to overcome so many limitations; it's mind-blowing. Thanks for the explanation!
Thanks for this video, life saver. I was programming an FPS game in Python using sockets and pygame for fun at school. I kept wondering why I got the fisheye effect until I realised I was not calculating rays with respect to the perpendicular 'p', i.e. 'd'. Great video and really well explained.
now THAT was well elaborated in a very detailed way. Thank you Matt!
Our geometry teacher taught SOHCAHTOA as "Some Old Hippy Caught Another Hippy Tripping On Acid".
@@rodri_gl how the hell did you reach that conclusion
@@screamsinrussian5773 through Pythagoras'.
@@rodri_gl makes sense
Same
I live in Brazil and my math teacher taught us SOHCAHTOA too. It's so cool that it's the same tip in another country.
Back in 1984, I was 18, and it was the first year (83-84) that my school was offering a Computer Studies O Level, so I did it along with A Levels my last year of 6th form. Never one to be sensible, I decided one of my three projects (the other two were a version of Missile Command in BASIC (heh) and an "address book") would be a program to draw 3D graphics. It's only looking back now that really it's amazing I achieved anything at all. It drew wireframes, and there was no internet and no books on computer graphics available so I had to work it all out myself, and I remember long hours with pencil and notebook trying to work out how to do it.
Your mention of learning Trigonometry with "SOHCAHTOA" brought back memories of that. We never learned the acronym though. In the end my program did work. It could draw simple wireframes. My maths led me to a strange implementation where I had something like a vanishing point but at a fixed distance and objects with negative coordinates appeared beyond it, but getting bigger again the further away they were. I don't needless to say have the source code now so precisely what I did I don't know. I feel quite proud to have got that far though. The machines were British school standards- Research Machines 380Z and 480Z, with a networked floppy disk pack of 4 drives. Which felt like something from Star Wars.
At home, I had a ZX81. 3D graphics of any kind were a long way beyond it...
Except for 3D Monster Maze!
@@greenaum Indeed, how could I forget that? :)
6:14 and 6:31 — no, when stepping to the next cell you have to add +1 or -1 to the initial computation of dx/tan(θ) or dy/tan(θ), as you correctly describe later based on the source code.
When I was a kid I looked at these games and thought how crazy intelligent it all looked, that there are people who understand it. And now that I can understand it, it impresses me even more than it did before.
id Software took this several steps further with the follow-up game Doom, featuring much richer 3D map rendering, while still not requiring 3D hardware or a math coprocessor, although it needed more CPU power.
thulinp absolutely. I have plans to cover Doom and Quake too :)
thulinp If today's engines were this efficient, wouldn't movie-like games run smoothly on ten-year-old hardware?
Sounds exciting! Can't wait to see how Id Software pulled off such a great 3d engine for Quake
Define 3D rendering? All rendering happens in two dimensions, since you are plotting pixels on a screen.
@@SweetHyunho Today's engines are very efficient, the whole area of realtime rendering is advancing all the time, what makes you think they aren't optimised ?
It’s interesting to finally learn how similar & different my own attempt at raycasting back-in-the-day was. I don’t think I ever realized the proper way to eliminate the fish-eye effect. I think I ended up with something much more complex. I also remember that I adapted the Bresenham algorithm for drawing lines for the scaling routine.
There is a way of eliminating fish-eye; Doom uses a specific technique to do it. Buggered if I can remember how! Doom is actually really similar to Wolf3D. One secret of Doom's speed is that all walls are dead vertical. No slanting walls at all, just straight up 'n' down. You probably wouldn't notice that unless you knew to look for it. There's a few little genius shortcuts like that, which give the illusion of a world with much more freedom to the level design than there actually is.
Doom is drawn as vertical strips from top to bottom. Starts at the ceiling, down to any wall, then maybe some more ceiling and wall. Then eventually starts hitting floor and wall partway down each column.
Each point on the map in Doom has one floor height and one ceiling height. No floors above other floors. No bridges, no multi-storey buildings. Though there's some bits that look that way particularly in Doom II.
The real genius (well, some of it, there's a lot of genius) in Doom is setting up compromises where the machine could generate each frame fast enough, while having more freedom of level design than the simple flat grid of Wolf 3D. The second part, is designing levels that hide those compromises so it looks like there are less limits than there really are.
2D plane world... 2D raytracing... Modified 1D image (image in one line) vision... Looks like it's what 2D creatures would see.
well, what a 2D creature would see indexed against another 2D plane of textures.
Actually it's a 1d image with 2d projection.
I don’t know. I remember Wolfenstein running pretty poorly on a 286. You would want a state of the art i386 to play Wolfenstein well.
Plus around that time, the 486 was generally available anyway.
This shows how proficient a programmer Carmack was only about 3 years into professional game programming, starting from a couple of Ultima spin-offs (Shadowforge and Wraith for the Apple ][) in 1989. Jeez, that guy is a beast; he probably programmed non-stop, 12 hours a day, 7 days a week.
Gets a bit too trigonometric a bit too quick. It might be easier if you actually drew in the triangles when you talk about SOHCAHTOA. People are much better at understanding maths visually, usually, as shapes; human brains work better with concrete examples than abstract theory. I'm sure to you it's all the same thing cos you learned it so long ago you don't even think about it any more, but an important part of teaching something is knowing where the learner's brain is at, and bridging the gap between that and understanding.
Depends what you want to achieve.
Abstract material is transferable while reified material is not.
As an example, a common caveat in teaching fractions is using cakes and pies. It's very visual and most students can understand that taking two of four parts is the same as taking one of two parts. Going from cake to pie and then to enumerable objects is a real struggle. But with practice, students end up building a simple model... which doesn't transfer well!
That shows when students are asked to convert a fraction into a real number. The struggle comes back, as they didn't internalize that a fraction is a division in the first place and that they are simply being asked to "do the division".
It's also why most adults struggle with ratios. (Problems like "It takes 100 hours for 100 people to build 100 houses. How long does it take for one person to build one house?")
When taught using abstract concepts, it takes much more time for students to grasp. But once they do, they can solve any problem involving cakes, pies and jugs, and even convert fractions into reals.
This is not taught that way because most people do not need such profound knowledge, and it's OK if most people struggle with ratios all their life.
It's the practical side that won. But it also limits the discoveries we are making.
The same has been observed with reading. People lack vocabulary and are not proficient at reading because syllabic reading has been abandoned for the global method. The global method works through repetition: through a lot of reading, students remember words and even expressions. The issue is that it obfuscates patterns within words. Again, pragmatism won. People seldom use words like "hydrophobic", so it's fine if they have to look them up.
Schools "dumbed down" because there are not enough resources, but demand increased. Even 30 years ago, you could end up in a one-to-one session or in a very small group with the teacher. It was also quite normal to have hours of preparation work between two lessons and to be quizzed from the get-go!
What you can do, and that's what I did in the past, is explore and illustrate by yourself.
It's very different because your anchor is the abstract subject and not the other way around.
But again, starting with abstraction makes everything seem much heavier. But that's the weight of real knowledge!
Now imagine the amount of calculations in a modern game. Mind boggling🙌
Which is why you have abstraction and automation. In other words - engines.
@@pasijutaulietuviuesas9174 he said calculations.. not how hard it was to code. If anything the engine adds more things to calculate.
@@ct275 I know that, I also was specifically referring to calculations. Automation and abstractions reduce the amount of calculations the developers need to make themselves.
For example, if you want to find the sum of two integers, you perform a calculation - a+b. You want to find a quotient, you perform a calculation - a/b, where b ≠ 0. Soon, you find that you need the sum and the quotient so often that you define functions - sum(a, b) and division(a, b). Later, you need to find averages, so you use your functions - division(sum(a, b), 2). You find that you need averages so often that you define a function for it - average(a, b).
The more you automate and abstract your operations, the fewer calculations you need to perform yourself. The computer still performs every single calculation, but the video shows how developers THEMSELVES needed to perform these calculations in order to make it happen. Nowadays, you download an engine full of APIs and plugins and barely need to perform any of these calculations. That's the point I'm making.
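The sum/division/average ladder described in that comment can be sketched in a few lines of Python (function names are purely illustrative; note that the average divides the sum by 2):

```python
def sum_(a, b):
    return a + b

def division(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

def average(a, b):
    # Built from the two primitives: a sum followed by a division by 2.
    return division(sum_(a, b), 2)

average(4, 8)  # returns 6.0
```

Each new function hides the calculations below it, which is the same move an engine makes at a much larger scale.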
If you haven't looked into it, you have no idea, haha. Even something simple like a shader is incredibly complex. You have lighting (multiple light sources mind you), texture mapping, even various techniques to pick from. PBR for example. So you need to calculate Metallic, Glossiness, etc. Diffusion, Specular. Fresnel.
Just to draw one dot on the screen.
Knowing how to optimize code is still very important. Even a game with relatively low graphical fidelity can bring down the FPS if someone doesn't know what they're doing. Even high fidelity modern games can vary a lot in performance due to someone really understanding GPUs and CPUs at low level in their code. A problem is we rely on frameworks and advancing hardware to take up the slack. But software has not advanced anywhere near as much as hardware has. We still rely on a lot of frameworks and libraries with code written decades ago. There's also a lot of room for software optimization in machine learning ("AI").
Extremely underrated youtube-channel - keep up the good work, the followers will come :)
matt: explains how wolfenstein 3d's engine works
me: ikr is crazy
I'm pretty sure that either when or VERY shortly after that game came out I had a 386-sx 33 MHz with a separate external FPU I got a few weeks later, 2-4 MB RAM & a 40MB(lol) Hard Drive. & I felt Boss because of it. I am more certain that I had that same system for playing Doom. NO Q these things were like literal magic at the time. id/Carmack are Legends imo, I'm sure these things were ripe for discovery & implementation but he/they were the 1st to do it in a way that reached millions. I used to do modem to modem deathmatches with my uncle. But imho Quake was thE defining moment in history where everything really came together for 1st person 3D multiplayer gaming.
Don't forget Ken Silverman. Cloned Wolf3D as a teenager to make Ken's Labyrinth, then went on to make the BUILD engine that powered Duke Nukem 3D, Shadow Warrior, and Blood plus a number of less well-known games in the mid-1990s, and which got a revival for Ion Fury in 2019.
@@Roxor128 Yup, I haven't really seriously gamed since but the evolution of technology has been almost as interesting as the the tech itself.
I love the videos where someone more knowledgeable explains the algorithms used in old games and software.
This is amazing, why is youtube showing me this 3 years too late
Hi! I think when I've watched this video a couple of times I'll be able to make a renderer. I plan on using it in my current project. We'll see!
thanks, I read an article on this but it took this video for the concept to really sink in.
I'm sorry but I can't follow this, can you please explain why yIntercept is equal to ( y + dy + dx/tan(θ) ) ?
I thought since the space between the Vertical Intercepts is [-dx * tan(θ)], shouldn't it be equal to ( y + dy - dx*tan(θ) ) ?
I clearly remember the night I downloaded it off of CompuServe. While standing in the cell, I hit the cursor key to spin around and thought "We're not in Kansas anymore". Some of the other tricks you noticed straight away were using flat images for the "rewards" and other items you collected during play. The movable walls were another great trick. Thanks for the "maths" explanation.
This needs to be presented as a 70s Open University programme, wearing a kipper tie and drawing on a blackboard.
Next up, making the 6502 by hand as they did in the 70s!
Absolutely fascinating stuff; not just the code tricks, but it gives a kind of insight into the brain of John Carmack too! ... That guy's brain has got so many grooves it has its own Hausdorff dimension!
I would love to see a Javascript simulation of these algorithms working, especially that cunning Square-Root hack!
It's really 2D with depth projection for the x/y map. No height changes whatsoever.
Well I didn’t understand that (sitting here munching breakfast), but it looked really interesting, so I’ll definitely watch it again
trig is a high school subject.
IndiiSkies - And I’ve studied Engineering at Degree level you little snot nosed spod, but that doesn’t mean I can design a bridge before I’ve even drunk my coffee.
@@____________________________.x yikes
IndiiSkies - and its been 20 years since I last used any of this.
@Tommy Hopps - actually, zero tolerance for numpties is generally how we all turned out.
I'd be really interested in seeing that fisheye effect you mentioned at 3:38.
Just search up "raycasting fisheye effect."
It just looks like the player is looking through a fish bowl.
When facing and far from a wall, the lines (top and bottom) slowly converges at the ends of the screen; this is more apparent when the Fov is relatively wide.
Check out the coding train Livestream for a demonstration of that on P5 js ruclips.net/video/-6iIc6-Y-kk/видео.html
@@callsignseth7679 The middle section of Daniel's stream is about the 3D representation of ray casting and the fish-eye effect you get when you don't take these geometrical adjustments into consideration.
Ultima Underworld and I think Catacomb Abyss both have this issue.
There's a picture of the effect in the book "Game Engine Black Book Wolfenstein 3D" by Fabien Sanglard.
This is an older video, and I don't know if there has been a comment on the following issue...
At the time stamp 04:55, you are showing that "dx" is the distance of the starting point from the LEFT side of the square field.
But at the time stamp 06:20, "dx" is suddenly the distance from the RIGHT side of the square field.
Which "dx" is used in the final equation? Left or right side "dx"?
Wolfenstein was so awesome. I was so addicted to it. I moved on to Doom, then Heretic, then Hexen. Hexen was my favorite. Aw, so many wonderful memories. Thanks for the video. (liked)
This video has the charm of old educational material and does the job just as precisely, in a calm yet engaging way. Thank you! Subbed!
It's actually not that hard when you think about it but coming up with it in 1992 when nobody has done it before is the actual achievement. I was 15 at that time and it took me years to figure out how it works. There was no stackoverflow that you could consult at that time.
a 286 with no floating point was certainly NOT "state of the art" in 1992!
Tom P. Agreed. I still have my 386SX-16 from 1990. A 486DX2 was state of the art in 1992!
It wasn't 'state of the art', but it met the minimum system requirements.
I think he meant to say i386.
The use of the square grid to leverage CPU power was pretty genius on id's behalf. I wonder how the trig was handled without floating point arithmetic?
It was built into the compiler, they didn't even need to think about it, but of course, it is always better to avoid using floating point, as doing floating point math without an FPU is expensive.
I think I explain in the video (it's been a while...) - everything used fixed-point arithmetic with look-up tables to handle the trig functions.
Now I see why Carmack was so resistant to adding "push-walls" when John and Adrian pestered him to. That breaks the grid stepping. I wonder how he went about it in the end?
Good point. Maybe it used 2D sprite tricks to move it further away by just decrementing its height and width.
@@nickwallette6201 You could fudge the intercepts on tiles that contain the shifted wall.
If still curious, checkout the "Game Engine Black Book Wolfenstein 3D" by Fabien Sanglard. There's a section talking about how it was done
@@shifter65 thank you!
(we don't use self-modifying code anymore... sadly, but that's why we use self-generating code !)
It's funny to stumble upon a video by the guy who made the Compiler Explorer. Thanks for giving the tools and ideas for tinkering! =)
And thanks for the notes on self-modifying and self-generating code: made me really want to dig into Wolf3D's internals
Why does the fisheye effect occur with the naive hit distance calculation? Geometric intuition suggests that everything should be ok if you choose field of view correctly
I obviously didn't explain myself well about the fisheye. Field of view doesn't come in to it. Imagine a wall directly 10 feet from you. If you were to take a photo, the wall would appear in the photo as a rectangle, right? No fish eye. However, if you measured the distance from your eye to the wall directly in front of you you'd get (say) 5 feet. But it's further to the left and right edges of the wall, maybe 7 feet away (depending on how long the wall is). If you scaled the left hand as 7 feet away, the middle as 5 feet away and the right as 7 feet you'd draw something like a fish-eye wall. So -- the takeaway is, the scaling distance can't be the straight-line distance from the eye to the object, but rather the perpendicular distance away from a plane at the screen. I hope that's (maybe) a bit clearer! I've seen engines use the straight-line distance and then "fudge" it to undo the fisheye effect instead of calculate it properly as Wolf does. Changing the field of view only changes the angles between each ray cast out, it doesn't create or remove the fish eye effect (although if you have a very large field of view you see something similar).
@@MattGodbolt thanks, got it!
But what if I aim not to simulate a flat non-distorted picture, but to make the screen "transparent"? In this setup the source of the rays matches the viewer's position, and the virtual screen FOV matches the real one. Will the straightforward approach be more appropriate then? And why isn't it used in game engines? Too narrow a FOV?
What I want to overcome with this approach is the unnatural view stretching. It's noticeable when you rotate the camera: everything looks way smaller in the center of the screen than on its periphery (yay, I know the reason for this warping now)
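Matt's perpendicular-distance point from earlier in this thread can be checked numerically. This is a tiny sketch (Python, with made-up FOV and wall values): for a flat wall parallel to the screen, the straight-line distance along each ray grows toward the screen edges (that's the fisheye), while multiplying by the cosine of the ray's angle recovers a constant perpendicular distance, so the wall draws flat.

```python
import math

FOV = math.radians(60)
NUM_COLUMNS = 8       # tiny "screen" for illustration
WALL_DIST = 5.0       # a wall 5 units straight ahead, parallel to the screen plane

euclidean, corrected = [], []
for col in range(NUM_COLUMNS):
    # Angle of this column's ray, measured from the centre of the view
    angle = -FOV / 2 + FOV * (col + 0.5) / NUM_COLUMNS
    d = WALL_DIST / math.cos(angle)        # straight-line distance along the ray
    euclidean.append(d)
    corrected.append(d * math.cos(angle))  # perpendicular distance to the wall

# Scaling columns by `euclidean` bows the wall's edges away (fisheye);
# `corrected` is a constant 5.0 across the whole screen.
```

Changing FOV only changes how bent the fisheye looks, never whether it appears, which matches Matt's explanation.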
The explanation in this video is so articulate and easy to follow.
This is sarcasm
There were far more powerful processors in 1993 than a 286. We were up to 486 pushing into pentiums. There were also graphics cards at that time.
Wolf3D came out in 1992. Doom came out in 1993.
My family got a 486DX/33 with 8MB of RAM in 1993. You could just barely run Quake on it (if you call getting single-digit fps with the sound breaking up "running"). Cost AUD$4000, which is about double that in today's money. Hardware was expensive. Hardly surprising that ID targeted older stuff like the 286 for Wolf3D and the 386 for Doom.
3D accelerated graphics cards were a mid-1990s thing. The first ID Software game to get support for them was Quake 1 with the GLQuake update, but Quake 1 started off with software rendering only. Quake 2 still came with a software renderer, and it wasn't until Quake 3 in 1999 that they were common enough that ID dropped software rendering entirely. The original Unreal from 1998 also includes a software renderer (and a pretty impressive one at that, as it dithers the environment textures to fake bilinear filtering, which I've never seen in any other engine), but later games using the engine don't bother to include it.
@11:05 - if I am not mistaken, OneLoneCoder performs this (calculating p) the easier way: first initialize the player position at the midpoint of the map. For example, if the map is 16x16, we initialize the player position as `float playerX = 8.0f; float playerY = 8.0f;`, so p is 8.0.
I disagree with calling a 286 "state of the art" for 1992. Commonly available, and a good target system to develop for? Yes, but the 386's and 486's had already been out for a few years by then.
Yeah, and not to mention the gaming beast that came out in '87 - Amiga 500.
Agree. The 486 was introduced in 1989, it was actually a bit old by 1992 - the 486DX2-66 was top of the line in 1992. The Pentium was released in 1993. 386s were still around but fading away. 286s were pretty rare and while Wolf-3d may have run on one, it didn't run particularly well.
If you are interested in more information, I recommend Fabien Sanglard's book; he describes the whole magic of Wolf3D in very descriptive language. The whole engine, not only the rendering. Loved reading it, and looking forward to his next book about Doom.
Love your awesome vid about the rendering of Wolf3D too @Matt! A picture's worth a thousand words; a video is worth a thousand pictures. Looking forward to your further videos!
Thanks for recommending the book, but I can't find the chapter in the book that describes the engine of Wolf3D. Can you help?
Personally, I always thought that the fisheye effect was much more desirable than the "corrected" version that has become standard, for the simple practical reason that you can play with a higher FOV with the simple distance renderer and not have the central focal point in the middle of the screen be so small and the edges so big.
I really wish more renders kept the simpler distance render for that simple reason alone. It makes it easier to see when playing.
The fisheye effect didn't increase FOV. FOV stayed the same, the walls just scaled incorrectly.
It should be noted; Ken Silverman's Build Engine could actually do room-over-room tracing, even though it was primarily a raycasting engine. I'm not exactly sure how it worked, except that doorways and windows that you could pass through worked by teleporting the player to different places in the map.
The build engine wasn't geometrically correct though, try looking up and you will see everything looks distorted!
@@navithefairy It's the same trick used by Rise Of The Triad: treat the view like it's really tall and slide the screen up or down, then render the bit that'll actually appear. Unlike a proper perspective-correct 3D rendering, it will keep the walls dead-vertical when you look up and down, whereas a proper rendering will have them turn into diagonals.
Can you make a video which explains how sprites are added in?
Great idea, thanks. I don't know if I will have time in the near future. There's a great book on this subject if that's your thing though
@@MattGodbolt oh, what's the book name?
@@khangdao8119 game engine black book: Wolfenstein 3D by Fabien Sanglard
@@MattGodbolt thanks for recommending
Thanks for the great video, very interesting. Have you seen the Game Engine Black Book: Wolf 3D by Fabien Sanglard? it goes into a lot of this detail, it's a great read.
The 486 was released in 1989, 386 in 1985. A 286 would not have been "state of the art" in 1992, but rather a paperweight.
Patrik Lindström a fair criticism! Thanks for the correction. It was in fact the minimum spec!
A 486 might well have been out 3 years before Wolf3D, but they were super expensive and only really affordable for business use. A 286 was a realistic "home PC", but yeah, not state of the art.
Mine was made by ICL and had a whole megabyte of RAM and a 40MB hdd. It was a massive upgrade from my Atari ST - it did VGA! ;-)
I knew many people who still used the 286 in 1992. In fact some even used older PCs for accounting and such. It wasn't a paperweight unless you were a hardcore gamer. Edit: Atari ST
we got our first 486 in 1993.
Wasn't cheap as such, but considerably cheaper than the 286 we got in 1990.
That does give you some idea of when the 486 started to become common.
I mean, there's sometimes a big gap between something first being available and it being common.
there were computers with a recognisably modern GUI in the mid-70's, but it's not until the mid 90's that it was common enough to be the norm, and not something of a niche thing...
I played Wolf on 286 and my next computer was 586 in 1996.
I am amazed by the 118 down thumbs... This video is gold !
Brilliant! Thanks for putting this together. Absolutely fascinating :-)
I'm sorry, but this has become sort of a pet-peeve of mine (especially with all the videos about real-time raytracing around these days):
Raycasting is simply the process of "firing" (or casting) a ray from one position in one direction into a scene and checking what it hits. That's all there is to it.
It's still in use in almost every single game you play today, for example to check what enemy a bullet hits. In itself, it has about as much to do with rendering as a hammer has to do with building a house.
Raytracing generally describes algorithms that use raycasting to render an image.
Everything beyond that is not included in the terms on their own.
Two other things I noticed:
1. You're only using about a quarter of the screen. I imagine that's a bit annoying for people watching this on a smaller device.
2. The step done around 12:35 might have been easier to follow if you circled the terms replaced by delta X and delta Y in the equation for p.
Still, very nice and to the point explanation of the algorithm.
Thanks for the comments. You're not the first to point out my mistakes here :). If I were to redo things, I'd use more of the screenspace and make clearer the definition of what ray casting is.
hammers have a lot to do with building houses, Lol
@@barrybathwater4877 Well... yeah... they are definitely used while building houses, but using a hammer does not mean you're building a house... which was the point.
@@Clairvoyant81 using the toilet means you squeeze poopies out
There's a better way to calculate how large the column of pixels for the wall should be than calculating the normal distance.
Calculate the square of the normal distance, and use that directly.
There's a mistake in the definition of raycasting and raytracing. In raycasting you fire a ray for every pixel on the screen and stop when the ray intersects an object. In raytracing you fire a ray for every pixel on the screen and, when the ray intersects an object, you fire another ray from the point of intersection in the direction in which your ray is reflected by the object, and you repeat this until your ray intersects the light source, thus tracing a ray of light from the camera to the light source. In raycasting, the color of a pixel on the screen is determined by the color of the object at the point of intersection between it and the corresponding ray you cast, and maybe the distance. In raytracing, the color of the pixel is determined by the color of an object as it is lit. The renderer in Wolfenstein 3D does even less than a normal raycasting algorithm would, because it is only capable of rendering a certain kind of scene with a certain kind of camera: you can't look up or down, and the floor and ceiling are always the same color, so the renderer only needs to cast one ray for every column of pixels, as was said in the video, to determine how far a wall is. And, since the only other type of object in the scene is a billboard/sprite that always faces the player, raycasting is only needed to render the wall textures.
Your definitions aren't correct either.
Raycasting is tracing a single ray PER COLUMN (or row, in theory) of pixels, and determines the point at which this intersects a surface. - you then use this to determine what column of a texture should be drawn, and the distance to determine the length of the line. (and in more advanced raycasting engines, some kind of lighting parameters.)
That's raycasting.
Raytracing also doesn't do what you think it does either.
It traces a ray from each pixel and looks for an intersection with an object.
If it's a fully reflective surface, it then traces a line based on the angle of incidence and repeats the process until it hits the bounce limit set by the renderer.
It doesn't trace until it hits a lightsource, rather when it hits a surface, it traces a line to each light source, determines the angle to the surface of that specific light source at the point of intersection, and uses that to determine the lighting contribution.
What you're calling Raytracing is a process called Path Tracing.
What you're calling Raycasting is, to my knowledge a non-existent algorithm.
What Wolfenstein does is how Raycasting is described in every book I've ever read on the subject.
@@KuraIthys Tbh I would argue that there's no real difference between raycasting and raytracing. In all these ray-based algorithms, you take a ray, trace/cast/fire/whatever it in some direction until it hits something, and then do something with the result. We need better names to differentiate between all the different rendering algorithms based on rays.
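For what it's worth, the core idea everyone in this thread agrees on — fire a ray, stop at the first hit — can be sketched in a few lines. This is a naive fixed-step march, not Wolf3D's per-gridline stepping, and the map, step size, and start position are all made up for illustration:

```python
import math

MAP = [  # 1 = wall tile, 0 = empty
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_ray(px, py, angle, step=0.05, max_dist=16.0):
    """March a ray from (px, py) until it enters a wall tile.
    Returns the distance to the hit, or None if nothing is hit."""
    dx, dy = math.cos(angle), math.sin(angle)
    for i in range(int(max_dist / step)):
        dist = i * step
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == 1:
            return dist
    return None

cast_ray(2.5, 2.5, 0.0)  # straight along +x: the wall tile starts at x=4, so ~1.5
```

Do that once per screen column and scale a wall slice by the result, and you have a raycaster in the Wolf3D sense; do it per pixel with reflection bounces and lighting, and you're in raytracing/path-tracing territory.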
Today it's much easier. Respect these guys even more. Today's devs are not math geniuses like this.
I had a 386DX 40MHz then, and I clearly remember that it didn't run well at full screen even on that machine. I was quite jealous of the 486DX2 66MHz because Wolf3D ran perfectly on that, but not on the 386... on a 12MHz 286, there is no way to talk about Wolf3D running well at all, anywhere close to full screen :D
Not to mention Doom, which was awfully slow at full resolution on my 386; serious compromises were needed to play. With a 486DX2 it was much more playable...
I did a project on explaining this exact thing about a month before this was made. Research for it was extremely difficult. This would have been "very useful"
We are really spoiled as programmers these days. We can simply buy or download an engine like Unreal or Unity, tell things where to draw and be done with it. These guys had to do absolutely everything from scratch.
Spoiled, sure, but as said in every programming thread ever: There's no need to reinvent the wheel.
They didn't have to do most things from scratch. If you want to talk about doing things from scratch, we can discuss how for instance Roller Coaster Tycoon 1 was programmed in Assembly. Now that is doing it from scratch.
in ASSEMBLY
>There's no need to reinvent the wheel.
You're wrong about this. Many great game designs came from reinventing the wheel, screwing up, and discovering a new mechanic. I actually feel a little sorry for you young guys that you don't get elbow-deep in the Primal Act of Creation that becomes emergent behavior by accident.
That's why every game is more like a sequel or clone of another game, instead of being a whole new wild idea.
@@johnraptis9749 I'd say that's more than a little because mobile marketplace is such a huge area, and absolute complete cancer at the same time. There is nothing original there at all.
8:24 this is what makes it extra good. thx for sharing.
Good but unstructured code. It works, but it is insane. It applies 16-bit arithmetic to 32-bit words. It is not portable because it requires an old C compiler that defaults to unsigned comparisons. There should be a disclaimer.
@@techeadache woaaa did you just use _knowledge_ to elaborate on my initial argument!! O_O :-)
@@kentlofgren I think a lot of people will try to implement this craziness on a different compiler. Of course without knowing that (-dx) means static_cast(static_cast((1L
Definitely one of the better explanations I've heard, nice job! This is a little unrelated, but are you the guy that made the Godbolt compiler explorer?
Jakob thanks for the kind words. I am indeed the same chap :)
Not as many Godbolts in the programming world as you might've thought, funny old world eh?
This is really cool, matt
Wolfenstein 3D has always fascinated me. For the longest time I have wanted to understand how it was made, and after this video... I still don't. That is not the video's fault; I was never good at math. It's a start though, and thanks for that. I feel like there could have been some more information on screen once the explanations started. The drawings were great, but some supporting text maybe. Just a thought.
I'm pretty curious about the self-modifying code of the ray casting loop. How was it even done? And was this common practice when writing tight loops? It's seriously cool to think about, but in hindsight, I can see how much of a pain in the ass it could be for someone who doesn't know about it.
I'm not sure how common it was, but Wolfenstein did it here: github.com/id-Software/wolf3d/blob/05167784ef009d0d0daefe8d012b027f39dc8541/WOLFSRC/WL_DR_A.ASM#L235 patched the `jge` etc :)
Wow, that was way simpler to understand than I thought it would be. Guess I'll be reading the Wolfenstein source code this weekend. Amazing video as usual!
The technique is fairly common in assembly code, which forces a fair level of spaghettification anyway. The speed critical bits like 3D rendering would be where you invested in assembly coding, which is far slower to develop.
13:07 why is the height constant/p? Isn't the apparent size of an object arctan(size / distance)? Is it just an approximation that works fine?
I think the angle subtended is that, but the height itself is inversely proportional to distance alone.
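A quick numerical check of that reply: under a flat-screen (pinhole) projection, similar triangles give an on-screen size of world height times a focal constant divided by the perpendicular distance p, so the drawn column really is inversely proportional to p alone (the arctan formula gives the subtended *angle*, which is not what gets mapped to a flat screen). FOCAL and WALL_H here are arbitrary illustrative values:

```python
FOCAL = 160.0   # illustrative focal length, in pixels
WALL_H = 1.0    # wall height in world units

def column_height(p):
    # Pinhole projection onto a flat screen: size = world size * focal / distance
    return WALL_H * FOCAL / p

column_height(2.0), column_height(4.0)  # 80.0 and 40.0: double the distance, half the height
```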
Great video!, I have a question : at 5:09, we can already find the coordinate of the first horizontal intersection. But in raycasters, we need to check both horizontal and vertical intersections. The question is, in that case, how do we find the x,y coordinate of the first vertical intersections?
At 6:19 I show the first vertical intersection. I wasn't clear: there's also a dispatch on which of the four quadrants the ray is being cast on, which sets the scene for which direction to check first.
Yes, there is a class of programmers which simplify their formulas, finding interesting shortcuts in complex math algorithms. And then there is your business software colleague, the one who converts a number to a string to check if it is positive.
I actually expected the raycasting routine to be written in assembly.
This example, written in Javascript, roughly shows the concept. On the left you can see the players POV and on the right you can see the top-down view in relation.
wesmantooth.jeremyheminger.com/wip/pseudo3D/index.php
Use your arrow keys to move the player.
Interesting and easy to follow video. Any chance of a video covering the Doom renderer? :)
a detailed code review would be much appreciated.
Were floating points more expensive to compute back then? And if so, did they use integers for doing all those calculations?
Thanks for a great video btw.
Igor Mitrovic absolutely! There was no hardware floating point unit. Software routines for floating point had to be written, and software FP is at least 100x slower than hardware.
Floating point numbers have to be unpacked, added or multiplied, renormalized and then repacked. Or something like that. These are all integer operations and so can be written using just normal integer instructions. Hardware is more efficient by far at this though.
Googling "software floating point" gets you a bunch of results if you want to look at code; otherwise the Wikipedia article on floating point explains some of the algorithms. You can see how you might build routines, but also that they're considerably more complex than just integer addition.
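As a rough illustration of the integer alternative (a generic sketch, not id's actual code): 16.16 fixed point stores a value as an integer scaled by 2^16, so a "fractional" multiply is just one integer multiply plus a shift — no unpacking, normalizing, or repacking as in software floating point:

```python
SHIFT = 16
ONE = 1 << SHIFT      # represents 1.0 in 16.16 fixed point

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    # The product of two 16.16 values has 32 fractional bits; shift back down.
    return (a * b) >> SHIFT

def to_float(a):
    return a / ONE

to_float(fixed_mul(to_fixed(1.5), to_fixed(2.25)))  # 3.375, using only integer ops
```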
Fun fact: today's x86 int (long) divisions are about 10 times *slower* than float (double) divisions (possibly other operations too, but I did not check; for div it's about 10 vs. 100 cycles), making fixed-point algorithms slower than simply using floats! Blew my mind when we stumbled upon that while trying to figure out why our fixed-point port of an algorithm was way slower on x86 than the original.
If anyone is interested, there are great lectures on RUclips by Alexander Stepanov (the guy who wrote the STL in C++, and invented concepts) about many programming topics. In one of the lectures he talks about the cost of fixed and floating point operations used on different data structures. You will find that lecture here:
ruclips.net/video/3K2LmnaLLF8/видео.html
Basically today, it looks like, floating points have pretty much caught up to integers, if you look at the speed of computation.
It ran well on my IBM Clone 8088. Well, until I used the map editor to create a room full of bosses... that brought it to its knees.
Incorrect - I owned a 486 when Wolfenstein was released in 1992.
It runs on a 286. Not in the "this is still playable and fun" way, but in the "it is still technically running" way.
I am honestly not surprised, since there were ports to the SNES and other consoles of that era, and I think that the 286 could outperform the CPU in the SNES.
@@brianarmstrong234 I wonder whether an 8MHz 286 could even outperform that 3.58MHz CPU in the SNES :D Maybe it can, but who knows... I'm not sure :D
This is greatly explained in Fabien Sanglard's book Game Engine Black Book, if anyone wants to know more.
Please, make a classic doom engine analysis, thanks
Actually the state of the art in 1992 was a 486DX.
Here in the US I wasn't taught trigonometry until college. I have a feeling you learned it a lot sooner...
I'd like to know how the lookup tables work and how the precision works with fixed point numbers.
Something like this: instead of having 360 degrees in a circle, have 256 of them. Prepare (for example) a table of sin() entries: sin(0/256 of a turn), sin(1/256), sin(2/256)... etc. Now you can get sin(angle) by looking at entry "angle" in the sin table. For the entries: we know that sin() ranges between -1 and +1, so we use a signed number between -128 and 127 to mean -1 to +1: we store (sin(angle) * 128) in our table, as an integer. So now we have a 256-byte table of sin. To multiply by sin(angle), we take a number like "150" and multiply it directly by the table value, then shift the result down by 7 bits (to account for the 128-scaled sin()). That way we get an answer in the right domain. Later you can note that there's a close relationship between sin() and cos(), and reuse parts of the table. tan is a little trickier, as is arctan. But hopefully this gives a little taste!
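Here's that explanation as a runnable Python sketch (names are mine; note that sin of a quarter turn scales to exactly 128, which overflows a signed byte, so a real engine would clamp entries to 127 or use 16-bit table entries):

```python
import math

# 256 "binary degrees" per full turn; each entry is sin(angle) scaled by 128
SIN_TABLE = [round(math.sin(2 * math.pi * a / 256) * 128) for a in range(256)]

def fixed_sin_mul(value, angle):
    # value * sin(angle) via the table; >> 7 undoes the *128 scaling
    return (value * SIN_TABLE[angle & 0xFF]) >> 7

# angle 64 is a quarter turn, sin = 1.0, table entry 128
print(fixed_sin_mul(150, 64))  # 150
```

The whole trick is that the multiply and shift are cheap integer instructions, and the table fits comfortably in memory even on a 286.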
They had no floats in 1992?
That absolutely blows my mind!
Fixed point is enough for that resolution and level of detail
Floating point hardware existed but wasn't common until the Pentium. It was optional with 486s - the DX had floating point, the SX did not. Prior to that it was a separate chip (math-coprocessor) available all the way back to the 8087. See also the 287 and 387 chips - the math coprocessor counterparts to the 286 and 386. Many motherboards had space for the math coprocessor but it was left empty. Very little software took advantage of floating point hardware back then - CAD and spreadsheets come to mind, but both had software floating point algorithms to fall back on if the hardware wasn't present.
Badtanman They had floats in 1992, but integer math was faster before the Pentium. On the Pentium you could start a floating-point operation, do a bunch of integer math, and then read the float result X clock cycles later thanks to the dual U and V execution pipes. This is how Quake was able to get ~8-16 texels "for free" with its software perspective-correct texture mapper on the Pentium.
Nowadays integer math is _slower_ than floats, so everyone uses float32 or float64 for the increased precision, speed, and convenience.
What grade school did you go to that taught trigonometry?
Great video though. Hope you keep doing videos in this style!
asdf asdf haha, more a mixup in my understanding of school systems. I went to school in the UK and so it would have been "fourth year" of senior school or something like that...
I always think that the American system is pretty simple: In total there are 12 grades, 1-5 is Grade School, 6-8 is Middle School, 9-12 is high school. Anything after is college of some kind...
Alright, but what about rendering the texture on the walls?
Henry H. I was hoping I covered that when I talked about the scaled renderers. I didn't go into too much detail, I guess. There were "JIT"-compiled routines to scale all possible output sizes of a vertical slice of texture, and the textures were stored sideways (column-major) to make that more efficient. Sorry I didn't get into the details there!
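For the curious, the plain (non-JIT) equivalent of one of those column scalers might look like this in Python (a sketch of the idea, not id's actual code; Wolf3D effectively generated unrolled machine code for each output height instead of looping):

```python
def scale_column(texture_col, out_height):
    # Nearest-neighbour scale of one vertical texture slice to out_height
    # pixels: each output row picks the texel at the proportional position.
    tex_h = len(texture_col)
    return [texture_col[(y * tex_h) // out_height] for y in range(out_height)]

print(scale_column([1, 2, 3, 4], 8))  # [1, 1, 2, 2, 3, 3, 4, 4]
```

Storing textures sideways means each column like `texture_col` is contiguous in memory, so the inner loop walks sequential bytes rather than striding across rows.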
Ultima Underworld had somewhat more 3D than Wolfenstein 3D
(both games appeared roughly at the same time according to wiki)
Was the former intended for beefy computers, or was it just not as dynamic as Wolfenstein and therefore didn't need as much computational power?
Ultima Underworld had sluggish controls, a lower framerate, and a smaller rendered window. It also wasn't much more 3d: IIRC it still enforced a coarse grid structure on the map and vision range was very restricted.
@@danpowell806 I do believe that it had different floor and ceiling heights, though, and maybe even floor slopes? 'Course, the renderer as a whole was a lot less efficient than what Doom would crank out later, so it was definitely a case where it only really worked because the game was slower paced than Wolf or Doom.
This game alone is what made me realize practically overnight that it was now safe to abandon my Amiga 500, and enter the world of PC's.
Math and geometry are the secret theory behind hardware, CPUs and memory
Okay, I see how this can work for steps and angled walls (like standing in the middle of a hexagonal room), but how would you add distinct textures for different ceilings and floors? Wouldn't Wolfenstein 3D always be limited to the "sky" and "ground" being one colour or one unmoving texture?
Yes and it was.
Except in versions of Wolf4SDL.
You could add in a textured sky or ground using this method, with a little bit of cheating. Let's say the sky for the whole level is a large image. You then have to figure out where on that image the player is, which you could use their x,y position in the map for. Then you need to rotate the image to match the player's orientation, and then render it with scaling before drawing the walls. A pretty basic demo effect. The problem is that doing it this way, you're writing the pixels where the walls are twice, which will slow things down. You could use a z-buffer, drawing the walls first and then only doing the ceiling where you haven't drawn anything yet, but it's still going to be relatively slow particularly for the machines Wolf3D were written for.
@@deanolium But isn't rotating an image extremely computationally expensive?
What is 3D if it isn't scaling and rotating many many different images all assembled together into polygonal models?
Wouldn't a huge level therefore need a single absurdly high resolution image as each part of it is viewed from fairly close?
@@Treblaine Nah, rotating an image on the z-axis is relatively easy. Essentially you just need to figure out the x,y displacements for each pixel when you move across one x or y position in screen space, which can be done with a little trig, ideally using look up tables, and then repeated. Of course, you need to create the rotation across the x axis (after it's been rotated) so that it looks like a ceiling or floor, but you can cheat to mimic that by doing some scaling on each row of the screen.
As for the image needing to be large, in reality you just tile the image, which computationally is very, very cheap as you can just modulo the x and y points.
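A rough Python sketch of that per-pixel floor/ceiling lookup with modulo tiling (all names, the FOV, and the camera-height assumption are mine, just to show the shape of the math):

```python
import math

TEX_SIZE = 64  # assumed texture tile size, in texels

def floor_texel(px, py, angle, screen_x, screen_y, screen_w, screen_h,
                fov=math.pi / 3):
    """Map one screen pixel below the horizon back to a tiled floor texel."""
    # Angle of the ray through this screen column
    ray = angle + (screen_x / screen_w - 0.5) * fov
    row = screen_y - screen_h // 2        # rows below the horizon line
    if row <= 0:
        return None                       # at or above the horizon
    # Perspective: distance to the floor point falls off as 1/row
    # (camera height assumed to project to screen_h/2 at distance 1)
    dist = (screen_h / 2) / row
    dist /= math.cos(ray - angle)         # undo the fisheye distortion
    wx = px + math.cos(ray) * dist        # world-space hit point
    wy = py + math.sin(ray) * dist
    # Tiling is just a modulo, so one small image covers any size of level
    return int(wx * TEX_SIZE) % TEX_SIZE, int(wy * TEX_SIZE) % TEX_SIZE
```

No image is ever literally rotated: each pixel is independently projected back into the world, and the modulo makes the texture repeat for free, which is exactly why a huge level doesn't need a huge image.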
Another high quality video with excellent explanation, as usual with your content. Thank you!
One small note though: the graphics were perhaps a bit too small, the lines thin and faint, and the annotations a bit tiny. On mobile, I could see them fairly okay, but someone with a smaller screen and/or less good vision could have a hard time.
And especially thanks for adding captions (to whoever created them, if they were submitted and not made by you), on top of your already clear and understandable speech!
Thank you for the feedback! I'm still learning how to best record these, and getting the screen zoomed up appropriately is something I just worked out how to do! Sorry the diagrams suffered from being too small here!
Super concise and easy to follow!
Sarcasm.
I'm pretty new to programming,
but wouldn't it be much easier to cast every ray from a different point on a line perpendicular to the player (facing the player at a 90° angle: imagine the letter "T", where the player is the leg "|" and the rays are cast from points along the "-" part)? Could that potentially lead to an orthographic perspective, and if so, would it even matter? Or, alternatively, cast the rays from the same point and then shorten the ones near the middle (lines closer to the middle get shortened by a larger value) to compensate for the fisheye effect.
Please correct me if I'm missing something
Not sure; what equation would you use to shorten the lines in the middle?
@@MattGodbolt First of all, I would have the player face the wall at a 90° angle, so all the rays should ideally be the same length, which they never will be because of the fisheye effect. I would measure the lengths of all the rays, then try to find out whether the progression from the shortest to the longest ray can be represented by some mathematical function (like a parabola or something), and then apply the inverse of that function, which would potentially cancel the fisheye effect.
But it's late at night in this part of the world as I'm writing this, so maybe I'm talking nonsense.
@@daifee9174 if you're casting every ray from a different point on the perpendicular...what angle are you casting? It doesn't seem to matter if you cast them from the player's position, or a point on the plane. Then the "reversed version of the mathematical function" is distance dependent, and it needs to be calculated. There are engines out there that do this, but the "reverse transform" is more expensive than avoiding it in the first place :)
@@MattGodbolt If I cast rays from different points on the line, they would have to be perpendicular to that line. I'm not sure my poor English can describe accurately what I mean, so I'll show it with letters again. It would be like "•E": the player is the "•", the perpendicular line is the vertical part of the letter E, and the rays cast from that line are like the three horizontal strokes of the E.
So the angle of every cast ray would be 90°
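For what it's worth, the standard fisheye fix in Wolf-style raycasters amounts to the same geometry as the "camera plane" idea being described: project each ray's length onto the view direction by multiplying by the cosine of its angle relative to where the player is facing. A minimal sketch (function name is mine):

```python
import math

def corrected_distance(raw_dist, ray_angle, player_angle):
    # Classic fisheye correction: use the perpendicular distance to the
    # camera plane, not the raw Euclidean ray length, for wall heights.
    return raw_dist * math.cos(ray_angle - player_angle)

# A ray 60 degrees off-centre has its distance halved (cos 60° = 0.5)
print(corrected_distance(10.0, math.pi / 3, 0.0))  # 5.0 (approximately)
```

This is cheap (one table lookup and one multiply per column in a fixed-point engine), which is why engines prefer it over measuring the distortion empirically and inverting it.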