The more I watch this channel, the more I realize that it's a true treasure. Keep it up, Dan! We love you!
One modification you can make is the addition of mirrors. Select a point, then press space. Then, it will animate the ray moving from point to point with the mirror's effect too.
im trying to do this. I want to be able to see the reflection and refraction of the light beams and how they form images when converging to a point
You can implement weak perspective projection to correct that fish eye effect.
h = 1 / z
Yes, I was gonna say that too! Also, probably the angles shouldn't be equally spaced for the rays. Should be angles to equally spaced points on a line segment. Maybe that cosine thing does both these things?
@@WildAnimalChannel Yes, the right way to achieve ray casting for perspective projection is to consider a plane in front of the camera representing the screen, where rays are cast from the camera to points equally spaced on the plane. In this case, a segment will indeed be used.
However, as far as I know, trigonometric stuff isn't usually involved in perspective projection.
When rendering stuff, it is usually involved before, at the stage of the world to view transformation (setting geometry relatively to the camera).
I think he tried it on the stream and it didn't work?
@@HyperMario64 Yeah, you want to avoid trigonometry functions as they are costly when doing the calculation millions of times.
@@WildAnimalChannel It depends.
A camera view matrix can be calculated once, then you can simply multiply it with vertex positions and you get a point in camera space. No big deal.
In most cases you can still compute a lookup table with your coefficients at program startup for esoteric projections. As screen size is static, you can optimize quite easily.
I wrote this in Processing Java and then extended it to a very old looking doom-like game. Gotta say that this has been one hella good, interesting and entertaining journey, following your coding challenges and streams. I've learnt a ton and I've loved every part of it. *sends digital hugs*
I implemented it for myself, thank you for the inspiration! To correct perspective I derived a formula for h: h = wallHeight * sceneH / (2 * tan(radians(fov / 2)) * scene[i]). Use this instead of your map function.
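In the sketch from the video that would look roughly like this (a rough sketch, not the exact challenge code; wallHeight is an assumed constant, and sceneH, sceneW, fov and scene[i] are the names used in the challenge):
const wallHeight = 100; // assumed world-space height of a wall
for (let i = 0; i < scene.length; i++) {
  // perspective-correct column height: shrinks with 1 / distance
  const h = wallHeight * sceneH / (2 * tan(radians(fov / 2)) * scene[i]);
  const w = sceneW / scene.length;
  rect(sceneW + i * w, (sceneH - h) / 2, w + 1, h);
}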
Started watching the channel recently as I've taken on a lot of hobby projects recently centered on programming. Love the vids, the energy and excitement you bring to each topic makes everything fun and accessible. But moreover, I wanted to call out your thoroughness with adding links to inspiration, source material, and references in the description. It drives me crazy when video creators talk about some source material, be it other videos, news articles, etc, and fail to link to the source. Thanks for your thoroughness!
Just saw that the description says that it's auto-generated. Awesome! Seems like a really solid way to be both economical with time and all-encompassing in writing them up. I'll have to check out the code base for the description generator.
It's manually created for my website and then that info is autogenerated into a YT description format. More here: thecodingtrain.com/
Long time listener... first time caller! :D Just became a patron at 97% of your goal, hopefully that pushed you over!! (Went for the engineer tier, of course!)
THANK YOU for the support!
awesome video!! at 10:30 I really shouted a wow it was just amazing, this is the best channel on youtube! thank you Dan!!
I needed exactly this for one of my student's projects. Figured the idea, was struggling with the implementation. Thanks a ton for posting this, top shelf delivery as always.
Yay! If you check out the community contributions, you'll see some viewers have improved/corrected my implementation. thecodingtrain.com/CodingChallenges/146-rendering-ray-casting.html
@@TheCodingTrain Thank you!
courses.pikuma.com/courses/raycasting
My 9 year old son was doing this kinda stuff using the Scratch IDE. He was working off of someone else's project but he was able to grasp the concepts and build maps for his game.
It would be awesome if Dan would make a tutorial series for Scratch. I know my kids would love it. Hell, I'd love it!
Wolfenstein 3D was magic, and now I know how it worked! So simple in retrospect
Thank you for your great work! Here is just one hint: not only the wall height needs to be corrected, but also the horizontal ray distribution is non-linear. The angular distribution of your rays is not equal to the horizontal column distribution on screen. The ray angle values need to be projected onto the virtual screen to get the correct x value for each column.
Ah, great point!
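For example (a rough sketch, assuming the Particle/Ray classes and the fov and sceneW names from the challenge; the exact member names are assumptions): instead of stepping the angle by a fixed amount, space the rays over a virtual screen plane and take atan of each offset.
// one ray per screen column, spaced evenly on a virtual screen plane at distance 1
const planeHalf = tan(radians(fov / 2));
this.rays = [];
for (let col = 0; col < sceneW; col++) {
  const offset = ((col + 0.5) / sceneW) * 2 - 1; // -1 .. 1 across the plane
  this.rays.push(new Ray(this.pos, this.heading + atan(offset * planeHalf)));
}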
Really hope you take this further. I know you said the intention wasn't to make this more realistic, but it would be interesting to see how far you can take this e.g. textured walls, multiple lights.
If you check out thecodingtrain.com/CodingChallenges/146-rendering-ray-casting.html you'll find a bunch of community contributions that take this a lot further! I would love to revisit, yes.
This coding challenge + Maze Generation challenge #10 + point-line collision detection = endless fun.
Don’t forget about adding node.js and socket.io for multiplayer
When drawing the wall rectangles, you are mapping linearly from the distance to the wall to the height of the rectangle. Instead, you should map linearly from the INVERSE of the distance to the wall to the height of the rectangle.
Yeah, the fish-eye would look more 'natural' that way. What he ended up with is a... hollow-eye? :)
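In the p5 sketch that's roughly a one-line change (assuming the scene[i] distances and the sceneH/sceneW names from the video; the final true just clamps the result to the target range):
// map the INVERSE of the distance, so the height falls off as 1 / distance
const h = map(1 / scene[i], 1 / sceneW, 1, 0, sceneH, true);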
How would I be able to program this? I have been making a ray marcher but have come into a similar issue. I'm an amateur programmer and can't figure this out!
this is so cool. Ive been tryna find a way to easily render 3d curves and this gave me so many ideas
This video gives me so much appreciation for the developers of well-optimized physics engines!!!
I took this code and increased the resolution of the rays and the highest resolution I could get with no lag was about 5.33 rays/degree. Even then, the scene still has some pretty bad aliasing on the walls.
It would be interesting to revisit this video with a focus on optimization.
The method used here is bad from the get-go if you're looking to optimize. Casting rays in every direction is not necessary, as most of them will hit the same plane as the neighbouring ray. A better option would be to cast rays towards the ends of all planes, and work it out that way
I know I'm very late to this discussion, but don't forget you're doing this on a CPU, which is just not very adequate. By porting this to the GPU you could make it orders of magnitude faster (for example, you would calculate a bunch of rays simultaneously, not sequentially)
Try to make Doom with this😂😂
@@nikname2773 Doom doesn't use raycasting, Wolfenstein does
For the hundredth time, Doom uses Binary Space Partitioning (BSP), not raycasting
@@doommaker4000 BSP is just an occlusion culling method, not rendering
@@sweetberries4611 Yeah, but that's all you need to render things. Then the walls are rasterized and it's done. Raycasting basically is just a lazy method to find what the closest wall is!
@@sweetberries4611 I thought the first doom used raycasting, apparently I was wrong.
Thank you Dan, brilliant and beautiful challenge!
THANKS FOR THIIS DAN!!!!!!!!! I'M WAITING THIS FOR SOOO LOOOOONG! I LOVE U!
I was scrolling through Scratch (As I do occasionally) and I found a 3d maze that uses raycasting! In scratch! It's crazy what people can do with that.
View frustum is the term for the area in front of the viewer that is visible. Cone of view also works, but it is less specific: it doesn't include the near and far distances.
Maybe next you could try to code a Tetris? That should be great I think! Btw you can find all of the rules in the Tetris Guidelines
I actually tried to do one but got stuck on making the tetromino stay on the field after it lands. I cant figure it out so i really hope he does one
@@paulkristopherespina7368 I managed to make a fairly complete one in Python, but I think it can be improved. It's also a good way to see how other people would write the code differently.
For the landing, I used one array to store the tetrominoes and another for the playfield. Then I checked below the falling tetromino to see if there's something there. But I don't know if it's a nice way...
@@jeeaile5835 thanks for that, i might try it when i come back to it
Submit it as an issue to the CodingTrain/RainbowCode github repo.
Ritoban Roy Chowdhury well, why not, but isn't it a problem that it's in Python? And it's in French too...
Whoa!! This is awesome!! 👍
this was a lot of fun and relatively easy to follow! made my own test, and implemented colored walls, textured walls, transparency and mirrors! although, looking through transparent walls through mirrors really drops the fps
Just did this in C. But instead of using actual graphics, it runs in a terminal window xD.
Thank you for showing me this. You are an incredibly good teacher and a fun person to watch. I've been watching a lot of your videos, and learning a lot of new cool things.
Sorry for my bad English. I'm not a native English speaker.
This YouTuber made a similar program in C++, but in the Windows console. You might also like his other stuff.
ruclips.net/video/xW8skO7MFYw/видео.html
It's nice to see someone making errors, makes me feel less stupid.
For anyone interested, Gustavo since then made an AWESOME FREE course detailing the whole real implementation of raycasting as it was in Wolfenstein 3D - also using JavaScript and p5.
Check it out here: courses.pikuma.com/courses/raycasting And if anyone wants to know how to do ALL of that WITHOUT ever using trigonometry, or even a square root, check out my
github repo about it here: github.com/ArnonMarcus/Rational-Ray-Casting which is interactively hosted here: arnonmarcus.github.io/Rational-Ray-Casting/raycasting-js/index.html
Thanks for creating that videos
I love it !!!!
As for moving the particle around instead of setting it to the mouse location, you can set the particle's default location to the center of the left pane, by dividing the width by 4 in its constructor. Then in your sketch file, you can modify the "keyPress()" function to something like this and make sure you call it in your draw() function before you call particle.show(); ...
function keyPress() {
let pos = particle.pos;
// translation
if ( keyIsPressed == true ) {
if (key == 'a' || keyIsDown(LEFT_ARROW)) { // left
pos.x -= 0.5;
}
if (key == 'd' || keyIsDown(RIGHT_ARROW)) { // right
pos.x += 0.5;
}
if (key == 'w' || keyIsDown(UP_ARROW)) { // up
//pos.y -= 0.5;
particle.move(1);
}
if (key == 's' || keyIsDown(DOWN_ARROW)) { // down
//pos.y += 0.5;
particle.move(-1);
}
particle.update(pos.x, pos.y);
// rotation
if (key == 'e') {
particle.rotate(0.01);
}
if (key == 'q') {
particle.rotate(-0.01);
}
}
}
And this will give you a nice movement and rotation around the scene area. Where the keys 'a', 'd', 'w', and 's' or left, right, up and down arrow keys will move the particle left, right, up, down respectively and the 'e' and 'q' keys will rotate its view left and right respectively.
Now, I tried to add in the functionality of rotating the particle with the mouse instead of updating its position and I wrote this function:
function mouseMove() {
let dx = mouseX - pmouseX;
let dy = mouseY - pmouseY;
let dir = p5.Vector.fromAngle(particle.heading);
dir.normalize();
let newDir = createVector(dx,dy);
newDir.normalize();
let theta = acos( dir.dot(newDir) );
return theta; // note: acos() returns radians here (p5's default angleMode), not degrees
}
And I called this in the draw function before calling particle.show(); such as this:
particle.rotate(radians(mouseMove()));
however, there is one caveat with this, it will cause the particle to continuously rotate as if it was a radar beacon.
I think for the above to work correctly, you would have to set the mouse's current position to that of the particle, and only update the particle's angle or heading when the mouse actually moves... Very similar to constructing a Camera object within a 3D scene for a first-person view. There's a slight bug in this code, but I haven't quite narrowed it down yet...
I'm trying to poll the mouse's current (x, y) position and find the delta between that and its previous (x, y). I'm then creating 2 vectors and normalizing them to simplify the dot-product equation between them, and I find the angle between these two vectors. I'm getting the first vector from the particle's current heading and the new vector from the change in the mouse's position (dx, dy). However, as I've said, this is causing it to continuously rotate, and moving the mouse doesn't seem to affect its rotation nor change its direction...
I'll have to go back and look at some of my older projects with DirectX and OpenGL and look at my Camera classes and the update and rendering functions within the Scene class to see how I'm generating them. It's a bit different working with JS as opposed to C++. In C++ everything is strongly typed, and using the GLM library makes a lot of the vector and matrix math much simpler.
3:17 Why do you declare a constant variable and then your code later updates the constant... it is bugging my code out, I cannot re-declare the constant value. Am I doing something wrong?
The reason you are getting the fish-eye effect is that the distance calculations give you the radial distance (distance to the hit point), rather than the perpendicular distance (the distance to the camera plane, i.e. the line through the player perpendicular to the player's viewing direction). The fix is relatively simple: all you need to do is multiply the radial distance by the cosine of the difference between the ray angle and the player angle.
TLDR:
Straight Distance = Distance To Point * Cos(Ray Angle - Player Angle)
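In the challenge's look() loop that comes down to something like this (a sketch; closest, ray.dir and this.heading are the names I'd expect from the video's Particle class, so treat them as assumptions):
if (closest) {
  let d = p5.Vector.dist(this.pos, closest);   // radial distance to the hit point
  d *= cos(ray.dir.heading() - this.heading);  // project onto the view direction, removes the fisheye
  scene[i] = d;
}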
Very nice episode, love your content!
You are amazing
Since you're doing tutorials on rendering with raycasting, bring out a future video about Mode 7, the technique used in old games like Super Mario Kart.
You should definitely make a game with this in the next coding challange
Thank you so much for your videos!!!!
There is an easier way to get the raycast to not give that fisheye effect.
The effect is due to perspective, as you explained, so you're using a calculation to flatten that perspective. But you could also shift the perspective in the very same way the human eye does.
Place the rays along the backside of the ellipse, rather than the front side, so it has a concave arrangement rather than the current convex one. Ensure the rays meet at the tip of the ellipse, as if they were focused through that point. This can be achieved by either calculating the angle of each ray, or by simply making the ellipse slightly flattened along the back side.
The rays will now hit the target with the same Euclidean distance as the center ray, as each of them has an equal offset forward from the center ray. BUT, they will be displayed in reverse, so you need to flip the order you read them. The leftmost ray should display the rightmost line and vice versa.
This is exactly how the human eye sees. Light goes in through the very tip of the front of the eye, focusing the light rays into a lens that scatters the beams again onto the concave membrane at the back of the eye, giving a reverse image of what is in front of it. The brain then flips this image on the horizontal axis and the conscious you gets presented with the correct image.
The mathematical solution is still viable and has one major advantage: you can get a field of view larger than 180 degrees. With the physical solution, due to how the rays need to focus at the tip of the "eye", you can never get a FOV wider than 180 degrees. As soon as you go beyond that, you're just focusing the ray along the same path as the ray on the exact opposite side, just in reverse.
The way to get around this would be to have more than one "eye", but then you would have to find a solution for how to present this to the screen.
About 35 years ago I tried a similar thing using graphics from Wolfenstein. I also had the fish-eye effect and I was unable to work out what the issue was at the time. I can finally put that to rest!
Wow!
@@TheCodingTrain I've had to rewrite the software as I lost the HDD with the original. Did it in C in a DOS VM, so it's as close to the original as possible. Now to add texture
Well, the original was in QBasic and horribly slow.
320 x 200 256 colour mode 13h is the same
Now to combine this with the maze generator and A* pathfinding to make a 3d maze running game :)
Here's an idea, you could do bouncy collisions by applying a force in the opposite direction of the rays proportional to how compressed they are.
That's a really interesting idea! The allowed speed of movement could be balanced with the maximum force to entirely prevent collisions. At least, with a circular character.
This means that you need to cast rays in each direction you can move. Otherwise you can walk through walls if you turn your back to them.
@@ironnoriboi - Collision detection wouldn't require as much distance as looking at the environment. So the rays would only have to be cast out far enough to be effective, and the density of rays could also be lower.
@@SirRebrl yep
@@ironnoriboi Incidentally, I'm glad you happened to reply to this thread. I've been watching a lot of coding videos lately, started working on a boid-based project, and could not for the life of me recall where this thread was with an idea I liked for bouncy collisions.
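A rough sketch of the bouncy-collision idea from the top of this thread (all names here are hypothetical: a particle with pos/vel, and rays that each know their direction and the distance to the wall they hit):
const comfort = 20;    // how close a wall may get before it pushes back (assumed)
const stiffness = 0.3; // spring-like constant (assumed)
for (const ray of particle.rays) {
  if (ray.hitDist < comfort) {
    const compression = comfort - ray.hitDist;
    // push opposite to the ray direction, proportional to how compressed the ray is
    particle.vel.add(ray.dir.copy().mult(-stiffness * compression));
  }
}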
People actually have blind spots in their field of vision where the optic nerve travels through the retina, but the brain compensates and fills in the space. You can find this by holding one finger from each hand about 7 inches apart at about arm's length and centering your right hand over your left eye; the top of your left hand will disappear. Your code makes it look like a flashlight.
Amazing. So interesting! Thank you.
I love your videos. Have learned so much!
Do you intentionally make mistakes to help us relate to your programming or are you doing the programming on the fly and just making mistakes like we all do?
Another great video!
to fix the fish eye just use cos of the difference of the look angle and the offset
That's amazing! 😀
Add some monsters, some doors and keys and here we go
Why did I never try this? Great idea!
Now I did! Thx for the inspiration! ruclips.net/video/7aCqhJK9i2U/видео.html
combine with marching cubes and you've got yourself a cave exploration game
hi dan, i noticed a bug at 27:39: look at the top-view camera (left side of canvas) - when you turn the angle to 360, the rays can go through the (randomly generated) walls for some reason
yeah...
weird
It also disappears when he clicks too
oh weird! I didn't notice that. . .hmmm, will have to debug!
I've been wanting to make a maze game and use ray casting for visible surface determination. However, I had a different idea in mind as well. What about using this to create something I'd call "sound casters"? Basically, assign certain objects (enemies and other important sound generating game objects) within a fixed radius of the player, whether visible or not, as sound casters. Create a vector between the player and the sound caster (through walls and obstacles) to determine a direction so as not to cast rays unnecessarily, and then cast out a small number of rays (10?) from the sound caster to the player, reflecting the ray and decreasing the sound's intensity by a calculated percentage with each collision with the walls. If the sound intensity drops below a threshold, or if the number of reflected rays exceeds a threshold, delete the ray. Any rays that make it to the general, unobstructed, player location would have their associated sound intensities combined to generate the sound volume for that sound caster as heard by the player. With this system, coupled with 3D sound positioning, sounds would be more or less physically accurate and effects like echos could be easily created. The ray casting could be accomplished through shaders, taking full advantage of hardware acceleration.
Love this idea!
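The bounce step itself is cheap in p5 vector terms; a sketch of just the reflection (dir is the ray's direction and n the wall's unit normal, both p5.Vector; the attenuation factor is an assumption to tune):
// reflect an incoming direction off a wall: r = d - 2 (d . n) n
function reflect(dir, n) {
  return p5.Vector.sub(dir, p5.Vector.mult(n, 2 * dir.dot(n)));
}
// on each bounce: dir = reflect(dir, n); intensity *= 0.6; drop the ray once intensity < 0.05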
Didn't know processing can do this much 🤩
The ghost of TotalBiscuit is smiling with your inclusion of an FOV slider.
My man!
I made this in pure js. It was a lot of fun.
Awesome, I think you should map it so that the close walls are a bit darker, otherwise it's hard to get around just watching the rendering ^^
So: add basic collision detection, and a retro Doom-like rendering engine in JavaScript is done
courses.pikuma.com/courses/raycasting
love it!!!
Very good!!
this is so insaneeeeeeeeeeeeeeeeee
What if you would add reflections?? I think it would look quite cool!
Could you have something build the scene based on just what the camera sees and its position and rotation?
So you reinvented the Depth Buffer? :^)
yeaaaa! I love things like this
hey dan
could you make a video on Q learning?
Now that you're getting into TensorFlow and NEAT I think it would be interesting. It's not as obnoxious to explain and implement as backpropagation, and you can make pretty interesting visuals for it, like a car learning to go around a race track or something.
thank you!
yeah, he already said he wants to do this project
@@sanchitverma2892 really?? thats great!
Coloring and texturing the walls would have no effect until you consider the returning ray.
So far, you only consider the distance to the wall, not the reflection of the wall.
Your particle is not a camera, it is a distance sensor.
So cool, thanks professor.
I wonder what the code would be like with Swift.
The inverse square law for brightness means that the brightness should be proportional to the distance to the power of negative 2
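So instead of the linear map, something like this (a sketch; k is an assumed constant you'd tune so nearby walls don't blow out to white, and scene[i] is the per-column distance from the challenge):
const k = 50000; // assumed brightness constant
const b = constrain(k / (scene[i] * scene[i]), 0, 255); // inverse-square falloff
fill(b);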
I remember doing this in QBasic 20 years ago
him using 45° instead of 75° for the FOV is killing my eyes
@Reyes25111 it's just the fact that 75 is the default in most games; 45 is way too low, but again that depends on how the game calculates the FOV.
Just really get used to it. Figure out something you want to make and use the internet to help you make it. Dont feel bad about using the internet to help you make a program because thats what everyone does even the super experienced people.
The output here is drawn in a square, which requires a lower FOV than games which use your entire (much wider) screen.
The bigger the FOV compared to the width of your display, the more fish-eye-ness.
This is so interesting.
This is really cool :) There is a weird glitch at the end when you switch between fisheye view and 'normal' view: on the left side, some rays go through some walls. Have you noticed that ?
"oooOOps"
I think the issue is that he is using the cosine-corrected distance for the closest test, while the distance should only be adjusted for the display. This is especially noticeable at 360, I think. Typically for angles of 90 or 270 degrees, the cosine will be 0 and any first boundary in the array will be the closest.
Yes! I didn't notice this bug while recording!
Quite late to the party for this, but would it be possible for you to add player collision with the walls?
I've been following along with the tutorial and I've not worked out how.
What would a 360 degree vision cone look like?
there is something missing in this raycasting: light bouncing off walls
At 25:36 that is basically scalar projection right?
this reminds me of a RUclipsr named Bisqwit
Harold Geronimo Bisqwit doesn’t make mistakes though 😅
So I tried it, and... the cosine thing... I've implemented that, but... it doesn't make a huge impact; the fish eye is a teeny tiny bit better, but still there... I'm not sure what I did wrong... I made mine completely from the ground up, and not with this tutorial, but I think the method is fairly similar.
This reminds me of my Siemens phone back in 2000
If the whole "map" was covered in black, how would you subtract your view to remove the black? Just like a lamp in the dark.
do you have an Optimization Code video?
I googled the map function mentioned in this video but didn't find the function that is used here. Can someone help me with this map function?
This one? ruclips.net/video/nicMAoW6u1g/видео.html
@@TheCodingTrain Thank you
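For reference, p5's map(value, start1, stop1, start2, stop2) is just a linear re-mapping; in plain JavaScript it is roughly:
// what p5.js map() does (ignoring the optional clamping argument)
function map(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * (value - start1) / (stop1 - start1);
}
// e.g. map(5, 0, 10, 0, 100) gives 50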
At the end: the fisheye effect is still on the screen :)
You just wrote Wolfenstein3d in 30 minutes :)
I have a question. Can i combine a compound shape (say, a couple intersecting rectangles) into a single entity so i could further move/rotate it as a single object?
This was amazing!
nice vid.
I have a feeling that your projection is still wrong. If you make the brightness of the scene fixed to 255 and you walk to a corner and look across the room you can see the wall heights invert themselves. i.imgur.com/HJgfkah.png.
Is it possible to texture the walls without WebGL?
why, oh why i ever looked up that "this dot song" xD xD DIS DAAAT, DIS DAAT... aAA A A A A xD
Hello, great job as always keeping my attention bound to your videos.... I was trying to add a texture for the sky, and I thought to map it to cylinder coordinates, but I can't make it wrap correctly. Has someone tried it? Or maybe it could be in the next episode.... :D
I figured it out, here is the solution: my code is in Java, and it operates on a buffer of pixels.
for(int x=0; x
Could you do reflections as well?
so I got a question: I have a game in Processing I'm working on, and it's only 11.2 KB, however it's taking up 28% of my CPU and just over 1200 MB of my RAM... any suggestions?
it's just a simple RPG game, and all you can do right now is walk in and out of a building on the first map. it does have collision detection for static objects so you can't move through them either. I'm also using a CSV file so I can edit the maps easily. I only have one number to represent the tile in the CSV file, so I had to use a double for loop to get its x,y coords into an array that the program can read from. when it loads either into the building or out of the building, it clears the array and reloads the CSV file it needs for the current map
Any plan on implementing delaunay triangulation?
It's definitely on the list to get to sometime!
Why did we increase w to w+1 at 16:58 to reduce the gap? Can someone help?
He disabled the drawing of the border which is drawn around the rectangles, but these rectangles still counted the invisible border as part of their total width.
Because of this invisible border, the width got equal to infinity and thus showed black between walls. Am I right?
@@bhanusri3732 how did you get to anything infinity?
@@ironnoriboi In the code he explained, he initialized record to Infinity and later assigned the record value to the scene[i] array. If there is no shorter distance than record, record stays Infinity, and in the map function, if it is greater than sceneW squared, the color will be black.
How can I change the geometry of the obstacles?
You tha best
I did something like this a while ago when I was bored. I did it in Khan Academy code though, so it's not that well optimized or anything, but I can give a link if you want.
I've read about raycasting and stuff many times (I have a book written in 1993 about the subject and what it termed '3d maze games'), but it was never very clear to me what the benefits were of Wolfenstein's grid based approach.
Oh sure, I know old computers were not good at multiply, divide, or any of the trig functions, and were definitely atrocious at calculating a square root...
Similarly, you'd use fixed point integer maths because they either couldn't calculate floating point numbers whatsoever, or could only do so using very slow software approximations.
Even so, I'm not sure what was gained in implementing it as a grid over a 2d polygonal mesh and calculating actual line intersections that speeds it up enough to warrant the limitations imposed...
Yes, you can guarantee that you're only testing each ray against a horizontal or vertical grid line (multiple times though, since you step through each grid square in turn), but you'd still have to calculate the actual intersection point, and the ray you're testing is still at arbitrary angles...
Hmm. I suppose if the game world is axis aligned it does imply you can test whether a ray crosses any given grid line using a simple comparison of the coordinates...
But the intersection itself would still have to be determined, which on the face of it doesn't sound any easier than a more generalised intersection calculation...
Of course, there were other performance implications back then.
Simply drawing the pixels onscreen was out of the league of many systems of the era...
Even if you assume a 320x200 screen, you have to update 64000 pixels, which on say a 1.79 MHz CPU already cuts your maximum framerate down to under 28 fps IF the CPU you're using can write a memory value in a single cycle, which almost nothing from that era could.
And even just clearing the screen is an operation on that level...
Wolfenstein 3D runs on a 12 MHz 286...
which means to keep up even 30 fps, with JUST the video memory fill (and assuming you can get away without needing to clear the memory, since you're overwriting all of it), means you have just over 6 CPU cycles per pixel drawn.
And with that budget you have to perform all the 3d calculations, any memory lookups, and the actual pixel drawing routine (which is guaranteed to eat up at least one of your six cycles even in an absolute best case scenario...)
It's just... Crazy that this was at all possible...
"it was never very clear to me what the benefits were of Wolfenstein's grid based approach"
If they had chosen the 2D polygonal-mesh approach, they would have had to implement a level editor that supports it, and also some kind of acceleration structure for testing rays, which would be very expensive and complicated. They did this for Doom, but Doom doesn't even use any kind of raycasting.
"Hmm. I suppose if the game world is axis aligned it does imply you can test whether a ray crosses any given grid line using a simple comparison of the coordinates... But the intersection itself would still have to be determined, which on the face of it doesn't sound any easier than a more generalised intersection calculation..."
It actually is much cheaper: you don't even need to compute the intersection point explicitly, you get it for free from stepping through the cells.
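A rough JS sketch of that cell-stepping (the classic DDA from Lode's raycasting tutorial, assuming a grid[][] of 0 = empty / 1 = wall with cell size 1; per cell it is only compares and adds):
function castGridRay(px, py, angle, grid) {
  const dirX = Math.cos(angle), dirY = Math.sin(angle);
  let mapX = Math.floor(px), mapY = Math.floor(py);
  const deltaX = Math.abs(1 / dirX), deltaY = Math.abs(1 / dirY); // ray length per 1-cell step
  const stepX = dirX < 0 ? -1 : 1, stepY = dirY < 0 ? -1 : 1;
  let sideX = (dirX < 0 ? px - mapX : mapX + 1 - px) * deltaX; // ray length to first vertical grid line
  let sideY = (dirY < 0 ? py - mapY : mapY + 1 - py) * deltaY; // ray length to first horizontal grid line
  let dist = 0;
  while (grid[mapY] && grid[mapY][mapX] === 0) {
    // step to whichever grid line the ray crosses first
    if (sideX < sideY) { dist = sideX; sideX += deltaX; mapX += stepX; }
    else               { dist = sideY; sideY += deltaY; mapY += stepY; }
  }
  return dist; // distance along the ray to the wall cell it stepped into, no sqrt needed
}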
Watch Godbolt's explanation ruclips.net/video/eOCQfxRQ2pY/видео.html