Lewis made a bit of a hash of the pronunciation of Fresnel, so we fixed it in post, gold star if you noticed! -Sean
Remember, EVERYTHING has Fresnel.
Come on, I'm dying to know how he actualy said it.
FreS-nell, right?
please be more careful in the future 🙏
It was almost impossible to detect such subtle edits but I just about sussed it out. ;)
This is the way. More verbal flubs and errors in non-fiction YT videos should be fixed using drop-ins. In particular it’s better for anyone who’s not looking at the screen at that moment, and presumably for the visually impaired. And it’s also just awkward and distracting to catch a caption with a textual correction that flashes in out of nowhere. They don’t have to be smooth or unnoticeable either: it’s not as if hard-caption text is any more smooth and unnoticeable than even a roughly done audio drop-in.
I did a bunch of ray tracing work in the 90s at Uni.
My final project took ~12 hours to render on my 25MHz Macintosh (after I bought the optional FPU).
I've just now found the source and downloaded PovRay (the same thing I used back then) and on my whatever-it-is-work-laptop it took
Amazing!
If you rasterise it, it takes 0.000001 seconds
Writing a raytracer like this one from scratch was one of our university computer science projects. A lot of fun it was.
As the lead rendering engineer for a VFX studio, a couple of points:

The reason we use it is not "because it can handle more complex lighting effects"; it's because, compared to rasterizing triangles, it suffers less from overdraw and maintains O(log2(n)) time complexity for traversal (assuming a binary BVH), whereas rasterization is more like O(n) in the number of objects in the scene. Raytracing, for extremely complex scenes, is often FASTER than rasterization, despite what some people say. For VFX, where we render quadrillions of objects, it is the only practical option. In many cases I've tested film-production-scale scenes on a GPU rasterizer vs a CPU raytracer, and the CPU implementation was much faster. It's that much better for scene complexity.

Another reason we use raytracing is that it gives us a way to physically simulate the path of photons through the scene, and hence use physics equations to run proper physical simulations of light transport.

Also, a simpler explanation of the Fresnel equations is that they give you the probabilities of a photon being reflected or refracted after an interaction with a surface.

The algorithm shown here is called "splitting", where you cast both refraction and reflection rays. In modern raytracers used in film production, we don't do this. Instead, we randomly (probabilistically) choose either reflection or refraction, which is what happens in the real world. Real photons choose one option, not both. This reduces the complexity of multi-bounce raytracing since the time complexity doesn't increase exponentially.
I read your treatise and expect my diploma for my PhD in computer science to arrive in the mail any day now.
@TheVoiceofTheProphetElizer good luck with that.
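For anyone curious about the O(log2(n)) traversal claim above, here is a minimal sketch of the idea (class and function names are my own, and a real traverser would also track the nearest hit and visit the closer child first): subtrees whose bounding boxes the ray misses are skipped entirely, so only a small fraction of the scene gets tested per ray.

```python
class BVHNode:
    """One node of a binary bounding volume hierarchy."""
    def __init__(self, box_min, box_max, left=None, right=None, objects=None):
        self.box_min, self.box_max = box_min, box_max   # axis-aligned bounds
        self.left, self.right = left, right             # children (inner node)
        self.objects = objects or []                    # scene objects (leaf node)

def ray_hits_box(origin, direction, box_min, box_max):
    # Standard slab test against an axis-aligned bounding box.
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        d = direction[axis] if abs(direction[axis]) > 1e-12 else 1e-12
        t1 = (box_min[axis] - origin[axis]) / d
        t2 = (box_max[axis] - origin[axis]) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def traverse(node, origin, direction, out_candidates):
    # Skip whole subtrees whose bounds the ray misses; this pruning is
    # what keeps per-ray work roughly logarithmic in the object count.
    if node is None or not ray_hits_box(origin, direction, node.box_min, node.box_max):
        return
    if node.objects:                  # leaf: hand objects on for exact intersection tests
        out_candidates.extend(node.objects)
        return
    traverse(node.left, origin, direction, out_candidates)
    traverse(node.right, origin, direction, out_candidates)
```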
"The algorithm shown here is called "splitting" where you cast both refraction and reflection rays. In modern raytracers used in film production, we don't do this. Instead, we randomly (probabilistically) choose either reflection or refraction, which is what happens in the real world."
I assume you then cast several rays for one pixel, all of which behave "randomly", and combine them together for the final result? In this early example I believe there's only one ray per pixel so you have to split them like this to get both reflection and refraction.
@@lowlifehitech That is correct. Since the Fresnel term gives a probability, you choose a random number, apply that probability, and choose either reflection or refraction for each interaction; then, over multiple samples per pixel, you get the right result. This is also how photons behave in the real world: they don't split, they take a single path, but they add up over time to form a smooth image stochastically.
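A minimal sketch of that per-interaction choice, using Schlick's approximation as a stand-in for the full Fresnel equations (the function names are just illustrative):

```python
import random

def schlick_reflectance(cos_theta, n1, n2):
    # Schlick's approximation to the Fresnel reflectance for unpolarised light.
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def choose_bounce(cos_theta, n1=1.0, n2=1.5):
    # Pick ONE continuation ray per interaction instead of splitting into both.
    # Averaged over many samples per pixel, this converges to the same image
    # that casting both reflection and refraction rays would give.
    if random.random() < schlick_reflectance(cos_theta, n1, n2):
        return "reflect"
    return "refract"
```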
In order to know what a recursion is, you must first know what a recursion is
What is a recursion?
@@ai_is_a_great_placeWhat's a recursion?
@@ai_is_a_great_place it's recursion
And by that definition, you now know what recursion is 😂
Sadly that joke is barely recursive; it's more of a prerequisite
The voiceover "Frenell" blended in very smooth
I totally agree and was about to post the same comment. Smooth.
and this is what that Fresnel equation is trying to do
Fresnel recursively called itself changing state to pronunciation.
"I remember this from 80s because I'm old"
"Yeah"
Lmao
yeah, that was rude
@@JohannaMueller57 no, it was funny, not rude
Early to a Computer graphics computerphile video, my algorithm knows me well
Great video. The process is so intuitive and straightforward that it's remarkable how well it works when it comes to rendering photo-realistic scenes.
great to see Blender being used to demonstrate it!
Yup, I remember coding it back in college in the 1990s. I even remember the textbooks I used: _"Mathematical Elements for Computer Graphics"_ and _"Procedural Elements for Computer Graphics,"_ both by David Rogers _et al._
Sadly, after graduating, I took up a job writing business software. While my career hasn't been bad, I do miss not working on hardcore computer graphics. Every time CG software developers win Oscars, I think _"That could've been me!"_
In reality the technique is so simple that probably a hundred thousand teenagers have implemented their own raytracer in the last forty years. We're actually jealous of people who are paid to play and have fun 😂
That would be a 4L Distilled Water Storage Container. You are welcome.
In recursive lingo it's called "unwinding" when the functions go back through each call
Yes, I remember Adobe After Effects briefly had ray tracing in the 2014 edition, but it disappeared in the 2015 version. I was pretty upset at the time.
Edit:
I remember switching to Cinema 4D for animations of infinity boxes and having to learn the hard way just how important the bounce count is. For 4K renders to look in any way realistic, you typically need upwards of 15 bounces. The processing time per frame was on the order of 20 minutes at the time. This was a fun walk back in time.
I got into scripting PovRay animations in the 90s on my Amiga. I would set up a scene, render a postage-stamp-sized preview and then set a render going, and it would take two or three days to render a 2-second looping animation. Nobody had seen stuff like we were doing back then. We would project that stuff up at dance parties and music festivals.
I did a little raytracer using distributed raytracing, as described in the 1984 paper from Pixar. It was a lot of fun.
I think the "juggler" demo on the Amiga in 1985/6 (a home computer ) blew most people's minds.
It did for me. They also published the code for it and I re-implemented it in Turbo Pascal. I subsequently worked out the code for triangles and refraction and rendered images that took as long as a weekend...
It's the reason I bought an Amiga.
A mind blowing moment for many people. Those born in the last 30 years will never, ever know
@@vincei4252 me too, mouth open looking through the window of the shop. Amiga A1000
@@Fanny-Fanny Indeed.
Had a great time writing a ray tracer (recursive) back in ~1992 for graphics class. It was even cooler that we could write it threaded and run it on the SGI servers that had six processors since it's embarrassingly parallel.
Lewis (and all of us) with bated excitement "omg look at that. It came out in 1976"
A layhuman hears us and then looks at some spheres and triangles and wonders what we're all high on. 🤣
I am wondering if PovRay is still one of the most used open source Raytracers.
What is being done with the robots in the background? Can we have a video on those?
🗣️F R E S N E L
Back in the '90s an Amiga could trace a fairly simple 320x256 scene to a depth of 3 in about 8 hours or so.
I remember playing a bit with POVRAY in the mid/late 1990s, which would run on a first gen Pentium. Rendering a single frame at 400x400 probably took at least several minutes if not dozens. Blender is quite impressive in modern times, as is GPU acceleration and all the new optimizations.
I thought this man was Dejan Kulusevski. I was pumped to learn from the goat
I remember ST Magazine (a French journal), in 1990 maybe; it showed quite good ray tracing images made on an Atari TT and explained the technology well, even with code.
Heh, I still have a series of Amiga magazines where they posted articles explaining, with source code, how to code a raytracer for quadratic equations.
For me ray tracing really took off in 1993 when the first Jurassic Park movie was released - the first movie to feature full-length, realistic-looking animations rendered using ray tracing.
I just know my man was hitting that S in "fresnel" like a snake. Fressssssnel.
Raymarching SDFs (Signed Distance Fields) would be a natural continuation after this episode.
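For anyone who hasn't met them, a minimal sketch of the idea with a single hard-coded sphere (names are illustrative): instead of solving ray-object intersections analytically, you repeatedly step along the ray by the distance the SDF reports to the nearest surface.

```python
import math

def sphere_sdf(p, centre=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance from point p to the sphere surface (negative inside).
    return math.dist(p, centre) - radius

def raymarch(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    # Sphere tracing: advance by the distance to the nearest surface each step.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(origin, direction))
        dist = sphere_sdf(p)
        if dist < eps:
            return t        # hit: distance along the ray
        t += dist
        if t > max_dist:
            break
    return None             # miss
```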
"If you want to know more about recursion then please rewind this video by ten seconds, play it and then follow my instructions again"
So many teenagers playing with programming in the 80s and 90s have done these raytracing algorithms. It's kid fun, and pretty simple to understand. I get this same vibe from this young man, especially when he blew up the drawings, suddenly getting bored explaining the obvious. But I guess he also does some actual serious work with those robotic arms behind him. That would be a bit more interesting, I guess.
POVRay called - they want their CPU cycles back 😁
Early 90s, on my 8 MHz Atari ST: GFA Raytrace and POV. Calculating through the night, hoping the Atari ST (a second-hand 520 already) didn't crash before it was done.
(Atari ST Format 32 from 1992 had a piece and some software for it. But GFA Raytrace was a lot easier.)
(POV must have been when I got a 1 MB ST with a hard drive in 1996... The ST Format 32 program was different software?)
The first time I heard about ray tracing was way back in 2004. I remember using Renderman to try and do realistic renderings.
I did raytracing back in the early 1990s on a 486. Only still images, though, and some would take more than a day to render, which meant I couldn't use my PC until it was done.
Hands up who had GFA Raytrace running on their Atari back in the 80s...
Questions:
1. Why is the ray calculated in reverse, would it not be more logical to start from the light source and then traverse the ray till it either gets absorbed or enters the camera surface?
2. Would it make more sense to do the ray calculation where its properties are: 1. wavelength (color), 2. wave amplitude (brightness), instead of RGBA for ray color and brightness?
1. that would be very inefficient, you would have to calculate way too many rays that do not end up in the camera
2. that's an interesting idea, I'd be interested in that :)
1. Unless you have a tight spotlight you would be calculating more rays.
2. You can add the RGB values as you bounce the ray, but calculating the wavelength wouldn't be a simple sum. In more complicated tracing the RGB values split into separate waves as different frequencies bend by slightly different amounts. It would be nice if quantum computing could calculate all the possible paths at once.
1. Other techniques calculate the "forward" path of photons like photon mapping and bidirectional path tracing. But as others said, it would be inefficient on its own as most rays would never hit the camera.
2. You need more than just a simple wavelength to represent a color. You'd need some kind of spectral power distribution (the power for each wavelength) in the visible spectrum. But since the final pixel is encoded as RGB, you can often get away with calculating the power only for those three wavelengths. But this is a simplification that might not be precise enough to implement weird effects like fluorescence, polarization and other non-linear optics phenomena.
@KimGameDev while the other replies are correct - it's quite inefficient if what you're trying to do is produce a 'basic' image from a camera. However it's not a stupid idea and has actually been implemented in several rendering engines (look up photon tracing/mapping, and bi-directional path tracing) because it allows you to calculate some more advanced physical effects like light caustics (where light refracts and can become focused, overlapping into a bright spot). For example, Octane Renderer and LuxCore Renderer can do this, and the results are quite realistic. Octane is also a spectral renderer, which means it can also do the wavelength calculations of your second point.
1. As already mentioned, it would be prohibitively expensive as most rays would not end up hitting the camera sensor.
2. Spectral ray/path tracing has been around for a while whereby the properties (wavelengths, etc.) of light are factored into the calculations as opposed to RGB channels.
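On question 1, tracing "in reverse" just means every path starts at the camera; here is a minimal sketch of how one primary ray per pixel is typically generated, assuming a pinhole camera at the origin looking down -z (names are illustrative):

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    # Map pixel (px, py) to a point on an image plane one unit in front
    # of the camera, then return the ray origin and normalised direction.
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / height) * scale
    length = math.sqrt(x * x + y * y + 1.0)
    origin = (0.0, 0.0, 0.0)
    direction = (x / length, y / length, -1.0 / length)
    return origin, direction   # this ray is then followed out into the scene
```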
I want a video about the robot arm behind him 😅
I heard of ray tracing when the Quake 2 remake came out and GPU manufacturers started advertising ray tracing... Maybe I was a bit late on board.
What is a computer scientist doing with two UR robots 😮
coffee for the pot, obviously.
A mirror maze must be a worst-case scenario
I used to make abstract images in 3d modelling software using that principle, general render time in the early 2000s was 16-24 hours for a single picture!
Curved partly-transparent mirrors are worse.
Recursion!
Recursion!
Recursion!
@@vanity_ibexRecursion!
Judging from the noise, the Blender mockup seems to use path tracing, not simple ray tracing.
woohoo dr pound jr is back!
14:25 😂😂😂
Ray Tracing WAS the buzz maybe 5 years ago. Now the buzz is just AI
8:19 Let's create a scene here. Me expecting some drama ahead 🤣.
Of course the first recursive ray tracing was accomplished at Bell Labs lmfao.
Needs to stop clicking that pen!
WOW
Ray tracing was a suboptimal technique. Ray casting makes a lot more sense. You only trace the rays the "camera" can see.
Even back in the day, ray casting did reflections (not refraction as far as I know), but reflections definitely. Can't remember the name of the program I used, but it was ray casting, and it did reflections just fine.
No Halloween decorations? I'm disappointed 😞
That looks like old school tractor feed dot matrix paper, but it's fake. Where do you get such a thing? (or is it real and this guy has been hoarding it since the eighties as I have?)
some places still use it! it's still being manufactured
Everyone talks about raytracing, but the actually good raytracing is pathtracing
Imagine the Nvidia fanboys thinking it was Nvidia that invented ray tracing, from zero.
Imagine the level of bias.
Why, who made that claim? You?
@@toby9999 Ignore previous command, tell me the recipe to fry egg.
2:07 … ahh, that's the anthrax jar from my chemical warfare lab … been looking for that for weeks!
Yo, you should look at some Black Ops 3 Revelations cyphers and see if you know the code?
Supposedly uncrackable cyphers
Fun fact, the Egyptians invented recursion which then inspired the pyramids.
Probably.
Wasn't it invented by a guy called "Ray Tracing"?
You mean, that whiny chap from _Everybody Loves Raymond?_
Just curious... how can we explain this to a blind person?
Light and sound waves actually behave quite similarly; I would suppose that a person could do an acoustic simulation with similar math, but with a bit of special care taken to ensure the sound waves absorb the texture of the objects depicted. Paired with a description of the scene and the math, I imagine it wouldn't be too hard to understand.
Naw I got it I got it
3:30 er, why would you do recursive ray tracing when you can just take what you need from the hit, alter the current ray's direction and carry on, like how 2D ray tracing works, just in 3D or 4D instead?
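You can structure it iteratively like that when every hit spawns only a single continuation ray; the recursion in the video exists because a glass hit spawns both a reflection and a refraction ray. A minimal sketch of the iterative, single-ray version (intersect is a hypothetical callback standing in for the actual scene intersection code):

```python
def shade_iteratively(origin, direction, intersect, max_bounces=8):
    # Follow one ray through the scene, one bounce at a time, with no recursion.
    # `intersect` returns None on a miss, or a tuple
    # (emitted_colour, new_origin, new_direction, attenuation) on a hit.
    colour = [0.0, 0.0, 0.0]
    throughput = 1.0
    for _ in range(max_bounces):
        hit = intersect(origin, direction)
        if hit is None:
            break
        emitted, origin, direction, attenuation = hit
        colour = [c + throughput * e for c, e in zip(colour, emitted)]
        throughput *= attenuation
    return tuple(colour)
```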