I made a better Ray-Tracing engine

  • Published: Jan 9, 2025

Comments • 533

  • @NamePointer
    @NamePointer  2 years ago +207

    Sorry for the short break :)
    I hope this video was worth the wait!
    Also, thank you so much for 10k subscribers!!
    EDIT: At around 1:17, I called OpenGL a graphics "library", which isn't the right word. OpenGL is an API, not a library!

    • @dynastylobster8957
      @dynastylobster8957 2 years ago +1

      i know this isn't entirely realistic, but what if you blurred the noise somehow instead of using accurate information?

    • @ahmeddawood8847
      @ahmeddawood8847 2 years ago +2

      use Vulkan?

    • @ssj3mohan
      @ssj3mohan 2 years ago

      You could do it for Unity, the [ Doomed Engine ]. Why not, right?

    • @localareakobold9108
      @localareakobold9108 2 years ago

      if your raytracing takes fewer resources I'll buy it

    • @teenspider
      @teenspider 2 years ago

      *s h o r t e s t* break of all time
      (joke)

  • @mgkeeley
    @mgkeeley 2 years ago +500

    A quick hack for fast antialiasing is to cast the rays through the corners of the pixels instead of the centers. It's basically the same number of rays, but you can average the 4 values for each pixel and get 1 step for free. Adding your offsets will improve the antialiasing over time as you currently have it.
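
    A minimal GLSL sketch of this corner trick, assuming hypothetical helpers `rayDirFor()` (builds a camera ray through a normalized screen coordinate) and `traceScene()` (the path tracer itself):

    ```glsl
    // Sample the 4 pixel corners instead of the center and average them.
    // In a full implementation the corner rays are shared with neighboring
    // pixels (e.g. traced into a (w+1) x (h+1) buffer), so the cost stays
    // at roughly one ray per pixel.
    vec3 shadePixel(vec2 pixel, vec2 resolution) {
        vec3 sum = vec3(0.0);
        for (int y = 0; y <= 1; y++) {
            for (int x = 0; x <= 1; x++) {
                vec2 uv = (pixel + vec2(x, y)) / resolution; // corner offsets
                sum += traceScene(rayDirFor(uv));
            }
        }
        return sum * 0.25; // average of the 4 corner samples
    }
    ```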

    • @waldolemmer
      @waldolemmer 2 years ago +18

      MSAA

    • @felineboy
      @felineboy 2 years ago +62

      Add a fifth ray in the center (with 1/2 the weight, and 1/8 for each corner) and you've got the quincunx algorithm.

    • @mrburns366
      @mrburns366 2 years ago +4

      I'm not a programmer and I can't wrap my brain around a pixel having a corner.. 🤦‍♂️ a pixel is a finite point with an X,Y coordinate, right? Say pixel 0,0 was in the upper left.. what would be the coordinates for the corners? 🤷‍♂️ Lol

    • @mgkeeley
      @mgkeeley 2 years ago +45

      @@mrburns366 good question! When casting a ray through the center of pixel "0,0", you actually cast it through coordinates "0.5, 0.5". The "screen" is virtual inside the raytracer, and has floating point coordinates for where the pixels are. Each pixel is a square with sides of length 1.0. Hope that helps!

    • @grande1900
      @grande1900 2 years ago +2

      Basically 4xMSAA

  • @ProjectPhysX
    @ProjectPhysX 2 years ago +314

    Your first raytracing video motivated me to implement fast ray-grid traversal in my CFD software for ultra-realistic fluid rendering. The simple stuff already brought me quite far. I'm amazed by the more complex techniques you show in this video. Thank you for sharing your knowledge!

    • @uniquelyrics2331
      @uniquelyrics2331 1 year ago

      that is some quite complex vocabulary

    • @ctbdjc
      @ctbdjc 6 months ago +1

      i can only think of that aerodynamics of a cow video

  • @pablovega7697
    @pablovega7697 2 years ago +305

    Don’t believe it. You just saw a video online. Or used google street view. There’s no way you went outside

  • @OllAxe
    @OllAxe 2 years ago +94

    9:17 One potential solution is to implement motion vectors and move the pixels in the buffer accordingly. That way you can move the camera while keeping old samples for additional data. Note however that newer samples need to be weighted more heavily so that new data is generated for previously invisible parts of the screen, and that specular reflections with low roughness would look inaccurate as you move around, since they depend on the camera direction. The latter may help the former a bit, but a proper solution might need to put specular reflections in a separate buffer and handle them differently.
    This is an important part of ray-tracing in Teardown, the SEUS PTGI Minecraft shader, Quake II RTX and many other RTX-powered games, so it's a well-known technique. There might even be papers or tutorials out there that describe how to do it in more detail. I also know that Dennis Gustavsson, the programmer of Teardown and its custom engine, has written a blog post on using blue noise to decrease perceived noise in ray-tracing, and other things about real-time ray-tracing that could be of help.
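
    A minimal GLSL sketch of that reprojection idea; `prevViewProj`, the buffer names, and `reconstructWorldPos()` are assumptions about how the engine stores its data:

    ```glsl
    uniform sampler2D prevColorTex;  // accumulated history from last frame
    uniform sampler2D depthTex;      // current depth buffer
    uniform mat4 prevViewProj;       // last frame's view-projection matrix

    vec3 temporalBlend(vec2 uv, vec3 currColor) {
        // Find where this pixel's world position landed last frame.
        vec3 worldPos = reconstructWorldPos(uv, texture(depthTex, uv).r);
        vec4 prevClip = prevViewProj * vec4(worldPos, 1.0);
        vec2 prevUV   = (prevClip.xy / prevClip.w) * 0.5 + 0.5;

        // History is off-screen: fall back to the fresh sample.
        if (any(lessThan(prevUV, vec2(0.0))) ||
            any(greaterThan(prevUV, vec2(1.0))))
            return currColor;

        // Fixed blend factor instead of 1/N accumulation, so newly revealed
        // regions converge quickly (the "weight new samples more" point).
        vec3 history = texture(prevColorTex, prevUV).rgb;
        return mix(history, currColor, 0.1);
    }
    ```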

    • @NamePointer
      @NamePointer  2 years ago +14

      Thanks for the interesting insight!

    • @WilliumBobCole
      @WilliumBobCole 2 years ago +5

      I came to the comments to say this, though as expected, others have beaten me to it. You're already doing temporal smoothing of the image, so you may as well not throw out the entire buffer. Obviously the more the camera moves, the fewer previous frames will be useful, but it's still way better than starting from scratch any time the camera moves

    • @oskartornevall8265
      @oskartornevall8265 1 year ago +1

      If you don't care about object movement, then simply reprojecting the samples based on the difference in camera movement / rotation and filtering based on projected vs real depth of the pixel works (a spatiotemporal filter, if you care about terminology). This is used in GTAO (Ground Truth Ambient Occlusion) if you want to look at an example of such a filter.

    • @convenientEstelle
      @convenientEstelle 1 year ago

      @@oskartornevall8265 Ground Truth Ambient Occlusion

    • @oskartornevall8265
      @oskartornevall8265 1 year ago

      @@convenientEstelle Yes, thanks. Was tired when I wrote that and misremembered the name :)

  • @JJIsShort
    @JJIsShort 2 years ago +60

    When I was implementing a raymarching algorithm, a lot of my stuff looked fake. Thanks for the new features for me to implement. Something I did use was an AABB optimisation: I went from being able to render about 15 objects in almost real time to way more. If you want more frames, it's quite easy. You have also inspired me to implement ray tracing and try to make my own engine. Thanks.

  • @KingBobXVI
    @KingBobXVI 2 years ago +51

    One simple change to consider: look into different color spaces for image processing, RGB is very intuitive because it's what displays use, but it's not really the best option for things like blending values together - actual color info can get lost and coalesce into muddy grays pretty easily. If you do all the math in HSV color space though, you can do blending the same, and maintain better hue and saturation while you blend before converting back to RGB for display.
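
    For reference, a widely circulated GLSL pair of RGB/HSV conversions (originally by Sam Hocevar) that could be dropped around the blend:

    ```glsl
    vec3 rgb2hsv(vec3 c) {
        vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
        vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
        vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
        float d = q.x - min(q.w, q.y);
        float e = 1.0e-10;
        return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
    }

    vec3 hsv2rgb(vec3 c) {
        vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
        vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
        return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
    }

    // e.g. blend in HSV instead of RGB:
    vec3 blendHsv(vec3 a, vec3 b, float t) {
        return hsv2rgb(mix(rgb2hsv(a), rgb2hsv(b), t));
    }
    ```

    (Note the reply below: hue is an angle and HSV is not linear in light energy, so naive HSV interpolation has problems of its own.)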

    • @omnificatorg4426
      @omnificatorg4426 2 years ago +4

      The main advantage of RGB over HSV is linearity, so you can easily add and multiply the values. Of course, don't forget about gamma correction, or you will get dark gloomy colours. The mean of #FF0000 and #00FF00 is #BBBB00.
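
      A quick worked version of that last point, using the approximate gamma of 2.2 (the exact sRGB curve lands on the #BBBB00 quoted above; the plain 2.2 power gives roughly #BABA00):

      ```glsl
      vec3 srgbToLinear(vec3 c) { return pow(c, vec3(2.2)); }
      vec3 linearToSrgb(vec3 c) { return pow(c, vec3(1.0 / 2.2)); }

      // Average #FF0000 and #00FF00 in linear space:
      // linear red (1,0,0) + linear green (0,1,0), halved -> (0.5, 0.5, 0).
      // Encoding 0.5 back: 0.5^(1/2.2) = 0.73 -> about 0xBA-0xBB per channel.
      vec3 meanRedGreen() {
          return linearToSrgb(0.5 * (srgbToLinear(vec3(1.0, 0.0, 0.0))
                                   + srgbToLinear(vec3(0.0, 1.0, 0.0))));
      }
      ```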

  • @ThrillDaWill
    @ThrillDaWill 2 years ago +7

    Great video!! I’m excited to see your new projects! Don’t stress too much over them and try to have fun!

  • @monstrositylabs
    @monstrositylabs 2 years ago +4

    I only subscribed two hours ago. Looked at the date of your last video and assumed this channel was dead. Then coincidentally you posted your first video in a year 10 minutes after I subscribed!

  • @dazcarrr
    @dazcarrr 2 years ago +31

    other channels may do this sort of thing, but none go quite as in depth on the technical side as you do. the 10k subs are well deserved!

  • @WhiteDragon103
    @WhiteDragon103 2 years ago +13

    If you separate view-dependent lighting (reflections) from view-independent lighting (lambertian) you can keep the view independent lighting buffer while moving the camera. If you move an object though, you'll have to reset both buffers.

    • @forasago
      @forasago 2 years ago +1

      Or you just accept that indirect lighting will lag behind / ghost a little. Only direct light / shadows need to keep up with the full framerate to look okay. Indirect lighting lags behind in basically every game engine, even Unreal 5.

  • @evannibbe9375
    @evannibbe9375 2 years ago +65

    The better solution to avoid rendering from scratch when the camera moves is to save the colors found, not as a buffer based on what appears on the screen, but instead as a buffer of what color should be associated with each part of the 3D objects those rays hit (color in this case being the total light that part of the shape can be considered to emit, averaged with the new calculation for that point).
    The one downside of this method is that it requires a lot more memory for each object in the scene (sort of like a baked light map texture), and that more metallic objects will take a bit longer to converge (since their lighting changes considerably with camera movement).

  • @bovineox1111
    @bovineox1111 2 years ago +2

    Super stuff - always wanted to create a raytracer myself. Did a bit of work, but I think the hardest bit to do quickly is sorting the objects and determining the nearest collision.

  • @spacechannelfiver
    @spacechannelfiver 2 years ago +4

    You can do an optimisation by rendering into sparse voxel space instead of screen space. All of those dot products you calculated from the lights are still the same within voxel space; you can just cull the non-visible voxels and recalculate whatever lights are in screen space if they move / change intensity. It just becomes a data management task, which is much faster. Lumen works like this AFAIK.

  • @oskartornevall8265
    @oskartornevall8265 1 year ago +5

    If you want even more realistic material behaviour, try looking into GGX scattering. It's a microfacet distribution, meaning it models the material as a ton of microscopic mirrors oriented depending on smoothness etc. Great video btw!
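
    A small GLSL sketch of the GGX (Trowbridge-Reitz) normal distribution term, the usual starting point for that model; the `alpha = roughness^2` mapping is one common convention:

    ```glsl
    // Density of microfacet normals aligned with the half-vector h.
    // Low roughness concentrates facets around the surface normal n
    // (mirror-like); high roughness spreads them out (diffuse-ish).
    float distributionGGX(vec3 n, vec3 h, float roughness) {
        float a     = roughness * roughness;
        float a2    = a * a;
        float nDotH = max(dot(n, h), 0.0);
        float denom = nDotH * nDotH * (a2 - 1.0) + 1.0;
        return a2 / (3.14159265 * denom * denom);
    }
    ```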

  • @jorgeromeu
    @jorgeromeu 2 years ago +3

    Hi, a month ago or so I finished my bachelor's thesis, which revolved around path tracing. This video explains it better than I've seen anywhere else!

  • @caiostange2770
    @caiostange2770 2 years ago +6

    Hello!
    A fix for losing accumulation when moving the camera: instead of merging frames directly, take into account a velocity buffer. This should tell you how much each pixel moved each frame. With that, you can combine pixels with previous ones even if they moved. TAA does this as well; you should look into it

  • @alex-yk8bh
    @alex-yk8bh 2 years ago +4

    Proud to say you're the reason why I disable adblock sometimes! Such a great piece of content. Congrats.

  • @marexexe7308
    @marexexe7308 2 years ago +2

    The visuals in this video are stunning! Great job! I enjoyed every frame of the video

  • @2002budokan
    @2002budokan 1 year ago

    Being able to summarize the entire ray-tracing process, its finest details and professional touches in such a short video is a special ability. Thanks.

  • @monuminmonumin6783
    @monuminmonumin6783 2 years ago +1

    i love that you're learning all this, sharing it, and especially that you're putting in the Effort.
    Great Work! i'm hoping for more advanced Versions, just because i'm curious how far you can go!

  • @yooyo3d
    @yooyo3d 2 years ago +2

    You can use the Multi Render Target extension to render stuff into multiple buffers at the same time. Use those additional buffers to store the current state of the "recursion". Be wise and encode only the necessary things in those buffers. Then just iterate multiple times over those buffers and the image will get better and better.
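
    A sketch of what that fragment-shader interface could look like; the buffer layout here is an assumption, not the video's actual code:

    ```glsl
    // One pass writes the path state for the next iteration into several
    // attachments (bound on the CPU side with glFramebufferTexture2D and
    // enabled via glDrawBuffers).
    layout(location = 0) out vec4 outRadiance;   // light gathered so far
    layout(location = 1) out vec4 outThroughput; // product of surface colors
    layout(location = 2) out vec4 outRayOrigin;  // next bounce origin
    layout(location = 3) out vec4 outRayDir;     // next bounce direction
    ```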

  • @minhlucnguyen7614
    @minhlucnguyen7614 1 year ago

    I'm learning 2D art, and watching your video made me realize the way an artist decides the hue, saturation, and value of a spot on a painting is exactly like how ray tracing works. The video is very fun and easy to follow!

  • @GaryMcKinnonUFO
    @GaryMcKinnonUFO 2 years ago +3

    Very cool indeed. I wrote my first tracer in BASIC, only Phong shading, and of course it took hours to render a single polygon, but it was a good exercise; it makes matrix multiplication actually interesting :)

  • @kamranki
    @kamranki 2 years ago +1

    Lovely video! I love how you go out of your way to explain everything visually while keeping it simple. I am glad to have found your channel.

  • @frankyin8509
    @frankyin8509 21 days ago

    this is a big boost for my personal project to develop a tiny physics engine. thx a lot. Merry Christmas :D

  • @dexterman6361
    @dexterman6361 2 years ago +1

    To be accurate, the deep learning algorithm NVIDIA uses is called DLSS (Deep Learning Super Sampling). It can theoretically be used with RTX off, as it is technically unrelated to RTX (hardware-accelerated ray tracing, or more accurately, hardware-accelerated bounding box checks for use in RTX)

    • @jcm2606
      @jcm2606 2 years ago +1

      DLSS actually has nothing at all to do with raytracing, and as of 2.0 it is essentially just a variation of TAAU, with a machine learning model taking care of when to reject previous frames and how frames should be blended together.

  • @ravenmillieweikel3847
    @ravenmillieweikel3847 2 years ago +1

    A way the noise-while-moving problem could be fixed is by offsetting the memory buffer's pixels by the depth buffer in the direction of movement, rather than completely starting over.
    Another way to get rid of aliasing is to downsample, that is, render the entire screen at a higher resolution, then scale it down.

  • @shitshow_1
    @shitshow_1 2 years ago +1

    Absolutely Amazing. I'm an undergrad. I've been very enthusiastic about 3D computer graphics and learning it since 9th grade. You put all my learnings in a nutshell, which gave me a good Recap. Thank you so much ❤

  • @lonewolfsstuck
    @lonewolfsstuck 2 years ago +3

    Should add a de-noiser post-process effect, it would help significantly

  • @Layzy3D
    @Layzy3D 2 years ago +3

    If you continue this raytracer, you could add PBR materials and Fresnel (for the moment it looks like you blended between metallic and diffuse materials)

  • @sjoer
    @sjoer 2 years ago +2

    You could also draw a circle where the ray intersects a surface; this could help you with indirect lighting around objects, as you can use it to average a larger area!
    I think I saw Unreal implement this, it is called splotch mapping.

  • @michaelleue7594
      @michaelleue7594 2 years ago +1

    I imagine that color from pixel to adjacent pixels is generally pretty strongly correlated, actually. If you could implement it with a statistical element that could estimate the minimum distance between pixels at which the correlation disappears, you could use the gpu to output pixels that are at least that far apart to start with, and then use a different, more directed method to output pixels close to those pixels.

  • @Supakills101
    @Supakills101 1 year ago

    This is a massive improvement, well done. Leveraging hardware acceleration would take this to another level.

  • @novygaming5713
    @novygaming5713 1 year ago +2

    One mistake I noticed is that reflective spheres have dark edges. This is caused because dot product shading is still being applied to non-diffuse materials. The solution is to interpolate between the shaded brightness and full brightness as the roughness goes down.
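
    In GLSL terms, the suggested fix could be as small as this (a sketch, not the video's code):

    ```glsl
    // Fade the Lambertian n.l term out as the surface becomes mirror-like,
    // so smooth spheres no longer get artificially dark edges.
    float surfaceShade(vec3 n, vec3 l, float roughness) {
        float lambert = max(dot(n, l), 0.0);
        return mix(1.0, lambert, roughness); // roughness 0 => full brightness
    }
    ```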

  • @Raftube02
    @Raftube02 2 years ago +1

    I think one way to reduce the noise that results from using random numbers independently of each other would be to use Perlin noise, because then the colors of the pixels would be more related to each other.

  • @djpiercy1235
    @djpiercy1235 2 years ago

    I think a smart way to reduce noise while minimising the performance impact would be to vary the number of indirect rays depending on the roughness of the surface. A surface with a roughness of 0 should only need to emit one reflection ray, since the light can only bounce in one direction. A surface with higher roughness would need a lot more samples: since the cone of directions the light can bounce off in is so much larger, you need more rays to fill it up.
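
    That heuristic is a one-liner in GLSL (a sketch; `maxSamples` is an assumed quality setting):

    ```glsl
    // Mirror surfaces (roughness 0) get a single reflection ray; rough
    // surfaces scale up toward maxSamples to fill their wider bounce cone.
    int indirectSampleCount(float roughness, int maxSamples) {
        return max(1, int(ceil(float(maxSamples) * roughness)));
    }
    ```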

  • @thanzawoo3389
    @thanzawoo3389 2 years ago

    after browsing through so many channels, yours is by far the best. The explaining method is so great and detailed, even for complex stuff.

    • @DaveeeOnTop
      @DaveeeOnTop 1 year ago

      I found the Sebastian Lague video also very informative. I think it wasn't out by the time you wrote your comment, but if you're still interested, I'd recommend you watch it

  • @martinevans8965
    @martinevans8965 2 years ago

    Best video on this topic on YouTube. So well explained, and a great result.

  • @londongaz2
    @londongaz2 2 years ago

    Great video! You've inspired me to work on improving my own RT engine, which suffers from many of these same problems.

  • @zelcion
    @zelcion 2 years ago

    Okay, I got this recommended on my YouTube front page, and I have never seen any of your videos. This is it, you're making it big.
    By the way, had I not looked at the view count and subscriber count, I would have thought this was a big production from a 500K-sub channel. Great work! Got my sub!

  • @chrisfuller1268
    @chrisfuller1268 2 years ago +3

    One of the best descriptions of ray-tracing I have ever seen. Most other descriptions are full of jargon invented and used only by a very small group of people, making them incredibly hard to understand for someone who needs to work with ray-tracing only every once in a while, not every day.

  • @christophercoronaios4732
    @christophercoronaios4732 2 years ago

    Great job man!! I find your ray tracing videos very helpful and informative. Please make more!

  • @youtubehandlesux
    @youtubehandlesux 2 years ago +10

    You could easily improve the realism of the scene with some tonemapping algorithms. They basically imitate how eyes or cameras perceive different strengths of light (e.g. colors desaturate at high light intensity instead of straight up becoming #FFFFFF), as opposed to just a simple gamma function
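
    One popular choice is Krzysztof Narkowicz's fitted ACES approximation; applied to the linear HDR color before gamma correction, it rolls highlights off smoothly instead of clipping them:

    ```glsl
    vec3 acesTonemap(vec3 x) {
        // Fitted filmic curve; input is linear radiance, output is in [0, 1].
        return clamp((x * (2.51 * x + 0.03)) /
                     (x * (2.43 * x + 0.59) + 0.14), 0.0, 1.0);
    }
    ```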

  • @oscill8ocelot
    @oscill8ocelot 2 years ago

    So glad I subscribed last year. :3 Excellent stuff!

  • @cgpoly3419
    @cgpoly3419 2 years ago +3

    I just finished a pathtracer for a university project. I am currently rendering the 30-second animation and hope it finishes rendering by the deadline in two days. My project is quite different (it doesn't even try to be real time, because that wouldn't work with our scene, and we don't use our sky map for lighting since it just contains stars and wouldn't contribute a significant amount of light (it's a space scene)), but some of the problems were the same, especially rewriting some functions to make them non-recursive. It's reassuring to see that I am not the only one who is annoyed by some aspects of OpenGL.

    • @jcm2606
      @jcm2606 2 years ago +3

      This isn't an issue with OpenGL, rather it's an issue with GPUs in general. GPUs don't have a stack, every function call is inlined and all automatic variables live in a shared register file, so it's not possible for a GPU to support recursion at a fundamental level. You *can* emulate recursion via iteration, by creating your own stack structure and dynamically appending to and iterating over it, but this will cause coherence issues and significantly worsen register pressure, which can make performance plummet.

  • @nickadams2361
    @nickadams2361 2 years ago

    Man this looks like a very fun project to think through

  • @timothyoh9715
    @timothyoh9715 2 years ago

    Your content is great man. Keep up the good work

  • @ruix
    @ruix 2 years ago +1

    Real cool project. But next time you should raise your volume a bit

  • @notgartificial8591
    @notgartificial8591 2 years ago

    I recommend adding something called "Fresnel" to the engine, since the ground plane is looking a bit flat near the horizon. The shallower the angle a ray comes in at, the more reflective the object gets; this effect gets weaker the rougher the object is. It is also a mandatory feature if you want photorealism, since our brains know something is off without it.
    I also recommend adding caustics, because they also affect realism. When computing indirect lighting, you should make rays bounce off reflective surfaces, and if one reaches a light or a bright surface, you light the original surface accordingly.
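
    The standard cheap version of that effect is Schlick's approximation of the Fresnel equations (a sketch; `F0` is the reflectivity at normal incidence, around 0.04 for dielectrics):

    ```glsl
    // Reflectance climbs toward 1.0 as the view grazes the surface.
    // cosTheta = dot(normal, viewDir), clamped to [0, 1].
    vec3 fresnelSchlick(float cosTheta, vec3 F0) {
        return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
    }
    ```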

  • @dragoncosmico
    @dragoncosmico 2 years ago +1

    you can study the bloom shaders of ReShade, they're also written in C-like shader code

  • @nafizjubaer1717
    @nafizjubaer1717 2 years ago +3

    Try using motion vectors to offset previous frames when combining them with the current one during denoising, i.e. temporal denoising

  • @malachyfernandez6285
    @malachyfernandez6285 2 years ago +1

    this has inspired me to make a raytracer from scratch, in Scratch.

  • @blacklistnr1
    @blacklistnr1 5 months ago

    12:20 [no recursion, what to do?]: You can always convert a simple recursive function to an iterative one by manually using a stack in place of the automatic call stack you're used to.
    so:
    1. make a struct with the args
    2. make an array of that struct with a MAX_SIZE; push when calling, pop to return
    3. if your function makes the recursive call in the middle, or makes multiple calls: split it into blocks controlled via a switch and a jump_point argument, and move any needed locals into the args as well
    4. abuse the power of this knowledge :)) (see the sketch below)
    P.S. for your case (struggling with FPS already), I think that just running computeSceneColor again would have been too expensive anyway
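
    For the bounce recursion specifically, the recursive call is in tail position, so the stack recipe collapses into a simple loop with a running throughput. A GLSL sketch under assumed helpers (`Hit`, `hitScene()`, `sampleBounce()`, `skyColor()` and `MAX_BOUNCES` are not from the video):

    ```glsl
    vec3 tracePath(vec3 origin, vec3 dir) {
        vec3 radiance   = vec3(0.0);
        vec3 throughput = vec3(1.0); // product of surface colors so far
        for (int depth = 0; depth < MAX_BOUNCES; depth++) {
            Hit hit = hitScene(origin, dir);
            if (!hit.found) {
                radiance += throughput * skyColor(dir);
                break;
            }
            radiance   += throughput * hit.emission;
            throughput *= hit.albedo;
            origin = hit.position + hit.normal * 1e-4; // avoid self-hit
            dir    = sampleBounce(hit, dir);           // next bounce direction
        }
        return radiance;
    }
    ```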

  • @szybowiec2140
    @szybowiec2140 2 years ago

    THIS TUTORIAL REALLY WORKS I AM FROM PHILIPPINES! THIS MAN DESERVES A SUBSCRIPTION!

  • @yash1152
    @yash1152 2 years ago

    i am super happy to see so many comments about how the camera movement and data reuse could be done. so many pointers are given in the comments, it seems like it would be enough to serve as a separate video on its own (:

  • @niloytesla
    @niloytesla 2 years ago +2

    You are my inspiration!
    I also tried to make my own 'ray tracing' engine, but I couldn't. It's hard, so I just stopped.
    But now I think I should start over.

  • @miguelguerrero3394
    @miguelguerrero3394 2 years ago +1

    Very good video. The next implementation could be importance sampling, so that indirect rays are biased towards the light sources, significantly reducing the noise
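
    A common concrete form of this is next-event estimation: at each bounce, also send one shadow ray straight at a light and weight its contribution by the light's geometry. A GLSL sketch for a diffuse surface; `Hit`, `LightSample`, `samplePointOnLight()` and `visible()` are assumed helpers:

    ```glsl
    vec3 directLight(Hit hit) {
        LightSample ls = samplePointOnLight(); // position, normal, emission, areaPdf
        vec3 toLight = ls.position - hit.position;
        float dist2  = dot(toLight, toLight);
        vec3 wi      = toLight * inversesqrt(dist2);

        if (!visible(hit.position, ls.position)) return vec3(0.0); // shadowed

        float cosSurf  = max(dot(hit.normal, wi), 0.0);
        float cosLight = max(dot(ls.normal, -wi), 0.0);
        // Geometry term converts the light's area pdf to a solid-angle measure.
        float g = cosSurf * cosLight / dist2;
        return (hit.albedo / 3.14159265) * ls.emission * g / ls.areaPdf;
    }
    ```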

  • @phillipotey9736
    @phillipotey9736 2 years ago

    This has given me an idea for a "quantum" renderer. You have each point on an object "be a camera" and save the color value to that point on the surface. The colors are then streamed continuously to the global camera. Each object would then have a texture that emits light for every direction and updates only when something changes in the object's direct line of sight. It might take a lot of RAM, so things further from the global camera would be stored more dynamically in memory. This works off of the current quantum interpretation that light is a wave until it's collapsed by hitting an object or interacting, at which point the true color/intensity is chosen.

  • @SCPokSecondaccound
    @SCPokSecondaccound 1 year ago

    Consider enhancing the visuals with tone mapping, and adding bloom for a better appearance. Address the noise issue by employing two frame buffers: one for the current frame and another for the previous frame. Displace the second buffer using its motion vectors, blend it with the current frame, and display the result.
    Edit: I know my comment is a bit late, but I hope it still helps, even if you've already fixed it.
    Edit2: To make the bloom more prominent, consider expanding its radius.

  • @abenezertena6441
    @abenezertena6441 2 years ago

    I thought graphics programming was rocket science; you inspired me a lot. Thanks!

  • @sannfdev
    @sannfdev 2 years ago +1

    What about a de-noising algorithm? You could reference surrounding pixels to remove noise.

  • @FelixNielsen
    @FelixNielsen 2 years ago

    Regarding the buffer of previously rendered frames (used to reduce noise) not working once the camera has moved, or is moving: it seems to me a reasonably good approximation to simply transform the image in the buffer to fit the new perspective. Of course it isn't a perfect solution, but it shouldn't be all that difficult, or resource-intensive, compared to the other work being done.
    Furthermore, it might be worth weighing the buffered frames so that earlier frames have less impact than later rendered frames, but I don't really know if that is worth the effort, or indeed, has the desired effect.

  • @peacefulexistence_
    @peacefulexistence_ 2 years ago +5

    Afaik the RTX "raytracing cores" just hardware-accelerate ray-triangle intersections. By machine learning, did you mean DLSS, which uses machine learning to upscale frames so that they don't have to be computed at full resolution?

    • @NamePointer
      @NamePointer  2 years ago +1

      I watched this video from NVIDIA a while back: ruclips.net/video/6O2B9BZiZjQ/видео.html
      From my understanding, it implies that RTX cards have to deal with quite a bit of noise, and that they use neural networks for denoising.

    • @peacefulexistence_
      @peacefulexistence_ 2 years ago +2

      @@NamePointer afaik there's no special hardware in the RTX card for denoising.
      It's just an NN which you can run on a non-RTX card.
      Same for DLSS.
      Thus both are post-processing steps done in software. The GPU just accelerates some of the computation in the NN with the tensor cores (I haven't looked into the tensor cores much)

    • @ABaumstumpf
      @ABaumstumpf 2 years ago +2

      @@NamePointer You might want to watch that video again and then follow the links they provide in the description. The short version:
      No, RTX is just the brand name, and behind it are just Vulkan, DX12 and OptiX APIs that let you ... trace rays.
      The denoising is an entirely separate library. Or, more accurately, not just one but multiple libraries with multiple approaches and algorithms, each designed for specific applications with their own pros and cons.
      "RTX" on its own does not mean anything, as it is a marketing term that encompasses, among other things, vendor-agnostic Vulkan ray tracing, which does just ray acceleration.

    • @NamePointer
      @NamePointer  2 years ago

      When I said "RTX technology" I didn't mean the hardware specifically, but the entire ray tracing package NVIDIA gives game developers to add ray tracing to their games. If I understood correctly, that one uses neural networks for denoising.

    • @ABaumstumpf
      @ABaumstumpf 2 years ago +1

      @@NamePointer "If I understood correctly, that one uses Neural networks for denoising."
      There are multiple libraries included, one of which uses the result of a trained network; others are just "simple" postprocessing or temporal solutions.

  • @fghjkcvb2614
    @fghjkcvb2614 2 years ago

    Great to see you again!

  • @BossBeneBaby
    @BossBeneBaby 1 year ago

    Hey, great video. In 2021 Khronos released the ray tracing pipeline for Vulkan. It supports all modern graphics cards (even AMD) and it's incredibly fast. I managed to write a realtime pathtracer, and even at 4K resolution it is possible to render in realtime.

  • @helenvalencia7073
    @helenvalencia7073 2 years ago

    for bloom, take your frame buffer, make every pixel under a threshold black, then sample it to a lower resolution and average it with the normal-sized one; repeat a few times, then add that to the original image
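
    A GLSL sketch of the first step of that chain (the bright-pass); `sceneTex` and `threshold` are assumptions, and the downsample/average/add steps would be further passes over this result:

    ```glsl
    uniform sampler2D sceneTex;
    uniform float threshold; // e.g. 1.0 for an HDR buffer
    in vec2 uv;
    out vec4 fragColor;

    void main() {
        vec3 c = texture(sceneTex, uv).rgb;
        float brightness = max(c.r, max(c.g, c.b));
        // Keep only the bright pixels; everything else goes black.
        fragColor = vec4(brightness > threshold ? c : vec3(0.0), 1.0);
    }
    ```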

  • @eboatwright_
    @eboatwright_ 2 years ago

    Awesome! Definitely looks a lot better :)

  • @darltrash
    @darltrash 2 years ago

    Cool project, man!

  • @JorgetePanete
    @JorgetePanete 2 years ago +1

    I made one in Java without the GPU too, but projecting photons, much... slower. I used it as a showcase of Java

  • @Jurkox26
    @Jurkox26 1 year ago

    At 16:22 the golden sphere seems to be losing some energy. Might wanna check for negative values somewhere. Anyway, neat job as usual.

  • @crestofhonor2349
    @crestofhonor2349 2 years ago

    I do love seeing anything ray tracing related

  • @rigbyb
    @rigbyb 11 months ago

    Holy shit, this is amazing. Thanks for making this video

  • @quads4407
    @quads4407 2 years ago +1

    Is it possible to paint the indirect illumination onto the spheres, so that it is recalculated only when the spheres move instead of when the camera moves?
    Also, each new frame would then reduce the noise even if the camera is in a different position

    • @NamePointer
      @NamePointer  2 years ago

      It would be possible for objects with a high roughness, where the color of a point on the object stays the same regardless of where the camera is. Baked global illumination is actually really common in current engines, ray-traced or not, but I don't know how common such hybrid forms are

  • @NavyCuda
    @NavyCuda 2 years ago +2

    I don't think I've laughed so hard in a long time... We used to joke about that unnatural light in the outside world that burned our skin. Let's stay under the warm comfort of fluorescent lighting!

    • @VRchitecture
      @VRchitecture 2 years ago

      Everything outside of virtual space is unnatural ☝🏻

  • @pablovega7697
    @pablovega7697 2 years ago

    Great video! Hope more come soon

  • @spectacledsquirrel
    @spectacledsquirrel 2 years ago

    Enjoyed the video, well done mate!
    13:23 pretty sure you could've made it iterative; it seemed primitive recursive (you didn't show much there :l ). Even if it is strictly recursive, you could've simulated it with a stack to see if your idea works as intended.

  • @Matlockization
    @Matlockization 2 years ago

    It was great that you developed something from scratch. I'm sure the software caused no bottlenecks in the hardware.

  • @mastershooter64
    @mastershooter64 2 years ago

    oh that's a nice idea 0:11 I should make videos about the physics engine that I want to write, so that people can give their feedback and I can improve it

  • @saricubra2867
    @saricubra2867 2 years ago

    Wow, I'm liking your path-traced engine. It looks very natural, unlike the overexposed, overblown Unreal Engine 5 implementation, which looks like everything is lit by a giant flashlight extremely close to the ground.

  • @alfred4194
    @alfred4194 2 years ago +1

    Amazing content once again

  • @hutchw2947
    @hutchw2947 2 years ago

    I love you, I'm surprised you don't have more subscribers

  • @sinom
    @sinom 2 years ago

    instead of discarding all the info whenever the camera moves, you could calculate the motion vectors and modify the current colour buffer based on them

  • @flameofthephoenix8395
    @flameofthephoenix8395 1 year ago

    You can probably just have it so that each time the camera moves you average each pixel in the buffer with the current pixel, then write that back to the buffer, effectively still using the old rendered bits but taking them with a grain of salt. A better explanation: say you've rendered pixels A, B, and C, then the camera moves, so now you have pixel D, but all the previous ones aren't accurate anymore. So you alter the previous pixels in the buffer: A = (A+D)/2, B = (B+D)/2, C = (C+D)/2, and the next pixel you render becomes (A+B+C+D)/4. This will of course still have a little blur, but less so.

  • @kikihun9726
    @kikihun9726 2 years ago +1

    You should use the NVIDIA ray tracing SDK.
    Using the RT cores will reduce the CPU usage a lot.

  • @gcma1999
    @gcma1999 2 years ago +1

    Not only are the results great, but the video as well must've been very tiresome to make. Great work! Did you use Manim for the concept animations?

    • @NamePointer
      @NamePointer  2 years ago +1

      Happy to hear you liked the video! I didn't use Manim, only After Effects and Blender for the animations.

  • @michaeldavid6832
    @michaeldavid6832 2 years ago

    You can rewrite some recursive functions as loops: tail recursions.

  • @Vioxtar
    @Vioxtar 2 years ago

    This video was great, but the intro had me thinking you were going to tackle the primitives-only issue with some support for meshes.

    • @NamePointer
      @NamePointer  2 years ago +1

      I really wanted to, but I ran out of time, as ray tracing for meshes involves a whole lot of other techniques that are new to me

  • @qdriela
    @qdriela 2 years ago +1

    0:56 most relatable thing ever

  • @triplezgames3882
    @triplezgames3882 2 years ago

    The CS:GO flashbang sound when you were exposed to sunlight 😂

  • @anerdwillhackit
    @anerdwillhackit 3 months ago

    Amazing work!

  • @ThePixelisaThor
    @ThePixelisaThor 2 years ago

    Clean comeback !

  • @silverace_71
    @silverace_71 2 years ago +1

    Cool project idea: make a ray tracing engine that's optimized for Vulkan. (I'm totally not going to turn it into a Minecraft mod)

  • @jackcomas
    @jackcomas 2 years ago

    It's really great if your computer is near the windows, especially when you're coding a Ray Tracing Engine

  • @LegendLength
    @LegendLength 2 years ago +2

    Anti-aliasing is hard without using previous frames. It seems like you have to send multiple rays each frame, which seems slow.

    • @thespinningcube
      @thespinningcube 2 years ago +1

      Luckily a lot of sampling is already being done to get accurate indirect lighting, so it isn’t really slowing it down any further, just adding another source of randomness to each sample.

    • @LegendLength
      @LegendLength 2 years ago

      @@thespinningcube Ok, that makes me feel a bit better then. I haven't done proper lighting yet but will in a few weeks.
      I was hoping to send only 1 light ray per pixel though. I'll just have to see how bad it is to send multiple light rays.

  • @hallen.09
    @hallen.09 2 years ago +1

    Could you make this an app that we can run without downloading the src?

  • @thecoweggs
    @thecoweggs 2 years ago

    This is actually really impressive