amazing
amazing work,
thank you for sharing!
Wtf this is so good
Awesome!
Amazing work
Thanks, you too.
@@BoyBaykiller thank you! :)
What is Cone Tracing?
I know I sound stupid, but I really don't know what it does.
Wow, that's stunning! How long have you been working on this now?
Thanks! This is part of the same project you previously asked about :) - 2 years.
If you mean this rendering technique specifically, that's 4 months from the first commit to this video.
Awesome work! I'm coming from the offline rendering realm without such hard real-time requirements but becoming increasingly curious about voxel cone tracing. If I may pick your brain a bit, what do you think about an implementation that combines voxel cone tracing with raytracing techniques like BDPT or MLT, casting cones for diffuse but rays or even ray packets for specular?
Also if you can help me, I am having so much trouble wrapping my mind around occlusion in the context of sampling mipmaps or higher levels of an SVO. I understand it (I think) in how we accumulate occlusion/opacity and terminate the cone marching process at a threshold, but it seems like it would introduce so many errors into the rendering equation unless we use numerous, very thin cones. It just completely breaks my intuitions of how light works. I just can't make sense of this idea of occlusion being accumulated over a cone instead of a single ray which is binary as to whether it's occluded or not.
Your understanding is correct, and "light leaking" would be one common negative consequence of cumulative cone-sampling.
But at the same time this process gives a reasonable approximation of diffuse lighting at even 1 sample, so it's a curse and a blessing.
There is a StepMultiplier parameter, so that gives you a way to somewhat control the error without adjusting coneAngle.
Also, handling transparent objects becomes really easy.
I can't say much about your idea of combining VXGI with ray tracing.
I think it would be a pain to make work properly and it would definitely be much slower. I'd imagine there are better solutions outside the context of VXGI.
Like using DDGI for diffuse only and then concentrating on specular completely separately (maybe with ray tracing).
looks amazing! wondering how you manage to render the debug voxel grid view without any artifacts caused by linear stepping or cone diameter based stepping missing geometries. Is it rendered by tracing a cone with small aperture per pixel from the camera into the scene?
Yes, exactly. As you can see, ConeAngle was set to 0 so that it keeps sampling lod 0 of the texture, and StepSizeMultiplier to 0.2. That's quite low actually; I now have it at 0.4 and it still looks the same.
That makes sense, thanks for the insights! I've been playing around with this technique as well but so far struggling with leaking as cones with large aperture start to miss geometries. I'm using 6 cones with 15 degree aperture for diffuse and doing cone diameter based stepping and it's only capable of capturing near field indirect diffuse :( With thin cones it's much better since it essentially greatly increases the number of steps/samples taken along a cone, thus less likely to miss geometries. Have you been experiencing similar issues? Thanks again! @@BoyBaykiller
Cool, another person checking out VXGI!
So you are advancing the sample position along the ray solely by the cone diameter?
If I do that then I'm also getting light leaking, yes. There is an additional step multiplier like this:
github.com/BoyBaykiller/IDKEngine/blob/master/IDKEngine/res/shaders/VXGI/include/TraceCone.glsl#L32
There is also the method of accumulating alpha non-linearly, so that the first samples contribute the most in order to reduce leaking, but I haven't tried that.
Also you just gave me an optimization idea:
Automatically adapting (increasing) the step multiplier based on cone angle or whatever. Because as you already discovered, you can easily get away with a higher step size for thin angles. I am already doing something similar when choosing how many cones to trace. I think this might give a good speed up!
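That adaptation idea could look something like this. Just a hypothetical sketch: the angle range matches the one mentioned elsewhere in the thread (0.005 to 0.32), but the multiplier bounds and the linear mapping are made-up illustration values, not anything from the engine.

```c
/* Hypothetical adaptive step multiplier: thin cones tolerate a larger
   step (fewer samples) without leaking, wide cones need a smaller one.
   Constants here are illustrative assumptions. */
float adaptive_step_multiplier(float cone_angle)
{
    const float min_angle = 0.005f, max_angle = 0.32f; /* angle range */
    const float thin_mult = 0.8f,   wide_mult = 0.2f;  /* assumed bounds */

    float t = (cone_angle - min_angle) / (max_angle - min_angle);
    if (t < 0.0f) t = 0.0f; /* clamp to the supported angle range */
    if (t > 1.0f) t = 1.0f;

    /* lerp from the thin-cone multiplier down to the wide-cone one */
    return thin_mult + t * (wide_mult - thin_mult);
}
```

Whether a simple linear mapping is the right shape (versus, say, scaling with the number of samples a cone of that angle actually takes) would need profiling.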
Didn't expect such a speedy response :) Yep, I'm solely marching by cone diameter every step. Gotcha, if I'm not mistaken, modulating the stepping distance with a value less than 1.f is kind of like indirectly increasing the sample count along a cone? Were you observing a perf drop when using a small step multiplier like .2 or .4? (mind if I ask if there is any other way to reach out to you directly for further discussions? no pressure if the answer is no :) thanks!
Yeah, it does come at a noticeable performance cost. Sure, I like discussing this stuff. My discord is boybaykiller.
How many cones and at what angles? Thanks.
Cone sample count is linearly interpolated from 1 to 6 based on how reflective the surface appears.
Cone angle is also linearly interpolated from 0.005 to 0.32 based on the surface roughness.
The code for that is in the IndirectLight function: github.com/BoyBaykiller/IDKEngine/blob/master/IDKEngine/res/shaders/VXGI/ConeTracing/compute.glsl
Is voxel-based global illumination easier to implement than ray tracing?
Hard to say honestly, especially because ray tracing implementations can vary a lot in complexity.
I'd say that if you can implement either VXGI or a GPU-Path Tracer you can implement the other as well.
How did you implement the time responsive skybox? That is very cool
The atmospheric scattering is taken from github.com/wwwtyro/glsl-atmosphere.
I move the "sun" by plugging the time variable into cos & sin. That makes it move in a circle and you get the day/night effect.
Pretty neato. I'm confused about the green bounding boxes not bounding the geometry when selected though, they all appear to fill the whole voxel GI volume in the horizontal axes and only conform in the vertical axis?
Good observation. They actually do perfectly bound the geometry of a selected mesh. It's just that all meshes that share the same material are merged into one. You can see that at 2:30 where I change the emissive property of one curtain type and it changes for all, because really it's just one mesh. This merging can be turned off.
Amazing! How does the camera navigation and object picking work?
Thanks. It's a standard camera implementation I just copied from somewhere. Object picking works by ray tracing.
came here from two minute papers :)
Dexter, is that you? - Nice stuff, are the shadows raytraced or its some SSAO?
Huh, Dexter? While I do support ray traced shadows, in this video it's just classic filtered shadow maps. And yes, there is also SSAO, regardless of VXGI.
amazing!