Textures, lighting, and MUCH faster rendering [Voxel Devlog #2]

  • Published: 6 Jan 2022
  • In this video, I describe the improvements that I've made to my voxel ray marching renderer. I talk about adding textures, implementing lighting with shadows, and how I optimized my algorithm to double the rendering speed.
  • Games

Comments • 27

  • @vullematti4447 2 years ago +9

    Really impressive!! But yeah, I'm also doing a C# 3D engine (well, pseudo-3D, since I'm creating a Doom-like BSP-based engine from scratch), so I can relate to how difficult it is to find a nice graphics API haha

  • @cPho3nix 2 years ago

    Earned my subscription, really looking forward to seeing more of this series.

  • @clemdemort9613 2 years ago +9

    This is genuinely really cool! I too am making a voxel renderer, but as a first programming project, so it's not as impressive; still, it's fun :)
    I'm subscribing to you sir!

    • @DouglasDwyer 2 years ago +2

      Thanks much! Working on this sort of thing is very rewarding, because there's always something more to implement or optimize. Since you mentioned your project, I went ahead and had a look at your channel - I personally would call what you've accomplished quite impressive! I especially like your transparency implementation. Though I haven't yet tried it, I imagine that transparency is much more easily implemented with raytracing than rasterization because you don't need to generate extra geometry for the inside of the mesh.

    • @clemdemort9613 2 years ago

      @DouglasDwyer Thank you :D! Yeah, voxels really are on the more fun side of graphics programming in my opinion.
      Can't wait to see what you make with them!

    • @logosking2848 2 years ago +1

      For your own good, please do not write a 3D renderer for your first project. They are hard, even for seasoned developers, and you will almost certainly fail.

    • @clemdemort9613 2 years ago +1

      @logosking2848 It's a tad too late for that ;)

  • @cona1432 1 year ago

    Damn, that's quite impressive!
    I myself have been playing around with voxel ray marching and ray marching in true 3D for a couple of years now and would be interested to see how you've implemented it. Hope you find some time to make more vids on this project. *cheers*

  • @arsenbabaev1022 2 years ago +1

    So your ray marching does pure ray-box intersection tests? No 3D DDA at all? From all the devlogs I didn't understand what you mean by “object”.

    • @DouglasDwyer 2 years ago +1

      In the current renderer, each "object" is a voxel grid, and each object is ray cast against separately. Ray-box intersection tests are used to determine which objects are hit by a given ray. Then, the voxel grid of each object is marched through using 3D DDA until a non-empty octree level is hit :)
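
A minimal C# sketch of the two stages described in the reply above: a slab-method ray-box test to find whether a ray enters an object's bounds, then 3D DDA stepping through the grid until a solid cell is hit. The names and the isSolid lookup are illustrative placeholders, not the renderer's actual code.

```csharp
using System;
using System.Numerics;

static class VoxelMarch
{
    // Slab-method ray vs. axis-aligned box test.
    // Returns the entry distance along the ray, or null if the ray misses the box.
    public static float? RayBox(Vector3 origin, Vector3 dir, Vector3 boxMin, Vector3 boxMax)
    {
        Vector3 inv = new Vector3(1f / dir.X, 1f / dir.Y, 1f / dir.Z);
        Vector3 t0 = (boxMin - origin) * inv;
        Vector3 t1 = (boxMax - origin) * inv;
        Vector3 near = Vector3.Min(t0, t1);
        Vector3 far = Vector3.Max(t0, t1);
        float tEnter = MathF.Max(near.X, MathF.Max(near.Y, near.Z));
        float tExit = MathF.Min(far.X, MathF.Min(far.Y, far.Z));
        return tExit >= MathF.Max(tEnter, 0f) ? MathF.Max(tEnter, 0f) : (float?)null;
    }

    // Amanatides-Woo style 3D DDA: step cell by cell through a size^3 grid
    // until the isSolid predicate reports a hit or the ray leaves the grid.
    public static (int x, int y, int z)? March(Vector3 start, Vector3 dir, int size, Func<int, int, int, bool> isSolid)
    {
        int x = (int)MathF.Floor(start.X), y = (int)MathF.Floor(start.Y), z = (int)MathF.Floor(start.Z);
        int stepX = dir.X >= 0 ? 1 : -1, stepY = dir.Y >= 0 ? 1 : -1, stepZ = dir.Z >= 0 ? 1 : -1;
        float tDeltaX = MathF.Abs(1f / dir.X), tDeltaY = MathF.Abs(1f / dir.Y), tDeltaZ = MathF.Abs(1f / dir.Z);
        float tMaxX = ((stepX > 0 ? x + 1 : x) - start.X) / dir.X;
        float tMaxY = ((stepY > 0 ? y + 1 : y) - start.Y) / dir.Y;
        float tMaxZ = ((stepZ > 0 ? z + 1 : z) - start.Z) / dir.Z;

        while (x >= 0 && y >= 0 && z >= 0 && x < size && y < size && z < size)
        {
            if (isSolid(x, y, z)) return (x, y, z);
            // Advance along whichever axis crosses its next cell boundary first.
            if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
            else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
            else                                { z += stepZ; tMaxZ += tDeltaZ; }
        }
        return null;
    }
}
```

Casting per-object like this means the DDA only ever runs inside grids whose bounding boxes the ray actually enters.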

  • @126sivgucsivanshgupta2 2 years ago

    I don't know much about Vulkan; I heard it's very involved. So did you use some library that abstracts a bit of Vulkan, or did you do everything from scratch yourself?

    • @DouglasDwyer 2 years ago +1

      Indeed, Vulkan is quite involved. It's a useful thing to know - especially as it has influenced other emerging standards, like WebGPU - but is very, very verbose. It took me about a week to get through the Vulkan tutorial, and another week to rewrite my code to be object oriented. That said, I did not utilize any abstraction, just the Silk .NET Vulkan bindings for C#. They are about one-to-one with the Vulkan specification.
      The main thing about Vulkan is that it gives you absolute control over everything - you control how images are presented to the screen, how buffers are stored, how data is copied, and the like. This allows you to tailor a project to specific needs, but means that initial setup and structuring of the library takes much longer. For this reason, I've recently been looking into trying WebGPU - it is another cross-platform API (which can even target Vulkan as a backend), but is slightly simpler to use.

    • @126sivgucsivanshgupta2 2 years ago

      @DouglasDwyer Ahh, I have recently gotten into rendering techniques and have been trying to implement them by myself (not recommended). I did one for a software rasterizer (search chillytomamto 3d on yt), followed one tutorial for ray tracing (raytracing in a week), and was trying to write a basic voxel ray tracer (very primitive, no texturing etc.), but wanted to learn how to use the GPU. From what I understand, OpenGL can't do ray tracing without some hacky code; is it the same for Vulkan? If so, how is ray tracing actually done on the GPU?

    • @DouglasDwyer 2 years ago +1

      @126sivgucsivanshgupta2 At this time, real-time ray tracing support on the GPU depends upon the method that you use to implement it. Most GPUs are currently designed for rasterization operations, without any innate ray tracing capabilities - they have a fixed rasterization pipeline (like vertex to geometry to fragment shader) which cannot be changed. Newer graphics cards, like the NVIDIA RTX series, do have a built-in ray tracing pipeline, with specialized shader stages for ray intersection and coloring. OpenGL does not support this ray tracing pipeline. Through an extension, you can take advantage of this on Vulkan, but it requires you to have RTX-capable hardware.
      It is not necessary, however, to use the ray tracing pipeline to achieve hardware-accelerated ray tracing. GPUs do support general-purpose calculations, so ray tracing operations can be performed in a standard fragment or compute shader as well, programmed much the same way you would on the CPU. Both Vulkan and OpenGL are capable of this, although compute shader support requires a newer OpenGL implementation. This is the technique I employ in the video; all ray tracing is performed in a compute shader. To be honest, for marching voxel volumes, this may be the optimal approach. The RTX standard is based around calculating triangle intersections, and I am not certain how easy it would be to map that pipeline to traversing, say, voxel octrees.
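
As a rough illustration of the compute-shader approach described above, here is a CPU-side C# sketch of what each shader invocation conceptually does: build one primary ray per pixel from the camera and hand it to a trace routine. The camera parameters and the trace callback are assumptions for the sketch, not the video's actual shader code.

```csharp
using System;
using System.Numerics;

// Each GPU compute invocation handles one pixel; this loop mirrors that logic on the CPU.
static class RayGen
{
    public static void Render(int width, int height, Vector3 camPos, Matrix4x4 camRot,
                              float fovRadians, Func<Vector3, Vector3, Vector3> trace, Vector3[] framebuffer)
    {
        float tanHalfFov = MathF.Tan(fovRadians * 0.5f);
        float aspect = width / (float)height;
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            // Map the pixel center to normalized device coordinates in [-1, 1].
            float ndcX = (2f * (x + 0.5f) / width - 1f) * aspect * tanHalfFov;
            float ndcY = (1f - 2f * (y + 0.5f) / height) * tanHalfFov;
            // Rotate the view-space direction into world space and march it.
            Vector3 dir = Vector3.Normalize(Vector3.TransformNormal(new Vector3(ndcX, ndcY, 1f), camRot));
            framebuffer[y * width + x] = trace(camPos, dir); // e.g. the DDA marcher, returning a color
        }
    }
}
```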

  • @tromboneguy4773 6 months ago

    How did you manage to take the average lighting of all exposed faces? Did you send information back to the CPU and then take the average?

    • @DouglasDwyer 6 months ago

      For any pixel that needs to be shaded, I round the sampling position to the nearest voxel. Then, I go through the visible face directions and calculate the Phong lighting for those faces, and average them. This all happens in the shader :)
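
A small C# sketch of that idea, assuming a simple Phong model and ignoring shadow rays: loop over the voxel's face normals, keep the faces pointing toward the camera, and average their lighting. The constants and parameter names are illustrative only, not the video's shader.

```csharp
using System;
using System.Numerics;

static class VoxelShading
{
    // viewDir points from the camera toward the voxel; lightDir points from the light toward the scene.
    // Phong lighting (ambient + diffuse + specular) is averaged over the camera-facing faces.
    public static Vector3 ShadeVoxel(Vector3 viewDir, Vector3 lightDir, Vector3 albedo)
    {
        Vector3[] faceNormals =
        {
            Vector3.UnitX, -Vector3.UnitX,
            Vector3.UnitY, -Vector3.UnitY,
            Vector3.UnitZ, -Vector3.UnitZ,
        };

        Vector3 sum = Vector3.Zero;
        int visible = 0;
        foreach (Vector3 n in faceNormals)
        {
            // Skip faces pointing away from the viewer.
            if (Vector3.Dot(n, -viewDir) <= 0f) continue;

            float diffuse = MathF.Max(Vector3.Dot(n, -lightDir), 0f);
            Vector3 reflected = Vector3.Reflect(lightDir, n);
            float specular = MathF.Pow(MathF.Max(Vector3.Dot(reflected, -viewDir), 0f), 16f);
            sum += albedo * (0.1f + 0.8f * diffuse) + new Vector3(0.2f * specular);
            visible++;
        }
        return visible > 0 ? sum / visible : Vector3.Zero;
    }
}
```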

    • @tromboneguy4773 6 months ago

      @DouglasDwyer Thank you! Do you include the lighting changes from shadows from other voxels when taking the average?

  • @russellsorin1856 7 months ago

    Are you still using the 7+ bytes per node currently? (2024)

    • @DouglasDwyer 7 months ago

      So my current engine is raster and greedy-meshing based, but I still use a similar octree format for representing the voxel data. It's used for meshing, sending things over the network, physics, and more. The octree still has nodes which are 16 bytes on average, with a base 2x2x2 block of voxels being a *total* of 17 bytes (one extra byte for some flags). I like the way algorithms can be designed with octrees, so I want to keep using them. In the future, though, I would love to switch to a sparse voxel DAG and see if I can get any performance gains there! The big challenge will be efficiently generating the DAG when models are created or changed.
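
Purely as an illustration of how a 16-byte node might be laid out, here is a hypothetical C# struct. The reply above only gives the sizes (16 bytes per node on average, plus one extra flags byte for the base 2x2x2 block), so every field name and width below is an assumption, not the engine's real format.

```csharp
using System.Runtime.InteropServices;

// Hypothetical 16-byte octree node; the extra flags byte for the base 2x2x2 block
// would live alongside the leaf data rather than inside this struct.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct OctreeNode
{
    public uint FirstChildIndex;  // index of the first child in a flat node array
    public byte ChildMask;        // which of the 8 octants have children
    public byte LeafMask;         // which octants are leaves rather than inner nodes
    public ushort MaterialHint;   // coarse material/palette info, e.g. for LOD rendering
    public ulong LeafVoxels;      // packed 2x2x2 voxel payload for leaf nodes
}
// 4 + 1 + 1 + 2 + 8 = 16 bytes total.
```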

    • @russellsorin1856 7 months ago

      @DouglasDwyer Fascinating, thank you for sharing! For my project I plan to stick entirely to ray marching. I also plan on experimenting with DAGs later on, although from my research it may be more trouble than it's worth if you have complex leaf nodes. Since I am using a simple material palette, I am still going to try it, because the potential memory savings are enticing.

    • @DouglasDwyer 7 months ago

      I will say that octrees' lack of coherency makes them a poor choice for rendering, so if I were to write another ray marcher, I would probably convert my octrees into a different format (brickmap, SDF, or mixture of the two) before sending them to the GPU. Sadly (as I discovered in this project) octrees have too many dependent memory reads. I would still use octrees on the CPU, though!

    • @russellsorin1856 7 months ago

      @DouglasDwyer I'm also curious whether you have any elegant solution for maintaining code readability between your CPU and shader code. Squeezing my SVO data through the GPU is the biggest hurdle for me so far: I can have an elegant octree structure on the CPU, but it has to become a very grotesque byte array to pass as a buffer to the GPU, since the GPU does not inherently support any complex data structures. I'm not past this yet, but in my head, constantly breaking the octree down, squeezing it through a GPU u32 buffer, and rebuilding it on the GPU comes at a cost to performance and developer experience.
      Currently, figuring out all the bitwise operations and janky buffer reading I have to do to construct an octree from my arbitrary node byte size is what I'm slogging through. WGSL on top of all that.
      In the future I do plan to keep the majority of octree manipulation on the GPU side, and pass only small changes through from the CPU that are then run through a compute pass.
      It was disappointing when I learned that shader languages do not easily support tree-like structures.
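
For context, one common way to squeeze a pointer-based octree into a flat u32 buffer is a breadth-first flattening pass where each node stores a child mask and the index of its first child, with siblings packed contiguously. The C# sketch below is illustrative only, not the commenter's WGSL setup; the node encoding (two u32 words per node) is an assumption.

```csharp
using System.Collections.Generic;

// Illustrative CPU-side octree node; Payload stands in for a material/palette index (fits in 24 bits here).
class CpuNode
{
    public CpuNode[] Children = new CpuNode[8];
    public uint Payload;
}

static class OctreeFlattener
{
    // Flatten the tree breadth-first into a uint buffer the GPU can index directly.
    // Each node occupies two uints: word0 = childMask (low 8 bits) | payload << 8,
    // word1 = index (in nodes) of the first child; children are stored contiguously.
    public static uint[] Flatten(CpuNode root)
    {
        var nodes = new List<CpuNode> { root };
        var words = new List<uint> { 0u, 0u };

        for (int i = 0; i < nodes.Count; i++)
        {
            CpuNode node = nodes[i];
            uint childMask = 0;
            uint firstChild = (uint)nodes.Count; // children will be appended starting here

            for (int octant = 0; octant < 8; octant++)
            {
                CpuNode child = node.Children[octant];
                if (child == null) continue;
                childMask |= 1u << octant;
                nodes.Add(child);
                words.Add(0u);
                words.Add(0u);
            }

            words[i * 2]     = childMask | (node.Payload << 8);
            words[i * 2 + 1] = childMask != 0 ? firstChild : 0u;
        }
        return words.ToArray();
    }
}
```

Storing a node's children contiguously keeps the GPU-side lookup to one index plus a popcount on the child mask, which is why this layout shows up so often for sparse voxel octrees.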

    • @russellsorin1856 7 months ago

      @DouglasDwyer I'll take a look at that; that may be a good lead, thank you! You replied so quickly, I left my previous comment before reading your reply :)
      Interesting that you say this. Wondering if a dynamic/animated scene changes anything.

  • @OffBrandChicken 2 years ago

    Of course you're not going to notice much difference in performance for LODs in a test like that; LODs are for LARGE renders. You've got to make the world big to see the impact.

  • @nou5440 2 years ago

    b