Parallax Voxel Ray Marcher

  • Published: 26 Aug 2024
  • Project in High Performance Computer Graphics (3rd place) at LTH by Theodor Lundqvist, Jiuming Zeng and Jintao Yu.
    Paper and code are available on GitHub:
    github.com/the...
    The paper will also be listed here:
    cs.lth.se/edan...

Comments • 48

  • @Conlexio 8 months ago +31

    It would be cool to see you run the cellular automata code in a GPU compute shader. It would definitely get better performance.

    • @theodorlundqvist8174 6 months ago

      You are definitely right about that!

    • @theodorlundqvist8174 6 months ago +1

      Apple does not support compute shaders for OpenGL, unfortunately. We could maybe render to a buffer anyway.
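
      For illustration only, a minimal sketch of what such a compute pass might look like: one step of a 3D cellular automaton over a voxel grid in GLSL 4.3 compute (the feature Apple's OpenGL 4.1 lacks, hence the constraint above). The r8ui layout and the rule itself are assumptions, not the project's actual code.

      ```glsl
      #version 430
      layout(local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

      // Double-buffered voxel grids: read from src, write to dst, then swap.
      layout(binding = 0, r8ui) uniform readonly  uimage3D src;
      layout(binding = 1, r8ui) uniform writeonly uimage3D dst;

      // Count non-empty cells among the 26 Moore neighbours.
      uint neighbours(ivec3 p) {
          uint n = 0u;
          for (int dz = -1; dz <= 1; ++dz)
          for (int dy = -1; dy <= 1; ++dy)
          for (int dx = -1; dx <= 1; ++dx) {
              if (dx == 0 && dy == 0 && dz == 0) continue;   // skip the cell itself
              ivec3 q = p + ivec3(dx, dy, dz);
              if (any(lessThan(q, ivec3(0))) ||
                  any(greaterThanEqual(q, imageSize(src)))) continue;  // grid border
              if (imageLoad(src, q).r > 0u) n++;
          }
          return n;
      }

      void main() {
          ivec3 p = ivec3(gl_GlobalInvocationID);
          if (any(greaterThanEqual(p, imageSize(src)))) return;
          uint alive = imageLoad(src, p).r;
          uint n = neighbours(p);
          // Hypothetical rule: live cells survive with 4-5 neighbours, empty cells spawn at 5.
          uint next = (alive > 0u) ? uint(n == 4u || n == 5u) : uint(n == 5u);
          imageStore(dst, p, uvec4(next, 0u, 0u, 0u));
      }
      ```

      This would be dispatched from C++ with glDispatchCompute((N + 7) / 8, (N + 7) / 8, (N + 7) / 8) for an N-cube grid, followed by an appropriate glMemoryBarrier before the result is read.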

  • @JMPDev 8 months ago +9

    This is awesome. Looking forward to you sharing the paper!

  • @MajikayoGames 6 months ago +2

    Awesome!!! This looks like Alex Evans' approach for the initial version of the Dreams PS4 renderer; he gave a talk on it at Umbra Ignite 2015. It also resembles the Claybook renderer a bit, whose developer gave a talk as well. I want to do something like this too, and I've been meaning to read the GigaVoxels paper so I can get normals for a smooth look like Alex achieved. Thanks for doing this! I'll be checking out the source code when I have time.

  • @MiriadCalibrumAstar 7 months ago +4

    Minecraft with 200 chests: "stop, I'm crashing"

    • @alonepoptart24_6 3 months ago

      Because Rust is compiled and Java (kind of) isn't. Rust gets better performance.

  • @user-co3nl9co5g 8 months ago +1

    This looks pretty darn cool.
    It really reminds me of the voxel-space algorithm from Comanche, but with a blocky look.

  • @addvector4918 7 months ago +1

    Incredible! Great work

  • @floerwig2194 8 months ago +1

    This is truly amazing

  • @gabrielbeaudin3546 7 months ago +1

    Great work. It's really impressive.

  • @SydneyApplebaum 5 months ago +1

    Cool

  • @colin_actually 7 months ago +4

    Very cool. Are you using bindless textures or is the bottleneck just the memory bandwidth hit every frame?

    • @theodorlundqvist8174 6 months ago

      Thank you. We upload the textures every frame, which puts a floor of about 6-10 ms on render performance. The actual ray tracing is also pretty slow, at 22 ms for complex terrain (see the paper). On M1 we even get a 200 ms CPU overhead from uploading the texture, even though there is no real "upload" on M1 since it uses unified memory.

  • @olbluelips 7 months ago +1

    wow, really cool

  • @10bokaj 7 months ago +2

    Hey, don't upload every frame; keep the buffer mapped on the CPU with persistent mapping (see the sketch after this thread). This reduces the overhead a lot.

    • @theodorlundqvist8174 6 months ago

      Yes, I wrote about this in the paper. I did not know how to do it at the time.
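
      A minimal sketch of the persistent-mapping approach suggested above, assuming an OpenGL 4.4 context, GL_R8UI chunk textures, and a pixel-unpack buffer as the staging area (invented names; not the project's code):

      ```cpp
      #include <GL/glew.h>
      #include <cstdint>
      #include <cstring>

      constexpr int N = 64;                                 // assumed chunk side length
      constexpr GLsizeiptr BYTES = GLsizeiptr(N) * N * N;   // one byte per voxel

      GLuint pbo = 0;
      uint8_t* staging = nullptr;   // stays valid; never unmapped

      void init_staging() {
          const GLbitfield flags =
              GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
          glGenBuffers(1, &pbo);
          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
          glBufferStorage(GL_PIXEL_UNPACK_BUFFER, BYTES, nullptr, flags);
          staging = static_cast<uint8_t*>(
              glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, BYTES, flags));
      }

      void upload_chunk(GLuint tex, const uint8_t* voxels) {
          std::memcpy(staging, voxels, BYTES);   // plain CPU write, no driver call
          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
          glBindTexture(GL_TEXTURE_3D, tex);
          // With a pixel-unpack buffer bound, the last argument is a byte offset.
          glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, N, N, N,
                          GL_RED_INTEGER, GL_UNSIGNED_BYTE, nullptr);
      }
      ```

      A real version also needs a glFenceSync (or a ring of buffer regions) so the CPU does not overwrite data the GPU is still reading.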

  • @Puzomor 7 months ago +2

    Amazing. Does the technique require uploading the 3D texture data to the GPU every frame? It seems like that could be avoided by only sending data when it changes.
    I might be completely wrong because I don't know exactly how it's implemented, but please reply, I'm really interested in this!

    • @theodorlundqvist8174 6 months ago

      Sorry for the late reply, I had no idea that this video would get any views. Thank you so much for the feedback; I have added the code and the paper to the description.
      We upload all the textures to the GPU every frame. This creates some overhead that we would like to remove. You are completely right that it would be much better to only upload on edits (a sketch of that follows below).
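
      A hedged sketch of the upload-on-edit idea, using a per-chunk dirty flag (all names invented for illustration, assuming GL_R8UI chunk textures):

      ```cpp
      #include <GL/glew.h>
      #include <cstdint>
      #include <vector>

      constexpr int N = 64;              // assumed voxels per chunk side

      struct Chunk {
          GLuint tex = 0;                // dense N^3 3D texture on the GPU
          std::vector<uint8_t> voxels;   // CPU-side copy, size N*N*N
          bool dirty = true;             // set on edit, cleared after upload
      };

      void set_voxel(Chunk& c, int x, int y, int z, uint8_t v) {
          c.voxels[(z * N + y) * N + x] = v;
          c.dirty = true;                // defer the upload instead of doing it now
      }

      // Called once per frame: only edited chunks touch the bus.
      void sync_to_gpu(std::vector<Chunk>& chunks) {
          for (Chunk& c : chunks) {
              if (!c.dirty) continue;
              glBindTexture(GL_TEXTURE_3D, c.tex);
              glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, N, N, N,
                              GL_RED_INTEGER, GL_UNSIGNED_BYTE, c.voxels.data());
              c.dirty = false;
          }
      }
      ```

      Tracking a dirty sub-box per chunk instead of a whole-chunk flag would shrink the uploads further.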

  • @gmanster_ster 8 months ago +4

    Very cool. What GPU is it running on? I might have missed that info. Thanks.

    • @theodorlundqvist8174 6 months ago +1

      Thank you. In the video it is running on a 2080 Ti; in the paper we used a MacBook with an M1 chip for the performance measurements. I was not able to record on the M1 due to a large CPU-side overhead, probably coming from Apple having deprecated OpenGL.

  • @TheOneAndOnlySecrest 6 months ago +1

    Interesting technique. I tried something similar, but the biggest issue for me was the missing depth buffer, resulting in pretty bad performance.
    As far as I know, early-Z testing is disabled when you discard pixels in the fragment shader, so you get a bunch of overdraw. Did you resolve this issue?

    • @theodorlundqvist8174 6 months ago +1

      I tried rendering the chunks in order of distance to the camera (sketched below), but I did not notice any significant performance difference.
      Looking at 15 chunks stacked like x-x-x-x-x-x-x-x-x, so that the camera's forward axis intersects them all, is not that much heavier than intersecting just one or two of them from the side. The largest impact came from traversed pixels × average ray depth.
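
      For reference, the distance ordering described above could look like this (a sketch with invented names, assuming GLM for vector math):

      ```cpp
      #include <algorithm>
      #include <vector>
      #include <glm/glm.hpp>

      struct Chunk { glm::vec3 center; /* voxel data, texture handle, ... */ };

      // Draw near chunks first so the depth buffer is populated early and
      // farther chunks' expensive marching fragments can fail the depth test.
      void sort_front_to_back(std::vector<Chunk>& chunks, const glm::vec3& camPos) {
          auto dist2 = [&](const Chunk& c) {
              glm::vec3 d = c.center - camPos;
              return glm::dot(d, d);             // squared distance, no sqrt needed
          };
          std::sort(chunks.begin(), chunks.end(),
                    [&](const Chunk& a, const Chunk& b) { return dist2(a) < dist2(b); });
      }
      ```

      Since the march shader uses discard, early-Z stays disabled regardless of draw order, which is consistent with the ordering making little difference; the dominant cost stays (pixels traversed) × (average steps per ray).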

  • @martin128 7 months ago +1

    Nice job!

  • @EricMartindale 6 months ago +1

    Would love to see more! If there's code, this seems like a good basis for a game I'd like to make. What's it written in?

    • @theodorlundqvist8174 6 months ago

      It is written in C++ and OpenGL. I have linked the code in the description.

  • @MenkoDany 7 months ago +1

    Great project all!

  • @mladengavrilovic8014 26 days ago

    Is there a reason you aren't using ray tracing? It is faster, has more possible optimisations, and might even be easier to implement.

  • @sannfdev 4 months ago

    So tell me if I'm understanding this correctly:
    You're rasterizing triangles to make a cube like you would in "traditional" rendering, but then using a shader, a 3D texture, and I assume a bunch of matrix math to give the illusion of voxels? If that's an apt description, that is truly genius! How does the performance compare to just rasterizing a full-screen quad, as most ray marchers do?

    • @theodorlundqvist8174 3 months ago

      Yes, that is correct: each cube of 6 faces can be seen as containing a 3D grid of blocks. I then trace a ray for each pixel through the grid where the ray intersects the cube (see the sketch below). When a block is hit, its color is drawn on that pixel. Since I run the shader on the inside faces of the cube as well, pixels are still drawn even when the camera is inside the cube, so the illusion holds.
      I did not try drawing everything on a full-screen quad. My thinking was that the rasterizer solves things like frustum culling automatically, so I do not have to search for which chunks to draw.
      However, I do not recommend using 3D textures. Since they are dense data structures, there is a large overhead when uploading uniform models, especially when uploading everything every frame as I did 😂. Also, even though the M1 has the same memory for CPU and GPU, there was a 200 ms overhead sending the 3D textures up. On my desktop it was closer to 5-10 ms. So still quite large.
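
      A hedged reconstruction of that traversal, not the authors' actual shader: a fragment shader run on the cube's faces that steps through the chunk's 3D texture with an Amanatides-Woo style DDA. The uniform names, the r8ui format, and the gray shading are assumptions.

      ```glsl
      #version 330 core

      uniform usampler3D voxels;    // assumed r8ui: 0 = empty, 1..255 = block id
      uniform vec3 camPosLocal;     // camera position in the chunk's [0, N] space
      uniform int  N;               // voxels per chunk side, e.g. 64

      in  vec3 entryPosLocal;       // rasterized position on the cube's face
      out vec4 fragColor;

      void main() {
          // If the camera is inside the cube, start at the camera (the inside
          // faces are rendered, so these fragments still exist); otherwise
          // start where the ray enters the volume.
          bool inside = all(greaterThanEqual(camPosLocal, vec3(0.0))) &&
                        all(lessThanEqual(camPosLocal, vec3(float(N))));
          vec3 origin = inside ? camPosLocal : entryPosLocal;
          vec3 dir = normalize(entryPosLocal - camPosLocal);

          // DDA setup: current cell, step direction, ray distance to the next
          // grid plane per axis, and per-cell distance increments.
          ivec3 cell    = ivec3(clamp(floor(origin), vec3(0.0), vec3(float(N - 1))));
          ivec3 stepDir = ivec3(sign(dir));
          vec3  tMax    = (vec3(cell) + max(sign(dir), 0.0) - origin) / dir;
          vec3  tDelta  = abs(1.0 / dir);   // zero dir components give inf: never chosen

          for (int i = 0; i < 3 * N; ++i) {            // worst-case number of cells
              uint v = texelFetch(voxels, cell, 0).r;
              if (v != 0u) {                           // hit: shade and stop
                  fragColor = vec4(vec3(float(v) / 255.0), 1.0);
                  return;
              }
              // Advance to whichever grid plane the ray crosses first.
              if (tMax.x < tMax.y && tMax.x < tMax.z) { cell.x += stepDir.x; tMax.x += tDelta.x; }
              else if (tMax.y < tMax.z)               { cell.y += stepDir.y; tMax.y += tDelta.y; }
              else                                    { cell.z += stepDir.z; tMax.z += tDelta.z; }
              if (any(lessThan(cell, ivec3(0))) || any(greaterThanEqual(cell, ivec3(N))))
                  break;                               // ray left the chunk without a hit
          }
          discard;   // empty ray: the discard that disables early-Z (see the thread above)
      }
      ```

      The real renderer presumably shades hits with actual block colors; the gray mapping here is just a placeholder.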

  • @kintenkinten 8 months ago +1

    Nicely done! Keep it up! 😊

  • @MitchellTalyat 7 months ago +1

    Amazing work. Any idea when the paper will be published?

    • @theodorlundqvist8174 6 months ago

      Thank you, and sorry for the late reply; I had no idea that the video would get any views. I have added the paper to the description.

  • @Conlexio 8 months ago +3

    Really cool! Were you inspired by Douglas's engine?

    • @delphicdescant 8 months ago +2

      A lot of graphics programmers have been doing "small voxel raymarching" stuff for years. Douglas is just one of the more recent ones. You can find renderers like this that are 15 years old floating around the internet.
      I'm not saying they aren't cool - I made one myself because I thought they were cool - I just wish some particular youtubers didn't give the impression they were inventing novel stuff when they are, in fact, following established techniques.

    • @Conlexio 8 months ago +6

      @delphicdescant I am actually very knowledgeable about this stuff. I was asking because in the video he mentions “parallax raymarching”, which is a proprietary term Douglas created himself.

    • @delphicdescant 7 months ago +1

      @@Conlexio Ok, I see where you're coming from. Addressing that point, though, I think "parallax raymarching" is a term that any graphics programmer doing adjacent work could easily come up with coincidentally. Both words are used so commonly in the literature that it might even be unavoidable.
      Additionally, the algorithm Douglas labels as "parallax raymarching" looks like a pretty natural solution to fall upon for someone else working on a similar renderer.
      I don't mean to be testy. I just get tired of the incessant jockeying for credit among devtubers who maybe aren't quite as noteworthy as their followings have led them to believe.

    • @theodorlundqvist8174 6 months ago

      @delphicdescant @Conlexio
      Hi guys, I have watched Douglas' videos and they are very good. However, I was first inspired by this video about Teardown:
      ruclips.net/video/0VzE8ROwC58/видео.html
      Douglas' videos and many others have provided additional guidance, but I do like trying to implement the techniques without following given code or a guide.

    • @delphicdescant 6 months ago +1

      @@Conlexio I forgot to add: His algorithm is most definitely not "proprietary." He does not own it. He may have had the idea independently, and may have given it a name, but you can't own an idea. You can't own an algorithm.
      Another person may also have the idea independently and give it a different name. It's very likely that another person *has* done so. The vast majority of personal engine projects aren't publicized on youtube. I've never publicized mine, despite mine also containing many ideas I've had independently.
      The only thing that can (or at least *should*) be protected as "proprietary" would be the specific implementation, i.e. his code itself, which he's free to license or to maintain exclusive copyright over.
      The fact that software patents have been granted in the past does not make it ethical. Software should not be patentable. An algorithm is provably the same thing as a mathematical formula, which the law prevents from being patentable. Software patents are ethically wrong, and a legal grey area *at best*, but some companies or people have gotten away with them anyway. This doesn't justify such behavior.
      This discussion is a whole can of worms, and if you're knowledgeable about the field, you should be just as disturbed as I am about the way some youtubers parade about their ideas as if they're demonstrably novel, encouraging their followings to call them geniuses or whatever, without ever having reviewed the literature or done their due diligence.
      Again, you may not be in disagreement with me here, and I don't want to sound hostile - I'm just, uh... passionate about this. I've probably turned a small thing into a big rant.