Adding A CACHE To My Custom VOXEL Game Engine | Devlog #5
- Published: 2 Oct 2024
- If you like the video and want to see more, please give it a thumbs up and share. More devlogs coming every month so subscribe to be notified.
This month I have been working on the voxel caching system for the game engine. I have tested the cache down to 1 MB of RAM, showing that the system works well. This is similar to what GigaVoxels has done; you can check out their paper here: maverick.inria...
This custom C++ engine uses Vulkan to do voxel octree ray casting to render the scene. This allows complex LOD systems to be used to optimise rendering. My goal is to create an open-world game based on voxels at a scale never seen before!
♫ - Music:
"Lukrembo" - "Teapot"
/ teapot
I think your biggest concern for this project is YouTube's compression lol
Yeah lol
@@voxelbee Best way through that is to make sure to encode all capture/output sources (and your final export) at as high a bitrate as you're willing to handle waiting for the video to upload. Since YouTube recompresses your uploads into its various formats (resolution dependent), capture and export at ~40-60 Mbps for 1080p (H.265 gives a better result than H.264 if you can spare the extra CPU overhead)
@@slavkosky also if you upscale the video and upload it, the YouTube compressor will lose less data since there are more pixels to work with
@@woofcaptain8212 That's... partially true. YouTube sets a target bitrate for its VBR compressor based on resolution, so more resolution = higher bitrate ceiling, but that doesn't guarantee quality if the source material is overly compressed to begin with
@@slavkosky so tl;dr: record and render at a really high bit rate (lossless if feasible). Upscale to 4K for the final render if you didn't record in 4K.
Wow, that's one of the most efficient voxel renderers I've ever seen. This is amazing!
I would love to learn how to do such complicated things in compute shaders... and yes... that is a crazy efficient algorithm that I would not have thought would work. You have dumbfounded my 3AM brain...
Thank you.
YouTube's video codec really doesn't like those textures at all.
Neither do I tbh, but it doesn't take away from how slick this tech demo is.
Thanks! I'm going to try some other scenes that might work better for the next time!
@@voxelbee I think rendering to 4K, or even higher, might help. Even if you don't have a _native_ 8K recording, YouTube is going to give the 8K stream a much wider bitrate, which will massively help with this.
You could make an entire detailed, non-voxel-looking game using that amount of voxels. Insane.
The only problem with that would be the manipulation of the voxels. I may be entirely wrong but it seems like a dynamic (on the level of actual geometry moving at 60fps+) world of these voxels would either need to approximate movements and not simulate the motion of all voxels, bake animations, or pretty much just use intersections with mesh geometry for estimates at which point you might as well use normal meshes again. It's brilliant as a voxel simulation but I don't think it's suited for a non-voxel game with any level of dynamic motion.
@@superkingofdeathify Look up atomontage and John Lin's Sandbox.
@@MDGeist-ws2rh both of those take a different approach to the rendering and data management, as far as I understood. Whilst I wouldn't say it's not possible, the engine in this video doesn't lend itself to soft body or armature deformation and I still think it wouldn't work so well for normal dynamic gameworlds.
Would be incredible to be proven an idiot though.
@@superkingofdeathify maybe it doesn't work now, but that doesn't mean we can't make a mixed approach or evolve later to a fully dynamic world with quadrillions of voxels... Just need to keep evolving; maybe we can get to a point so ridiculously high in efficiency that we can make old Windows 98 era hardware run lifelike realtime dynamic voxel worlds...
@@BigDickEnergy777 oh I fully agree, I was only referring to the difficulties in doing so as it is, in reply to the original comment about it being good for use in standard games.
Problem 1 was a buffer overflow. You can talk dirty/tech to us, we'll get it ;)
At least there's no RIP overwrites occurring... haha
Dude, this is sick. I had this idea to store Manhattan distance fields to voxels in 3d textures with a chunk loading system (the distance fields could be made in only a couple ms with separable filters). Traversing that was FAST (>400fps with 4 billion voxels). But seriously, a trillion voxels wtf? I just wonder if both systems combined could be even better?
Thanks man! That's a pretty cool idea, I haven't explored distance fields but that sounds like some good performance! Maybe I should look into it...
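For anyone curious how the separable distance-field trick above works, here is a minimal 2D sketch (hypothetical code, not from either engine): occupied cells start at 0, empty cells at a large value, and one forward+backward sweep per axis produces the exact L1 (Manhattan) distance. A 3D version just adds one more pair of sweeps.

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Separable L1 (Manhattan) distance transform on a small 2D grid.
// Occupied cells = 0, empty = large; relax each axis forward and backward.
std::vector<int> manhattanDT(const std::vector<int>& occ, int w, int h) {
    const int INF = 1 << 20;
    std::vector<int> d(w * h);
    for (int i = 0; i < w * h; ++i) d[i] = occ[i] ? 0 : INF;
    // sweep along x (forward then backward per row)
    for (int y = 0; y < h; ++y) {
        for (int x = 1; x < w; ++x)
            d[y*w+x] = std::min(d[y*w+x], d[y*w+x-1] + 1);
        for (int x = w - 2; x >= 0; --x)
            d[y*w+x] = std::min(d[y*w+x], d[y*w+x+1] + 1);
    }
    // sweep along y (forward then backward per column)
    for (int x = 0; x < w; ++x) {
        for (int y = 1; y < h; ++y)
            d[y*w+x] = std::min(d[y*w+x], d[(y-1)*w+x] + 1);
        for (int y = h - 2; y >= 0; --y)
            d[y*w+x] = std::min(d[y*w+x], d[(y+1)*w+x] + 1);
    }
    return d;
}
```

Because each sweep only reads its immediate neighbour, the cost is linear in the number of cells, which is why the fields can be rebuilt in a couple of milliseconds.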
cant wait for a game that was made with this engine
this is all i look forward to in my life now
Damn, that's impressive - I thought I was being clever when I used slices of alpha textures to represent filled and unfilled voxels - I used to brag about numbers like 8million voxels with only 49k triangles xD - your engine has officially won the voxel race :D
Imagine if Minecraft could handle that many blocks.
If we could get a variation of this engine that allows for rotated and textured voxels, this would be incredible. The next thing is to have lighting, but all of these features lead to a less optimized voxel engine
@C M What?
@@anselp.498 Best thing would be to make it so the features are toggleable and can work independently for max perf
@@anselp.498 Call me an idiot but I don't see why you would need textures in an engine that can render potentially infinite levels of detail. Using view frustum culling, like the engine is already doing, and some refined level of detail optimization, you can easily have a voxel per pixel visible (or more) and achieve perfect resolution. An engine like this doesn't need textures in the traditional sense. But then again, I'm not an expert.
@@gownerjones I guess textures can still be used to save some calculation time for close views, but you're right, textures can probably be done using only those voxels and some defined colors
your devlogs make me want to work with vulkan again, thanks 😌
you are doing a very good job by the way 👌
Thank you so much
I wonder how such a simple and efficient rendering method wasn't found sooner, good work man!
compute and data oriented programming is still a very young area of programming, with largely unexplored capabilities
Wow, this is amazing! The cache system idea is really simple and elegant. One question, could you point out some resources that you used to learn the Vulkan API?
Again, amazing work, looking forward to see more of it!
Thanks so much! I'm gonna post a video about how I learnt all this, but one thing that was useful was vulkan-tutorial.com/, you should check that out!
so are you actually shifting all the memory one voxel over to the "left" when you append the recently used voxel to the "right"?
I can't imagine that you actually do that, it would probably be inefficient right?
Well I'm using a circular buffer so it doesn't actually require shifting the whole array!
I love circular buffers! This is a great use case :)
So if you update the array in oldest to newest order, then the voxels that are rendered this frame will remain in the same order relative to each other, which is great. Exactly what you want.
But it seems like the non-rendered voxels can get shuffled around relative to each other... so then the slots that get overwritten first with new voxels won't necessarily contain the 'oldest' voxels that you would prefer to discard first.
Do you have a way of avoiding this, or do you find it doesn't affect performance too much?
Very interesting video series btw, thanks for posting!
@@voxelbee so I guess that means one voxel is unloaded while you're rendering another voxel. No need for taking some time out to clear unused voxels.
I've just found your videos and I instantly binged them all. I love how informative they are and how you're able to explain such complex algorithms in understandable and simple language. Something I would really like is if you could show small code snippets and go through them explaining what they do. This way I might be able to better understand how an algorithm works in order to use it myself at a later point in time.
Keep up the work, I think (and hope) your channel will blow up in the near future!
Awesome progress! A suggestion I would make is to add some sort of smoothing to the transition that occurs on the edges of the viewport when you move the camera around. A simple 2D mask (like you did with that green effect in previous videos) would already improve it a lot, in my opinion :)
Thank you! :) Yeah I am going to improve the transition at the edges of the camera in later versions!
Just randomly found you, what kind of black magic this is? I'm impressed this is the most optimized voxel engine I've ever seen
Very impressive! I randomly got this recommended, so I haven't watched the previous parts, but if you render the voxels slightly outside of the viewport, you could probably get rid of those edge artifacts without having to increase the cache size a lot.
Thanks! Yeah expanding the view of the camera would probably get rid of a lot of those problems!
this is ingenious!!! keep going, dude!
pausing the cache blew my mind
YouTube compression did this one dirty
These videos are really cool! It's so interesting to see someone making their own game engine among the sea of YouTubers making devlogs for just games. It's really unique and fun to see this and I'm excited to see the full engine someday! I especially loved that huge scale-change zooming-in effect you were working on in another video, there can be some really cool games made with that ability.
Thank you so much!
This is some truly impressive technology. You should definitely write a paper about this.
Thanks! You should check out the papers I used to make this... I've linked one in the description!
I have no idea what is happening, but I can imagine a few cool concepts with this system. Can't wait to see the game design part!
Yo, got recommended by youtube and straight up subscribing! This is amazing!
Wouldn't making the cache as huge as you suggested ("several gigabytes") make things extremely slow? Moving a voxel takes O(N). If you're moving one from, say, the middle of the array (because it hasn't been accessed for a while) it's going to be quite slow with such a massive array. Or am I missing something?
yeah i think that's the weird popping we're seeing
Such talent has to be supported with more subs! I wish you all the best!
I know this is late, so maybe you won't see this.
But I think I have a better system for you that does not constantly require you to shift arrays around.
What you could do is, instead of using an arraylist, use a HashMap (or Dictionary or orderedMap) that uses 2 arrays: a key array and a value array.
And for iteration you use a long[] where the first 32 bits are the previous index and the second 32 bits are the next index, and the array index is self (so you get key, value, next and previous from the same value). That way, if you want to move indexes around you just need to adjust the bits of the long[] instead of array-shifting the arraylist itself. Very little movement and also very fast iteration speed: when adding a new entry you just put it in the first position, and when clearing the overflow you just remove the last indexes, shifting the entire array by changing effectively 3 values (discarded, first and new).
If you want an implementation example: github.com/vigna/fastutil
Yes, it is a different coding language, but it's insanely fast vs an arraylist approach and has very little data modification.
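A rough C++ translation of the packed prev/next idea above (all names invented for illustration): the LRU order lives in an intrusive doubly linked list stored as one uint64_t per slot, so a move-to-front rewrites a handful of links instead of shifting anything.

```cpp
#include <cstdint>
#include <vector>
#include <cassert>

// Sketch only: LRU order kept as an intrusive doubly linked list packed
// into one uint64_t per slot (prev in the high 32 bits, next in the low
// 32 bits). Moving a slot to the front touches at most a few links.
struct PackedLru {
    static constexpr uint32_t NIL = 0xFFFFFFFFu;
    std::vector<uint64_t> links;  // links[i] = (prev << 32) | next
    uint32_t head = NIL, tail = NIL;

    explicit PackedLru(uint32_t n) : links(n, ((uint64_t)NIL << 32) | NIL) {}

    uint32_t prev(uint32_t i) const { return (uint32_t)(links[i] >> 32); }
    uint32_t next(uint32_t i) const { return (uint32_t)links[i]; }
    void setPrev(uint32_t i, uint32_t p) { links[i] = ((uint64_t)p << 32) | next(i); }
    void setNext(uint32_t i, uint32_t n) { links[i] = ((uint64_t)prev(i) << 32) | n; }

    // link slot i at the front (most recently used)
    void pushFront(uint32_t i) {
        setPrev(i, NIL); setNext(i, head);
        if (head != NIL) setPrev(head, i);
        head = i;
        if (tail == NIL) tail = i;
    }
    // unlink slot i from wherever it currently sits
    void unlink(uint32_t i) {
        uint32_t p = prev(i), n = next(i);
        if (p != NIL) setNext(p, n); else head = n;
        if (n != NIL) setPrev(n, p); else tail = p;
    }
    // on access: O(1) move to front, no array shifting
    void touch(uint32_t i) { if (i != head) { unlink(i); pushFront(i); } }
};
```

Eviction is then just "take `tail`", which matches the "remove the last indexes" step in the suggestion.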
Yeah I think this is quite an interesting idea. I am currently working on upgrading the data structures; it's just quite difficult to implement a lot of these complex structures on the GPU. However, this might be of use on the CPU! I'm definitely going to look into this!
...isn't that just a doubly-linked list with extra steps?
@@ZeroPlayerGame not a data structures expert, but I think that linked lists have slow random access. i.e., if you want to look at the 24759th element in a linked list, you might have to look at the first 24758 elements. This proposal could be faster than a linked list.
EDIT: In another comment, voxelbee says he is using a "circular buffer." Not sure exactly what that is but it sounds like it would solve array shifting problems!
Voxelbee is using a ring buffer array.
When inserting a new element you decrement your tail (for example, if the tail was pointing to the 6th element, it now points to the 5th), then you increment the head and insert the new element at the head index.
When you move a recently rendered element from the middle of the ring buffer to the front, you increment the head (if the array is always full, you'll probably have to save the element at the tail index first, since the head now has the index of the tail), place the voxel from the middle into the new head slot, then place the saved tail element into the slot where the rendered voxel was.
This is only 1 array + two values (head index, tail index). I don't think you can make a simpler implementation of a cache than this.
This is a really nice trick :-)
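A toy sketch of the ring-buffer cache being described (a guess at the scheme, not the engine's actual code): `head` marks the most recent slot, inserting advances it and overwrites the oldest entry, and marking a slot as recently used is a swap with the slot `head` will claim next.

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Minimal ring-buffer cache sketch. slots holds voxel ids (-1 = empty);
// head is the index of the most recently inserted entry.
struct RingCache {
    std::vector<int> slots;
    int head = -1;

    explicit RingCache(int n) : slots(n, -1) {}

    // insert a new entry, returning whatever the claimed slot held
    int insert(int id) {
        head = (head + 1) % (int)slots.size();
        int evicted = slots[head];
        slots[head] = id;
        return evicted;
    }
    // mark an existing slot as recently used: swap it into the next
    // head position and advance head, instead of shifting the array
    void touch(int idx) {
        int front = (head + 1) % (int)slots.size();
        std::swap(slots[idx], slots[front]);
        head = front;
    }
};
```

One side effect: the swap shuffles the displaced entry back into the middle, which matches the mild reordering of non-rendered voxels discussed elsewhere in the thread.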
this is very interesting. I'm not into Voxels, but I'm working in some custom/compute shaders and I want to try your cache system
Almost unbelievable ! Your work is incredible, and your determination is inspiring ! Good luck man, I can't wait for your next devlog !
love from the UK!
Represent! 🇬🇧😂
🇺🇦
Hey mate. I am eagerly awaiting the next devlog. This project is just too interesting. Any chances that you will upload in the next couple of days?
Anyways keep it up, much love :)
Thanks so much! Just posted a video about why i'm making this engine! A devlog update will be on the way in a few weeks!
Coincidentally, my gpu l2 physical cache is 1MB
The dream to run a game on the scale of the universe with a potato.
Have you thought about extrapolating camera movement to make pop-in less noticeable? I think you should experiment with that when you get the meat done.
Yeah, that is one of the goals. Once I have the whole caching setup I need to optimise the loading and remove all the popping. Camera interpolation is definitely a good thing to try!
@@Joorin4711 Doesn't account for camera translation though.
@@HAWXLEADER How so? I'm just a pleb, so it'd seem to me that: If the renderer is rendering a 10x10 area in front of the camera, and we are only able to see a 6x6 area in the middle, there would be a 10x2 area around what we can't see that is still being rendered, as a 'buffer zone' for pop-in. The scene is still being rendered outside the viewable area, and since there is a 'buffer zone' the renderer would be able to 'account' for camera translation. Instead of rendering the entire scene or only what's viewable, it'd render only what's viewable and then a little extra. Surely with enough being rendered just outside the camera's view, that'd eliminate pop-in? Like I said, I don't know what I'm talking about. Just going off uninformed experience. Extrapolating camera movement might be superior to something like that, though it would be interesting to see what'd happen 🤷♂️
@@dinkledankle Occlusion would still cause pop in.
Besides, You don't have to render more, just cache more than you see.
Unreal engine has the same issue (when occlusion culling is enabled), so you're in good company ;)
Now expand what's being loaded beyond what is visible, so that as you pan around you are less likely to see the pop-in.
I have no idea how these things work, but it seems easy to do simply by increasing the angle used in the GPU that it assumes the camera is viewing. So if you have a FOV of 90º, the GPU could render the equivalent of, say, 110º even though 20º aren't actually being seen on-screen. Of course, you're relying on the person controlling the camera not doing a full 180º turn, but it seems like it would work rather well at the cost of increasing what gets rendered.
@@carlosmspk Scaling the view frustum won't work. As you can see in the video, occlusions also cause pop-ins.
@@TheFlynCow hah, damn, forgot about those
This is a great suggestion. This is especially efficient if you can anticipate the movement of the camera given the current angular velocity.
This is amazing! keep it up man, this is going to be awesome.
I get that this test scene is great for performance testing but it'd be nice to have a test scene which is:
1) better for youtube compression
2) lower resolution voxel colors are the computed average color of the higher resolution voxels
You'll want some sort of offline compute method for generating the procedurally lower res voxels with the average color so that it's not as jarring when the voxels on the edge of the screen get 'seen' and suddenly their average color changes which can be distracting.
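The offline averaging step being suggested could look something like this (a hypothetical helper, names invented): each coarser-LOD voxel gets the mean colour of its solid children, so distant geometry converges toward the colour you would see up close instead of popping when detail streams in.

```cpp
#include <cstdint>
#include <cassert>

// Average the colours of up to eight solid children into one parent
// colour. Colours are packed as 0xRRGGBB; empty nodes return 0.
uint32_t averageChildColor(const uint32_t child[8], const bool solid[8]) {
    uint32_t r = 0, g = 0, b = 0, n = 0;
    for (int i = 0; i < 8; ++i) {
        if (!solid[i]) continue;
        r += (child[i] >> 16) & 0xFF;  // accumulate each channel
        g += (child[i] >> 8) & 0xFF;
        b += child[i] & 0xFF;
        ++n;
    }
    if (n == 0) return 0;  // no solid children: nothing to show
    return ((r / n) << 16) | ((g / n) << 8) | (b / n);
}
```

Running this bottom-up over the octree once at generation time gives every LOD level a colour consistent with the level below it.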
Yeah I agree that the scene right now isn't the best for the compression! And the colors are part of generation code that I will look at soon...
But it's definitely time for an updated scene!
Random pedantic side note, normally Mb is megabit, MB is megabyte.
I’ll be “borrowing” this idea mate thanks!
RIP YouTube compression.
this looks promising
Nice job! If the textures are not so noisy, YouTube can deliver better video quality, so we can see what you see!
This is going to get even more interesting once you add cellular automata rules that evolve on their own. The best way to introduce a time component to this system is at the voxel level directly in the octree, represented by a one dimensional array in memory. Fractals are a good step in that direction, and from there it's easy to imagine all the different emergent phenomena that you will be able to observe at universe-scale with this game engine.
Looks neat, wondering if you can shrink the performance overhead of walking the array every time you add to the cache. Maybe a circular buffer? That has its own problems though. Maybe you walk the array as you search, that way you already have space up front for the new object. I would be tempted to cache chunks and store them at fixed points in memory, relative to camera location/orientation, and just load the whole chunk when you need a voxel within it, that way you don't have to walk so many tiny voxels up and down the array.
Thanks!
It is actually implemented as a circular buffer so there isn't much array shifting going on within the array itself. I am looking into storing chunks of voxels too; that might be useful!
@@voxelbee How do you perform that "move to front" operation on a circular buffer? Swapping with the last element and incrementing the head would suddenly move the formerly last element further up the buffer (as in, the former position of the element being moved to the front). Is there another way, or is this not a problem?
Super cool, instead design outside, design inside
Don't you have to shift everything along the array for this to work? Isn't that inefficient?
Yes, it's actually a circular buffer so not much shifting is needed
@@voxelbee oh I see. You just move some pointers. Thanks
This will prolly get buried, but you can speed up the cache by using a balanced binary search tree augmented with order statistics. Currently moving a voxel from somewhere in the middle of the cache to the beginning when it is rendered should take O(N) time, you can turn that O(N) into a O(log N) by turning the cache array into a cache treap.
Yeah, that is an interesting suggestion, but algorithms like this are difficult to run in parallel on the GPU. Maybe I should look into that a bit more...
These voxels are way smaller than the size game design usually deems insignificant, which is a good sign: it means that in a game environment the player would be able to see very far away.
You know those planet and star size comparison videos, you could do that in real time with your voxel engine.
What is this dark magic?
Very cool... I was thinking it would be interesting if at the smallest level color was dependent on structure (analogous to how a butterfly's wing gets its color), so you only need to record solid or non-solid but the number of solid faces adjoining influences the hue. That might save some memory and give a unique mesmerizing view. It would be necessary to count adjoining faces on at least 2 dimensions or cells in at least 3 dimensions and then combine the colors. Using primes for RGB could work so 2 is blue, 3 is green, and 5 is red, while leaving 7 for clear and 1 (and higher primes like 11 and 13) for black. By this logic a 6x5 flat face would be white because it can be decomposed into the first 3 primes, but so can 2x15 and 3x10 faces. The highest practical co-prime would be 35 (5x7) so I wouldn't test for color above 64x64 for processing, so just default to grey above that. It might even be possible to make this process procedural for recursion.
Wow! That's a very cool idea I like it! I'm going to be exploring different generation methods to get procedural views and this sounds like a cool idea to try...
@@voxelbee Awesome! I feel I should mention a slight variation which would probably be easier to implement, more controllable, and be computationally less expensive... Test for the thickness/depth of a face as that would be just a 1D query (at the cost of color space)... As an example, and using the previously mentioned primes to colors key, imagine a 4x3x5 block. The 3x4 faces are red (because depth is 5), the 4x5 faces are green because (because depth is 3), and the 3x5 faces are blue (because depth is 2x2). Now imagine the same block but hollow with a wall thickness of 1. Around the edges the colors would be identical, but the middle area of all faces would be black. It could kind of reveal internal structure of things but if cleverly used could be used to introduce a small palette of arbitrary colors to a surface artistically... Best of luck with whatever you choose to do! (also, for the record, I claim no ownership for these ideas)
I can't stand compression...
I c$'t st$d compression
I c$'t st$d compre#ion
imagine a collaboration between voxel devs on YT
Do you think you would be able to create a 3D cellular automata in this voxel game ... or would that be too complex? there are a lot of different rules and it would be interesting to see it running even if it's just confined to some bounding box. Great work
I'm very interested in looking into cellular automata for the engine. I think it's doable!
@@voxelbee This also reminded me of transitive percolation in Percolation theory, there may be a way to encode information into the index of each voxel and in doing so may give you a further memory advantage
This is incredible to see how smooth it runs ! Good Luck man !
This is some really cool tech you got going on there! I recently got in to gpu compute stuff and now I kinda wanna copy your idea. ^^
Got 3 questions for you.
1. Did you take any concepts from other voxel tech like OpenVDB or is your spatial tree completely purpose built concept?
2. Have you thought of how much versatility you want for this engine? For example, do you plan to support many volumes to be rendered at once in order to create a sort of a volume as object concept that many other voxel engines do (eg Teardown) and how would you approach things like bullet physics for such complex structures?
3. Are you concerned about the performance of the rendering as you do not have the convenience to rasterize almost everything then spice it up with fancy methods like raytracing?
Thanks for watching!
My tree is a pointer based octree I have customized the data structure for my needs but it's similar to what Gigavoxels have done.
I think I need to experiment with having multiple volumes being rendered it's definitely something i'm exploring!
Performance is one of my main concerns but at the moment it is quite performant. Just as more things are added they need to be optimised to work well
@@voxelbee OOh nice! I was looking in to gigavoxel before but never got around to really exploring how it is doing things as I got way too distracted with openvdb (and now nanovdb)
I asked about performance because I fear your current building blocks might not function as intended and it might be worth planning in advance for things like this.
For example if you wish to create shadows from a sun or point lamp then you will put much much more strain on your "recent voxel accessed" cache system. (what is incredibly cool btw)
Or if you wish to embed extra data in to the volume (such as material index or on the fly baked values like AO, GI etc...) then it will need to be able to work with less voxels on the same memory budget and even more reads and writes.
It would suck to see a repeat of what I was doing in my game engines. What I would do is think of a concept and really polish it to what I'm using at the moment. Then later I'd find myself needing to heavily modify it or completely rewrite it, as I was way too optimistic when thinking about the scope it would be used in.
Best real example would probably be:
At one point I had this really cool system of optimizing mesh data by generating many vertex indices based on how close you were to the model (almost like a way-ahead-of-its-time version of Unreal Nanite). It ended up being a HUGE hog on my code base and was barely making things faster, because I never considered that I would use it on models bigger than 200 verts. (Sounds silly but I barely knew modeling at the time.) The worst part was that this was so entangled in the engine that I could not even disable it easily.
@@lapissea1190 Wow! That so cool you made a system like that!
But yeah, I think the best way to fix this is to move onto other parts and keep going so I don't spend too long polishing one part. But it's really fun to polish one part...
I can choose when a voxel should be marked as accessed, so I think the system should be able to scale to very large numbers of accesses pretty well! But I agree that I need to move forward so that I can actually find out if it works as I want... Don't want to speak too soon and then have it not work...
Thanks so much for the feedback too! :)
Can't you use an LRU tree? Although great work nonetheless!
Nice! But after you're done with rendering and try to go further, you'll find that drawing a lot of voxels is just the start of a game :) Good luck on your voxel journey!
Thank you! Yeah there is a lot of work to go for sure!
How small do you think the voxels will be in the game?
I'm aiming for around 10cm at the moment.
Oh yeah, YouTube set the resolution to 420p automatically
This reminds me of the Euclideon engine from Australia.
That's INCREDIBLE! Big Brain!
I have a million of questions, your algorithm is genius! Never went out of my way to learn Vulkan, but you show here what power it enables. Instantly subscribed, looking forward for more, wish you good thoughts and success. You're inspiring! Thank you!
Amazing!! You should consider making a discord for your project! I would totally join that :)
Thanks man! For sure, I'm looking into it, hopefully ready for next month!
Holy shit this is amazing! Thanks for sharing
Very interesting. I think the reason this works is because the cache is small, otherwise if you don't turn around you would get voxels at the center of your cache that don't get rendered for a long time and also don't get moved. Since the cache is small most of the cache is probably used rendering what is on screen.
If you add a small bit of momentum to the player's movement you can use predictive rendering. Any direction change will take a split second as the character has to slow / change direction and this time can be used to start rendering what is expected to be in the new heading. Ie. when player pushes left button the engine can start rendering what is to the left but the camera doesn't start moving left right away.
This method was used a lot back in the day and works for most games except FPS where it can feel like input lag. In a physics based world it is normal though to have momentum and as long as it is kept small this can buy you a few ms as a buffer.
Thanks!
Camera interpolation and other techniques to get rid of the loading is something I am going to explore...
Moving (shifting) items in an array seems slow. How did you do it?
What's really going on is more of a circular buffer so the front index moves most frequently. However, voxels do get swapped to the front of the array which is where the circular index currently is. Then you increment that index. That means we don't have to rearrange every element in the array as often making it fast.
@@voxelbee Interesting!
@@voxelbee Was just about to ask if you used some sort of ring buffer for this :D
With how CG is made, I had never considered something like this being remotely close to possible on current hardware; picturing a trillion tris seems like such an absurd joke in comparison to this lol! I am super intrigued, you're on the road to something amazing here! One question out of curiosity though: I've never seen voxel engines deal with differently sized, varied voxels, only scaled & homogeneous ones. Why is that?
Thanks! Well, to make the ray casting more efficient, having homogeneous voxels is very helpful
Can't wait to see more!
@bluedrake needs to check this out would be cool for your future in game dev
I wish lots of games could actually use that
Will this be open source at any point? I want to create a performant voxel engine for standalone vr (like oculus quest), and this looks exactly like what I was planning on doing.
At the moment I haven't decided! But probably not anytime soon once I've done a bit more...
I'd like to see this implemented in a game similar to Teardown where you have a world made of many very small voxels. This is a really cool project!
Have you got in touch with Miguel at Voxel Farm? I'm sure the two of you could revolutionize the voxel industry!
I've seen some of his videos very cool stuff!
Thought about using 2 buffers for cache and cycling between them? You can put voxels in one and then clear the other. It might be less memory efficient, but maybe you can find an even better approach. I just feel like moving all the other voxels in your cache might be kinda expensive? Or do you use some sort of linked queue?
Yeahhh, the voxels only get moved to the front when they were about to be removed from the cache. So it doesn't actually move that many voxels around in memory
Super impressive work, and (If I read the log right) this is running smoothly on a 555X? Jeez you'll have some headroom for some wicked stuff.
Looking forward to how physics will be handled.
Thanks so much! Yeah it is currently running on a 555X! haha
me thinking my 1 million 'voxels' is good:
That is brilliant. Now you've got my gears turning on similar caching systems for my projects.
Question (I don’t know anything about game design or coding): would building a game out of this eliminate the need for textures? Does assigning a color to a voxel count as texture? How does any of that affect the memory usage?
Yeah that could be the case if the voxels are small enough!
how is it that 1 trillion voxels are rendered with 1 MB of RAM if 1 MB is 1 million bytes? are you compressing them to the point that you can store 1000 voxels per byte?
i was also confused by this! (I assume the algorithm unpacks them following some rule, idk)
for the love of christ change the textures.
Would be cool to see what would be the best quality your engine could generate/render point cloud models. (instead of points we would have voxels obviously)
But is it actually an array? Because an array is a fixed size, and to shift unused elements back you'd have to search for them, and if you want to add a new element you'd need to create an entirely new array of the wanted size and fill it up with all the elements you like.
Yeah, it's actually a circular buffer so that fixes the problem with shifting
I hope you try making a Delta Force-like demo... would be nice to see a modern take on modern hardware
How do you handle moving all the entries in the array? Say you're moving, e.g., pos. 200 to pos. 1: you'd need to move 199 entries by one spot, which normal arrays don't support cheaply?
Yeah well I'm actually using a circular buffer to move things to the front of the array!
@@voxelbee Ah interesting, so that handles the replacement of the last element for you. I guess that still leaves moving the rest of the buffer by hand? Moving the start pointer alone would break the ordering by most recently rendered
@@Joorin4711 Ah yes, makes perfect sense, but that would also mean that in some cases (e.g. A is, let's say, in the 3rd-to-last position) it would be copied to the new start and then be in the cache twice?
Can you make those blocks store a frame and just make the engine pull up those boxy frames in the correct order? Is that a good idea?
I'm going to be working on removing the loading on the edges of the screen over the next month or so...
Have you considered using a ring buffer for your cache? It might help you reduce churn in your array.
Yeah it is actually using a ring buffer right now because as you said it reduces churn in the array!
@@voxelbee cool, I'm glad I could guess that 😁
What is the size of an individual octree node in your implementation? And what information exactly does that node contain?
At the moment the nodes hold 8 voxels, so they take about 64 bytes, storing material IDs, child pointers and a parent pointer!
Kind reminder: Fractals
1 MB, damn man, that's crazy!
so many comments talking about the array shifting. I can't help but think they've only read a few comments.
cool as hell
Can the different-colored voxels be arranged to build procedural textures?
Yeah you can do that!
00:43 Narration = 1 Megabyte, Text on screen = 1 Megabit.
Yeah I just noticed!
This is actually very impressive.
man.. sick work. good job.
HOLD ON... that's the same UI the Flight Simulator SDK uses!
Haha they must use the same API!
0:43 says 1MByte, shows 1Mbit on screen. 0/10 horrible video /s
On a serious note, I'm not nearly smart enough to understand everything, so I don't know if this is a stupid question.
But whatever: do you think Resizable BAR (i.e. the CPU being able to access all of the GPU's memory at once) would have any noticeable effect on performance with your engine?
Yeah I got a bit mixed up there! I did mean 1 MByte
And I think that Resizable BAR should have a pretty big impact on my engine, as it would speed up the cache. But I'll have to get a new GPU to try it out!
@@voxelbee honestly that would be a first. AFAIK no existing engine is optimized to make use of Resizable BAR as it's pretty new and barely anyone has a GPU that supports it.
so it might be interesting to look into and see how much extra power you can get out of it, for the few people that got a compatible card.
this is Minecraft 2?
I see you are running at 30-60 FPS, what resolution are you rendering at? I don't see it on your GUI. You seem to lack shading/shadows in this example. How much of an FPS hit do you take with shadow-tracing a single light source? Would shadows & reflections mess up your caching system? What kind of overhead is there to navigating the oct-tree? Have you considered a 64-tree?
It's rendering just slightly under 1080p. Shadows and lighting would have an impact on FPS for sure but they wouldn't affect the caching system. I have a lot of optimizations to do with the ray-tracer at the moment!
@@voxelbee So 1024x768? And what if a light source is behind the camera? Voxels that are not in the view frustum should still cast shadows. Are you not worried that with reflections & shadows your algo will end up trying to cache everything?