I made a Compression Algorithm for Heightmap Terrain

  • Published: 21 Jul 2024
  • This video shows the general algorithm for compressing a heightmap image using a quadtree and quantization.
    The compression is further optimized with a linear regression model fitted in three dimensions by the least-squares method; in effect, it is delta encoding combined with 3D linear regression (a minimal sketch of the plane fit follows the chapter list).
    Video and image resources used in this video:
    Quadtree collision calculation demo:
    • 2D Collisions with Qua...
    QuadTree video series The Coding Train channel:
    • Coding Challenge #98.1...
    QuadTree image compression paper:
    / quadtrees-for-image-pr...
    Patreon:
    patreon.com/mohsenzare?...
    Github repo:
    github.com/mohsenph69/Godot-M...
    ---- Chapters
    00:00 - intro
    02:02 - quantization
    04:14 - quadtree
    08:43 - storing data
    11:07 - delta encoding
    12:25 - linear regression
    14:48 - compressing more
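
A minimal sketch of the "delta encoding with linear regression in 3 dimensions" idea described above: fit a plane z = a*x + b*y + c to a block of samples with least squares and quantize only the residuals. This is illustrative Python, not the plugin's actual code; the block size, quantization step and function names are assumptions.

```python
import numpy as np

def fit_plane(block: np.ndarray):
    """Least-squares fit of z = a*x + b*y + c over one block; returns (a, b, c)."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return coeffs

def encode_block(block: np.ndarray, step: float = 0.1):
    """Store plane coefficients plus quantized residuals (the delta-encoding step)."""
    a, b, c = fit_plane(block)
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    residual = block - (a * xs + b * ys + c)   # deltas relative to the fitted plane
    return (a, b, c), np.round(residual / step).astype(np.int16)

def decode_block(coeffs, q: np.ndarray, step: float = 0.1) -> np.ndarray:
    a, b, c = coeffs
    ys, xs = np.mgrid[0:q.shape[0], 0:q.shape[1]]
    return a * xs + b * ys + c + q * step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = 5.0 + 0.3 * np.arange(16)[None, :] + rng.normal(0, 0.05, (16, 16))
    coeffs, q = encode_block(block)
    print("max error:", np.abs(decode_block(coeffs, q) - block).max())  # bounded by step/2
```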

Comments • 265

  • @mohsenzare2511
    @mohsenzare2511  5 месяцев назад +53

    In this video I compare this to PNG: ruclips.net/video/ODYxPh5Cca4/видео.html

    • @godofdream9112
      @godofdream9112 5 месяцев назад +1

      Sir, is there an updated terrain_plugins tutorial?

    • @Siromakha
      @Siromakha 5 месяцев назад +1

      How can I generate grass data from a BW image?
      I have a BW PNG image that is the same size as the heightmap and I want to generate the grass file from it. How can I do that?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      I will man

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +2

      @Siromakha You can set grass with the set_pixel function; you must save the grass as a .res file and then set the grass with set_grass_pixel. After setting them you also need to save the grass .res again. I will try to make a tutorial about that, but one section of the alien planet demo explains this.

    • @tasahik
      @tasahik 5 месяцев назад

      You can use 2d splines

  • @JDoucette
    @JDoucette 5 месяцев назад +256

    Amazing work, and great ideas! A few more optimization ideas:
    1. Don't use double or even float. 32-bit float uses only 23 bits of the 32-bits to store the value; the rest is the exponent (size/magnitude/scale) and +/- sign. You already have a scaling factor min..max for other "types"; do the same here. There is no point in repeatedly storing the scale for each number. Floats are general purpose; when you know what you're doing, you don't need them. (P.S. errata: if your quantization step is 0.5, then 0..255 maps to 0..127.5, not 254)
    2. Your ranges of data type do not need to be restricted to 4-byte int, 2-byte int, 1-byte, 4-bit, 2-bit. You can also include 3-byte, and 1-bit. In fact, you can include any-bit... you can have a stream of bits, instead of a stream of bytes: 3-bit numbers, stored in bytes: [010 110 01] [1 011 001 1] [01 011 110] etc.
    3. Let's attempt to break your claim "it cannot be more optimized than this": A. The more options you have for compressing small blocks (flat, linear, curved) would allow for more compression. E.g. Linear could be handled vertically, or diagonally, and possibly be smaller. B. The quad tree could be divided into 3x3 or 3x2 sections to allow repositioning to better capture a flat next to a hill, where 2x2 may always cut it down the line. Etc. This would require depth testing, like a search, back to the tree, to select the best one. Sort of like how ZIP/RAR on max compression takes longer since it looks for more options.

    • @MrCine4d
      @MrCine4d 5 месяцев назад +6

      Everything sounds fine except for the #2

    • @Troloze
      @Troloze 5 месяцев назад +13

      @@MrCine4d It is weird, but it should work, especially when dealing with a big chunk. You might have a few leftover bits that won't be used. For example, suppose we have 32x32 points in our chunk: at 4 bits each that is 512 bytes, and at 2 bits it is 256 bytes. But say you want to use less data than 512 bytes with a bit more precision than 2 bits: you could use 3 bits, which takes 384 bytes, at the cost of extra compression and decompression time, since you'll need to break bytes apart to reconstruct the 3-bit values, e.g. [001 101 01][1 010 101 1][11 011 000] - you need some extra time to deal with values that straddle byte boundaries. A way to minimize this is to work with larger data types, so the straddling occurs less frequently. (See the bit-packing sketch after this thread.)

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +49

      Thanks for your suggestions, you are right about that!
      About float: a block of the tree rarely uses float encoding; 99.99% of the time a 2-byte integer is enough for storing each block. It is just there in case there is some strange terrain!
      About adding other encoding types you are also right, especially considering that there is a huge gap between the 2-byte and 1-byte integers: a 2-byte integer can hold up to 65535 but a 1-byte integer only up to 255. And I noticed that if I can use a stream of bits this can be compressed a lot further, as you said in your comment!
      Up to this point I tried to keep everything in powers of two, which makes everything easier!
      This is the first step, and obviously there is a lot of room for improvement in the future

    • @MrCine4d
      @MrCine4d 5 месяцев назад +6

      @@mohsenzare2511 You are right, keeping things in powers of two might be better and more important than further compressing with a bit stream. I can see a few ways to more or less improve the performance with SIMD, threading or keeping the data more memory friendly, but a bit stream might disallow them.

    • @JDoucette
      @JDoucette 5 месяцев назад +30

      I believe we are talking about best compression with persisted storage, thus not worrying about decompression speed. Power of 2 is easier and simpler; a great first step, and bit streams will give you the next largest, and quickest to implement, improvement.
      Real-time access of compressed data is another conversation: data access patterns and caching results will be king. For massive maps, compression and real-time access are equally important. Bit streams are great here; I have used them to achieve both, as accessing bit streams requires only a few operations. CPU integer operations are fast (0.5 clock cycles for an ADD), whereas memory access is not (400 clock cycles for a read). Compressed data means fewer bytes, more stuffed into cache lines.
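
A minimal sketch of the arbitrary-bit-width packing discussed in this thread (illustrative Python; the MSB-first layout and function names are my assumptions, not the plugin's code):

```python
def pack_bits(values, bits: int) -> bytes:
    """Pack small unsigned integers into a byte stream using an arbitrary bit width."""
    out = bytearray()
    acc = 0          # bit accumulator
    n = 0            # number of pending bits in the accumulator
    for v in values:
        acc = (acc << bits) | (v & ((1 << bits) - 1))
        n += bits
        while n >= 8:                 # flush whole bytes, MSB first
            n -= 8
            out.append((acc >> n) & 0xFF)
    if n:                             # pad the final partial byte with zeros
        out.append((acc << (8 - n)) & 0xFF)
    return bytes(out)

def unpack_bits(data: bytes, bits: int, count: int):
    acc = 0
    n = 0
    values = []
    for byte in data:
        acc = (acc << 8) | byte
        n += 8
        while n >= bits and len(values) < count:
            n -= bits
            values.append((acc >> n) & ((1 << bits) - 1))
    return values

# 1024 samples at 3 bits each -> 384 bytes instead of 512 (4-bit) or 1024 (1-byte).
samples = [i % 8 for i in range(1024)]
packed = pack_bits(samples, 3)
assert len(packed) == 384
assert unpack_bits(packed, 3, 1024) == samples
```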

  • @askeladden450
    @askeladden450 5 месяцев назад +220

    Bro is casually making a terrain system that's a thousand times better than what Unity (a multi-billion-dollar company) has ever come up with.

    • @delphicdescant
      @delphicdescant 5 месяцев назад +76

      to be fair, Unity has never innovated a single thing in its entire existence, instead just existing as a derivative parasitic mass that draws inexperienced devs into its trap of an ecosystem.
      So really not all that surprising. Lots of indie devs have outprogrammed Unity with simple projects.

    • @timmygilbert4102
      @timmygilbert4102 5 месяцев назад +10

      @@delphicdescant nah bro, Unity used to innovate until the fire nati... I mean Riccitello burned it down

    • @askeladden450
      @askeladden450 5 месяцев назад +13

      @@delphicdescant yeah, I remember having to program my own terrain system, which wasn't anything special (just a regular ROAM terrain), but it was still a lot better than the garbage that Unity has.

    • @AFE-GmdG
      @AFE-GmdG 5 месяцев назад +12

      @askeladden450 This isn't a complete terrain system, it's "only" a very good way to compress height data. On a more sophisticated data model you need additional things which are stored in traditional terrain data like texture painting data to store multiple texture indices, density data for foliage and so on. Each of these data may be compressed in a different way to get the maximum compression rate.
      Nevertheless, this is a really good algorithm!

    • @askeladden450
      @askeladden450 5 месяцев назад +4

      @@AFE-GmdG yeah, but i'm talking about the entire project of which this video is only a part of.

  • @saeedserwan
    @saeedserwan 5 месяцев назад +47

    You're a brilliant programmer, sir! I really hope your hard work pays back

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +8

      Thank you very much!

    • @st20332
      @st20332 5 месяцев назад +3

      It's already paying back, look at the results of his work! THAT is the pay of this type of work, the result :) Though yeah, I hope he also gets a high-paying career from his work; finances are a good pay for this type of work too lol

    • @nagashiokimura994
      @nagashiokimura994 5 месяцев назад

      ​@@mohsenzare2511 If i had the money, i definely would pay you for this. Please continue :))

  • @D0Ct0Rran
    @D0Ct0Rran 5 месяцев назад +32

    Consider using the FFT (fast Fourier transform). What you did is actually somewhat similar to Fourier, but with only one low-pass with a linear spline (or biquad). Also, due to the continuity of a landscape, consider using T-elements in the basis (add points to quads adjacent to subdivided ones); it will make the differences smaller. (See the DCT sketch after this thread.)

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +3

      This could also be done; it needs to be tested

    • @MrMediator24
      @MrMediator24 5 месяцев назад +8

      I also thought of that, but in this case the DCT would be more applicable; it's what JPEG uses

    • @D0Ct0Rran
      @D0Ct0Rran 5 месяцев назад +2

      @@MrMediator24 Agreed; also maybe wavelets (which can be used to enforce locality). All give the same level of computational complexity and similar compression; the actual result depends on the data representation, required accuracy and desired code complexity.

    • @rbarghouti
      @rbarghouti 4 месяца назад

      Isn't that basically how jpg works?
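
A rough sketch of the block-DCT idea mentioned in this thread (the transform JPEG uses), applied to a single heightmap block. Illustrative Python using SciPy; the keep ratio, block size and names are arbitrary choices, not anything from the video.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress_block(block: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Return the block's DCT coefficients with all but the largest ones zeroed (lossy)."""
    coeffs = dctn(block, norm="ortho")
    flat = np.abs(coeffs).ravel()
    cutoff = np.sort(flat)[int((1 - keep_ratio) * flat.size)]  # magnitude threshold
    return np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)     # store only the non-zeros in practice

def dct_decompress_block(sparse: np.ndarray) -> np.ndarray:
    return idctn(sparse, norm="ortho")

rng = np.random.default_rng(1)
ys, xs = np.mgrid[0:64, 0:64]
block = 10 * np.sin(xs / 9.0) + 0.2 * ys + rng.normal(0, 0.05, (64, 64))
recon = dct_decompress_block(dct_compress_block(block, keep_ratio=0.1))
print("RMS error:", np.sqrt(np.mean((recon - block) ** 2)))
```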

  • @f1f1s
    @f1f1s 5 месяцев назад +2

    The linear regression made a surprising appearance! As you know, there are regression-based ‘random forest’ methods that sub-divide the space into partially linear fits where each break-point minimises the prediction error. You might want to use some of these random-forest-based techniques to come up with a set of planes yielding the desired accuracy without relying on the 2×2 sub-division (the cut points are arbitrary). In the second example where your terrain consisted of two surfaces, this approach would have yielded a ridiculously small file size because of how well that surface can be approximated with two planes.

  • @openlink9958
    @openlink9958 5 месяцев назад +9

    If this doesn't reach a million views I'm going to be surprised. I was certain this was going to be one of those videos where a groundbreaking coding project gets recommended to most people years after the author made it and is no longer active.

  • @Barteks2x
    @Barteks2x 5 месяцев назад +4

    I had a case where I had an integer heightmap for some terrain with a known value range, but I needed to store the exact values, no rounding or approximation, and wanted it compressed. Regular compression algorithms were somewhat unsatisfying as there wasn't much that repeated directly.
    The way I approached it was that I took all of the most significant bits first, all together as a bit array. Then the next bit, and so on, and then compressed that with gzip. The result was shockingly good compression and I'm yet to see anything lossless that beats it by any significant margin.
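
An illustrative sketch of the bit-plane approach described above (my code and names, not the commenter's): emit the most significant bit of every sample, then the next bit, and so on, then gzip the concatenated planes.

```python
import gzip
import numpy as np

def bitplane_compress(heights: np.ndarray, bits: int = 16) -> bytes:
    values = heights.astype(np.uint16).ravel()
    planes = []
    for b in range(bits - 1, -1, -1):            # MSB plane first
        plane = (values >> b) & 1
        planes.append(np.packbits(plane))        # 8 samples per byte
    return gzip.compress(np.concatenate(planes).tobytes())

def bitplane_decompress(blob: bytes, count: int, bits: int = 16) -> np.ndarray:
    raw = np.frombuffer(gzip.decompress(blob), dtype=np.uint8)
    plane_len = (count + 7) // 8
    values = np.zeros(count, dtype=np.uint32)
    for i, b in enumerate(range(bits - 1, -1, -1)):
        plane = np.unpackbits(raw[i * plane_len:(i + 1) * plane_len])[:count]
        values |= plane.astype(np.uint32) << b
    return values.astype(np.uint16)

heights = (np.random.default_rng(2).random(256 * 256) * 4000).astype(np.uint16)
blob = bitplane_compress(heights)
assert np.array_equal(bitplane_decompress(blob, heights.size), heights)  # lossless round trip
```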

  • @giggio1747
    @giggio1747 5 месяцев назад +11

    Wow! This is very clever and well explained. Thanks for bringing awesome tools and knowledge to the community!

  • @FROZENbender
    @FROZENbender 5 месяцев назад

    reading the title I thought of a few methods but nothing of the scale presented in this video. this is brilliant!

  • @ristopaasivirta9770
    @ristopaasivirta9770 5 месяцев назад +5

    Awesome video!
    I'm amazed there hasn't been much research done into this area, or that it has been kept private by the studios that have done it.
    Thank you for sharing your hard work and I will look forward to your advancements.
    I'll come drop a donation help you pay the bills.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +2

      Thanks man :)

    • @ratchet1freak
      @ratchet1freak 5 месяцев назад +1

      Anything that would work on high-dynamic-range grayscale images would work for heightmaps, though.
      In fact I'm halfway sure that if you juggle the bits of each heightmap pixel into four 1-byte channels, so that the 4 most significant bits are the significant bits of each channel, and then use PNG, you can already get a very good result.

  • @diguifi0fficial
    @diguifi0fficial 5 месяцев назад +8

    thanks for contributing to open source my friend

  • @2dozen22s
    @2dozen22s 5 месяцев назад +2

    This is absolutely fantastic and I really hope more people take note of it!

  • @harveyhans
    @harveyhans 5 месяцев назад +1

    holy, this is so technical but the way you broke all of it down makes it easier to digest. well done!

  • @n8style
    @n8style 5 месяцев назад

    Incredible work, great to see, loved the video!

  • @darrennew8211
    @darrennew8211 5 месяцев назад +3

    I love videos like this that explain the algorithms.

  • @colly6022
    @colly6022 5 месяцев назад +2

    this is very similar to the method i came up with for mesh compression! you split the mesh per an octree until the geometry inside each leaf is within some tolerance to geometry as constructed from 8-bit local space coordinates. i.e., each component of a vertex in an octree leaf is stored as a single byte (3 bytes total). the higher detail a part of a mesh is, the deeper the octree will need to go before re-representing the vertices with single-byte components will yield a mesh "close enough" (within tolerance) of the original mesh.

  • @Neal_White_III
    @Neal_White_III 5 месяцев назад +1

    I must say: I am impressed! I've been a software engineer for decades (and I've seen a lot of technology demos), but few were as impressive as this.
    Here's an idea for a future video: Before rendering the terrain mesh, subdivide the squares (triangle pairs) into a more detailed mesh, and tweak that height map mesh using a roughness setting (perhaps stored as quadtree per-leaf parameter). Near terrain should have the most subdivision and distant terrain could skip this step entirely. When tweaking the subdivided heightmap, I'd suggest using the xyz world coordinates as the seed to a noise function (so that the data can be easily recreated, not stored). The actual perturbation will usually be slight, otherwise it will look unnatural, but it would be interesting to see a high delta area based on white noise.
    This is the first video of yours I've seen. I'm now going to watch some more of them. :-)

  • @NoctorOnwood
    @NoctorOnwood 5 месяцев назад +4

    The algorithm is so powerful, you really expanded the boundaries of the Godot engine!

  • @zisumevoli96
    @zisumevoli96 5 месяцев назад +1

    i had this idea a while back and ive been waiting for someone to code it up, thanks!

  • @rodrigopintos2251
    @rodrigopintos2251 5 месяцев назад +1

    Amazing work! Looking forward for more Quality videos

  • @RossUnger
    @RossUnger 5 месяцев назад +11

    This is amazing. I really want to see this as a c++ gdextension.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +3

      Thanks man, I will publish that a near future

  • @NeZversSounds
    @NeZversSounds 5 месяцев назад +1

    You keep mindblowing me with each video!

  • @addmix
    @addmix 5 месяцев назад +3

    This is very cool. I may need to implement this, or a similar algorithm into my wip terrain plugin.

  • @nikolin2162
    @nikolin2162 5 месяцев назад +3

    Dude is the main character from silicon valley

  • @sebbbi2
    @sebbbi2 5 месяцев назад

    I once wrote a super simple terrain delta compressor. 10x+ compression ratio. Decode in scanline order. Predictor uses previous scanline (y-1) pixel and previous (x-1) pixel and (x-1, y-1) pixels. These 3 points form a plane. Then we store difference of the new pixel to the plane. Which is close to zero. Then zstd/lz compress.
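
A small sketch of the scanline predictor described above (my code, not the commenter's): predict each sample from its west, north and north-west neighbours (the plane through them gives pred = west + north - northwest), store the residuals, then feed a general compressor; zlib stands in here for the zstd/lz step.

```python
import zlib
import numpy as np

def plane_predict_residuals(h: np.ndarray) -> np.ndarray:
    res = h.astype(np.int32).copy()
    res[1:, 1:] = h[1:, 1:] - (h[1:, :-1] + h[:-1, 1:] - h[:-1, :-1])  # interior: west + north - northwest
    res[0, 1:] = h[0, 1:] - h[0, :-1]     # first row: delta to the west neighbour
    res[1:, 0] = h[1:, 0] - h[:-1, 0]     # first column: delta to the north neighbour
    return res                            # res[0, 0] keeps the absolute value

ys, xs = np.mgrid[0:512, 0:512]
heights = (xs * 3 + ys * 2 + 1000).astype(np.int32)        # a simple sloped terrain
raw = heights.astype(np.int16).tobytes()
residuals = plane_predict_residuals(heights).astype(np.int16).tobytes()
print(len(zlib.compress(raw)), "vs", len(zlib.compress(residuals)))  # residuals compress far better
```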

  • @doltBmB
    @doltBmB 5 месяцев назад +1

    By adding a height offset to each block, you can make sure that only the difference between the highest and lowest point matters, not the absolute height, so you would only need more bytes for blocks with a large range inside the block.
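
A tiny sketch of the per-block offset idea above (illustrative names): store each block's minimum once, so only the local range decides how many bits each sample needs.

```python
import numpy as np

def block_offset_encode(block: np.ndarray):
    lo = block.min()
    span = block.max() - lo
    # choose the smallest unsigned width that covers the local range
    dtype = np.uint8 if span <= 0xFF else np.uint16 if span <= 0xFFFF else np.uint32
    return lo, (block - lo).astype(dtype)

block = np.array([[1000, 1003], [1001, 1002]], dtype=np.uint32)
offset, packed = block_offset_encode(block)
print(offset, packed.dtype)   # 1000 uint8 -> 1 byte per sample instead of 4
```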

  • @Protoxy
    @Protoxy 5 месяцев назад +2

    Thanks for explaining and sharing your work

  • @cpu_UP
    @cpu_UP 5 месяцев назад +1

    At 4 minutes the max for you will be 255/2 = 127.5. In any case, good video.
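
A worked example of the quantization arithmetic being discussed (illustrative Python): with a 0.5 m step, an unsigned byte covers 0..255 steps, i.e. heights from 0 to 255 * 0.5 = 127.5 m.

```python
STEP = 0.5  # meters per quantization step

def quantize(height_m: float) -> int:
    """Map a height in meters to one byte (0..255)."""
    return max(0, min(255, round(height_m / STEP)))

def dequantize(code: int) -> float:
    return code * STEP

print(dequantize(255))             # 127.5 -> the maximum representable height
print(dequantize(quantize(42.3)))  # 42.5  -> rounded to the nearest 0.5 m
```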

  • @bunni3140
    @bunni3140 5 месяцев назад +6

    the godot foundation should be supporting this

  • @USBEN.
    @USBEN. 5 месяцев назад +2

    Such a well made video and easy to understand.

  • @Splarkszter
    @Splarkszter 5 месяцев назад

    Long live open source, we need a culture that praises and donates.
    Education is the solution for humanity.

  • @jacobjacob5735
    @jacobjacob5735 5 месяцев назад +6

    What are the file size and quality of yours compared to a JPG-compressed version? Maybe it was not even necessary to create a custom format; the motivation to use another encoding is a bit missing, in my opinion. Of course you may want 100% accuracy in some cases, but would the difference really be that big?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      It is not good to compare this to JPEG compression, as JPEG is designed to compress images where each color channel has a range of 0-255; terrain pixels have a much larger range. But if I had to compare somehow: an image of the same dimensions with this compression takes more data than JPEG but less than PNG

    • @NXTangl
      @NXTangl 5 месяцев назад +3

      @@mohsenzare2511 You could use JPEG on a monochrome image. Thanks to how the compression works, it would compress the color information to basically nothing even using the stock algorithm. My hunch is that using DCT blocks as the leaves of your quadtree could pay some dividends...

  • @sean7221
    @sean7221 5 месяцев назад +1

    Amazing work! You're a credit to the community

  • @realhet
    @realhet 5 месяцев назад +2

    Before I watch it: it's got to be hierarchical delta coding. :D
    Edit: Yes. That 2nd order 2D poly regression at the end was a great idea! That's not just delta, that's delta*delta.

  • @porglezomp7235
    @porglezomp7235 5 месяцев назад +1

    I wonder if you could do better by doing layered coding so that interior nodes also store those linear regressions and then layer those together? e.g. the root node stores a plane, the children of that store planes, and all those get summed up, allowing more aggressive quantization earlier? That sloped terrain you demoed early on could be stored in just 2 layers with flat leaves below that, because the linear interpolation would be correct already. I guess this would be basically layering delta encoding on top of your delta encoding? (This was inspired by thinking about how fractal noise is higher detail layered on top of lower frequency shapes.)

  • @kristoferkrus
    @kristoferkrus 5 месяцев назад +4

    3:43 I think you mean 127.5 here and not 254, right? Otherwise, how did you arrive at 254?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      Each unit of the byte is 0.5 meters in that case, so 254*0.5 = 127

    • @kristoferkrus
      @kristoferkrus 5 месяцев назад +3

      @@mohsenzare2511 Why 254*0.5 and not 255*0.5? 255 is the maximum value when the difference between the values is 1, so shouldn't half that (i.e. 127.5) be the maximum value when the difference between the values is 0.5 since everything is scaled by 0.5?

  • @_caracalla_
    @_caracalla_ 5 месяцев назад +2

    satisfyingly technical video. you have a subscriber.

  • @divyamkhatri8501
    @divyamkhatri8501 5 месяцев назад +1

    I didnt understand many things you said but I really appreciate your work. respect++

  • @stevethepocket
    @stevethepocket 5 месяцев назад

    As cool as this is, I want to mainly thank you for making me aware of QOI and, by extension, QOA. They both seem like they could be very useful for game devs who want to save disk space and improve load times without sacrificing all the clock cycles needed to unpack it all. I know games are expected to use DXT for instant hardware decoding, but maybe at load time the CPU could decompress the QOIs, send them to the GPU to recompress into DXT, and it would still be fast enough to get a net benefit? Gonna have to look into this.

  • @iritheshadowcreature8537
    @iritheshadowcreature8537 5 месяцев назад

    This is really cool. I once had an idea for a similar compression algorithm, though in my case I was thinking about how images can be losslessly compressed, which is a bit different from heightmaps.
    Your video describes doing something similar in a lossy manner, though. My idea involved a similar encoding, essentially height + gradients, plus a residual pixel map on top of it representing the difference between the approximated pixel and the real value, stored with fewer bits than a typical image would be. This 'error map' could then recursively be compressed with the same algorithm.
    If I can offer an idea: instead of a quadtree you might be able to get higher compression in some cases by using different structures like triangles or angled slices, especially since gradients and sharp height changes are more likely to happen along arbitrary lines than along neatly defined quads. Imagine drawing a line between 2 pixels in a quad and storing these slices in order of the first pixel they contain. I don't know how difficult it would be to implement something like this in your code; I imagine it might require more complex testing to estimate the correct angle and offset, and the actual compression gain or loss might depend on the situation.

  • @NitroxNova
    @NitroxNova 5 месяцев назад +5

    very impressive, planning to use your plugin for my rpg (once the character creator is done lol)

  • @blacklistnr1
    @blacklistnr1 5 месяцев назад

    Interesting approach! I would have gone for a Hilbert curve to linearize, then run-length encoding for flats + delta encoding for slopes, and called it a day :))

  • @SadShiry
    @SadShiry 5 месяцев назад +2

    Just commenting for the algorithm to blow this video up because you deserve ❤

  • @FreedomAirguns
    @FreedomAirguns 5 месяцев назад

    Have you considered using RGBA? You can get creative with *color coding* too. 🎉
    There are countless scales you can map a range of numbers onto. For example, you can map an exponential progression, or any other series, say 2^n, onto a linear 1-to-255 range. And with color coding you already have 3+1 components with which you can encode anything you want. You could use RGB for XYZ and A (alpha) as a multiplier or as a trigger for events, like a vector to simulate forces at that precise spot, or literally whatever you wish; or even the opposite, with A as the B&W heightmap (just as you did) and RGB for whatever else, even a vector field for physics (as I mentioned earlier). *Anything is possible*. After all, it's *all just numbers*.❤

  • @flyguille
    @flyguille 5 месяцев назад

    Oh, you reinvented the wheel; more exactly, COMANCHE, the MS-DOS game! Remember that helicopter video game? That damn game showed me that I had a bit-cell error in the cache SRAM of my 486 CPU! Each time the map reached the hardware bit error in the cache, the height value went crazy, spiking the map.

  • @TheVideoGuardian
    @TheVideoGuardian 13 дней назад

    Isn't the final mesh going to be decompressed and triangulated anyway? Or is the purpose here just to decrease file size, in which case why not just use LZ or even just ZLib?

  • @TheMcSebi
    @TheMcSebi 5 месяцев назад +1

    Another approach: put the entire heightmap through jpeg compression and employ the artifacts into your landscape xd

  • @user-xe8oi5oq6c
    @user-xe8oi5oq6c 5 месяцев назад

    You're a genius! Look at blosc and mafisc compression. Maybe they could be a good backend for your compression method. Blosc can be even faster than memcpy.

  • @P-G-77
    @P-G-77 5 месяцев назад +1

    Interesting... I started a project like that, using my AI, and the initial results were very promising, but in the end, for the same reasons of "time", I left it... this project, though, has had a particularly good run.

  • @lostvisitor
    @lostvisitor 5 месяцев назад +1

    nicely explained.

  • @nonchip
    @nonchip 5 месяцев назад +1

    did you just make a video about "and that's how i somehow didn't learn that a greyscale image is a greyscale image" tho? would help if you could be a bit clearer on that one because all those formats you ruled out "for images" are actually better than what you made at compressing heightmaps, and the whole folder full of text files full of floats is a massive pain. so yeah not sure what you *think* you're storing "as a 4MB floating point", but that's not what a float is.

  • @maymayman0
    @maymayman0 5 месяцев назад +1

    Mohsen you are so epic !!

  • @MrEdrum
    @MrEdrum 5 месяцев назад

    Since neighboring pixels will only have very small differences in height, I think you should encode the difference to the neighboring pixel instead of the normalized height itself. You should then be able either to use one of the smaller data formats, or to run a lossless compression algorithm on the encoded file afterwards (which works best when there are many identical bytes), and since small differences are more common, it should work quite well. (See the sketch after this comment.)
    Another idea:
    If you use JPEG-style compression first (divide your heightmap into blocks, then use an FFT to match each block exactly, then discard a good portion of the high-frequency parts, which are close to 0 anyway),
    then you can subtract the JPEG-compressed version from the original and store the differences with a very shallow bit depth, as all of the numbers will be around 0.
    This difference layer is needed mostly because, at the edges of the JPEG blocks, you'll get the typical JPEG artifacts, depending on how much of the high-frequency part you discard. If you only do a little JPEG compression, you might not need the difference layer. And I'm not sure whether it's better to use less JPEG compression and no difference layer, or more JPEG compression and a difference layer.
    If you want even more compression after that, try compressing the difference layer: either using lossless PNG compression,
    or by dividing the difference layer (let's say it uses a bit depth of 4 bits) into four 1-bit-depth files and then using run-length encoding on each of them. Since neighboring pixels will probably be rather similar, this should work well.
    You can then play around with these parameters:
    - JPEG compression ratio
    - bit depth of the difference layer
    - PNG / run-length / quadtree encoding (or something else) for the difference layer
    If you use an FFT on the whole heightmap (instead of chunks), you could get away with discarding more of the high-frequency information, as JPEG artifacts and rough edges are way more noticeable than minimal but smooth height differences
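
A small sketch of the first idea above (my code, not the commenter's): store the difference to the left-hand neighbour instead of the height itself, then run-length encode the result, which works because neighbouring samples differ very little.

```python
import numpy as np

def delta_encode_rows(h: np.ndarray) -> np.ndarray:
    d = h.astype(np.int32).copy()
    d[:, 1:] = h[:, 1:] - h[:, :-1]   # each pixel minus its left neighbour
    return d                          # column 0 keeps the absolute height

def run_length_encode(values) -> list:
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs                       # list of [value, count] pairs

heights = np.tile(np.arange(0, 256, 2, dtype=np.int32), (4, 1))  # gentle 2 m slope per sample
deltas = delta_encode_rows(heights)
print(run_length_encode(deltas.ravel().tolist())[:3])  # -> [[0, 1], [2, 127], [0, 1]]
```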

  • @iGaktan
    @iGaktan 5 месяцев назад

    If I understand correctly, the data is decompressed during runtime, and stored uncompressed on the GPU right?
    It's always a pretty bad idea in games to store textures as png files anyway, since you need a (quite heavy) decompression step, to convert it to the GPU format.
    Usually games use BCn formats, which compress the data stored to 1 byte per pixel at most (0.5 bytes per pixels for a couple formats).
    These compressions are lossy, but depending on the format, it's usually not perceivable.
    GPUs have dedicated hardware to decompress them on the fly, so no decompression needed when moving the texture from your hard drive to the GPU memory. You might think the decompression might add extra GPU time, but it's actually more or less the same, it's pretty fast.
    This should save quite a bit of GPU memory compared to a raw uncompressed texture.
    Saving disk space is nice and all, it can improve loading times and streaming a bit, but then what is the point if the uncompressed data hogs your GPU memory?

  • @TheOnlyPixelPuncher
    @TheOnlyPixelPuncher 5 месяцев назад

    If you go with an FFT, why not store it as a JPEG? That's precisely what JPEGs are optimized for: removing less noticeable higher frequencies and thus saving space. What's the reason for developing an entire algorithm from scratch?

  • @bipl8989
    @bipl8989 5 месяцев назад +1

    Great work. The typical modern programmer does not worry about memory or file size, but it makes no sense to store mm when you're only ever going to zoom down to meters, or even km.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      Thanks man

    • @bipl8989
      @bipl8989 5 месяцев назад

      @@mohsenzare2511 Welcome. It is important to realize that the solution should always remain proportional to the problem at hand. This is especially true for massive point-cloud data files that store raw X, Y, Z data in 500MB files. It is usually far more convenient to transform it to Z values placed on a uniform grid, as NASA SRTM data stored as HGT files does. That is much more efficient for all post-processing use, but it is still inefficient when the terrain is flat. Your solution of storing data only where it is actually of use in defining detail is exactly what the terrain modeling field needs. It can also be adapted for any shape-modeling task. Specifying the level of detail is very efficient too. When you don't know what level of detail a modeler wants to work with, the best solution is to always leave the level of detail to be specified by the user: read it once, transform it once and eliminate the data-processing overhead for all subsequent work. Having started computing by building my own Z80-based system in 1981, which had only 32k and a 180k 5.25" floppy disk, at an equivalent cost today of $10,000, I fully appreciate any attempt to solve a problem by reducing the size of the problem rather than always increasing the size of the hardware. I am looking forward to trying this out. Again... great stuff. Thank you.

    • @bakedbeings
      @bakedbeings 5 месяцев назад +1

      Game programmers, especially on console and mobile games, have to think about it constantly, as do the artists. For a long time your iOS title had to fit in 50mb if you wanted people to download it away from home; this was usually critical for success, because they're.. so mobile.

    • @bipl8989
      @bipl8989 5 месяцев назад

      @@bakedbeings I didn't think about console programing work. Great example. Thanks.

  • @darkboxstudios
    @darkboxstudios 5 месяцев назад +2

    Asking from ignorance: could you add an index table for common values? Would that reduce the size a little bit? Maybe it's not worth it...

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      I tried other compression methods; it's not worth it. The method I implemented has a very fast decompression speed; adding other stuff would not give much more compression (maybe a little) and it could also increase load time

  • @Zkryhn
    @Zkryhn 5 месяцев назад +1

    Would it be theoretically possible to assign multiple single textures to the terrain in a "list", so that it creates the array texture automatically from those instead of creating one manually? It could also be optional, with one being able to decide between the manual texture arrays and the automatic one, I guess...
    But your work on it is incredible still

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      What you are saying is possible, yeah! I have a video about how to use color paint; I suggest watching that

  • @realdragon
    @realdragon 5 месяцев назад

    I wonder if, at the end, it would be better to work with sin and cos instead of polynomials. Since terrain has multiple ups and downs, a sine can add a period to it instead of needing multiple polynomials to describe the same thing.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      I don't think sin and cos will work here, but it can be tested; terrain usually has slope and it is not repetitive

  • @chrisb3358
    @chrisb3358 5 месяцев назад

    Cool stuff!
    What is the decompression cost? Reading/loading the heightmap during gameplay should be very fast.
    The limits may be different for an open-world game (data streaming) and a level-based game (loading once during a loading screen).
    Maybe you can talk about that in a future video.

  • @therecon448
    @therecon448 5 месяцев назад +1

    I am not an expert on the subject, but I wonder how it would be if we stored the heightmap in pixels (color values) and compressed it as WebP.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      This is designed for heightmaps and will perform better

  • @arez4906
    @arez4906 5 месяцев назад +4

    You said in the Quadtree explanation that you are finding the minimum and maximum height for each part. Do you do this by iterating every single pixel in that region? If so, this might be optimized by using compute shaders to process the whole heightmap all at once and finding the max and min height in the shader.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +4

      Yes, we do that by iterating over each pixel, but that is not a problem, as we do this only for compression, not for decompression; in other words it happens only at development time, not when the user plays your game. (A sketch of this min/max subdivision test follows this thread.)

    • @arez4906
      @arez4906 5 месяцев назад +2

      @@mohsenzare2511 Ah yeah. So its not THAT important gotcha 👍 Still would be a way to improve this :)

    • @ristopaasivirta9770
      @ristopaasivirta9770 5 месяцев назад +2

      @@arez4906 Anything that can be processed during a lunch break is 'fast enough' :D

    • @darrennew8211
      @darrennew8211 5 месяцев назад

      @@arez4906 Plus, he's scanning a million numbers, once. OK, maybe a few thousand times, as he breaks down the terrain. The processor is going at billions of ops per second. We've moved beyond "we have to worry about linear operation speed in human timescales."
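
A small sketch of the min/max subdivision test discussed in this thread (my code, not MTerrain's; the tolerance, noise level and sizes are made up for illustration): split a region into four children while its min/max spread exceeds the accuracy tolerance.

```python
import numpy as np

def build_quadtree(h: np.ndarray, x: int, y: int, size: int, tol: float, leaves: list):
    region = h[y:y + size, x:x + size]
    if size == 1 or float(region.max() - region.min()) <= tol:
        leaves.append((x, y, size, float(region.min())))   # flat enough: keep as one leaf
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            build_quadtree(h, x + dx, y + dy, half, tol, leaves)

rng = np.random.default_rng(3)
heights = rng.normal(0, 0.02, (64, 64))                # mostly flat terrain
heights[:16, :16] += np.linspace(0, 40, 16)[None, :]   # one hilly corner
leaves = []
build_quadtree(heights, 0, 0, 64, tol=0.3, leaves=leaves)
print(len(leaves), "leaves; flat areas stay as large blocks")
```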

  • @FromLake
    @FromLake 5 месяцев назад +1

    Interesting! Thanks

  • @KaletheQuick
    @KaletheQuick 5 месяцев назад +1

    You compared it to the raw size of floats, 4 MB. But how does it compare to just using the normal image compression formats?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      Already compared; usually people use 16-bit PNG or OpenEXR for storing heightmaps, and this performs much better than those

  • @xotmatrix
    @xotmatrix 5 месяцев назад +1

    Very interesting.

  • @MhdAliAlashkar
    @MhdAliAlashkar 5 месяцев назад

    Well done, may God bless you.
    An algorithm worth learning.

  • @peterixxx
    @peterixxx 5 месяцев назад

    Have you thought about saving it as a JPEG? It already does a lot of this. Maybe the color hue information could be removed from the JPEG format and it would be a decent way to compress heightmaps.

  • @doyouwantsli9680
    @doyouwantsli9680 5 месяцев назад +1

    How does it compare to gz or some such compression?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      I did not compare it to gz, but I compared it to PNG; watch the video linked in the pinned comment

  • @oraz.
    @oraz. 5 месяцев назад +1

    This is awesome. My game idea uses a huge terrain, so I can use this. So it subtracts from a plane, stores the plane coefficients and flat-encodes the subtracted numbers; I guess that is lossy because of the quantization?

  • @TheDoomerBlox
    @TheDoomerBlox 5 месяцев назад

    All I'm stuck wondering is whether or not this is trying to solve a problem that already has very strong solutions.
    For example, there are attempts at recreating wobbly lines (terrain?) with multi-dimensional correlation; these things are called audio compression algorithms, which have several channels of wobbly things going on with exploitable correlation.
    It would be funny to see how well FLAC, WavPack (lossless) or even Opus (lossy) handle terrain data packed into a multi-channel ""audio file"".

  • @matsv201
    @matsv201 5 месяцев назад +1

    How much better (or worse) is this compared to, say, just using PNG?
    (I use 16-bit PNG as a heightmap in a research project and it gives me a pretty decent size.)

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      In this video I compare that to PNG: ruclips.net/video/ODYxPh5Cca4/видео.html

    • @matsv201
      @matsv201 5 месяцев назад

      @@mohsenzare2511 tanks.. nice

  • @yekaneast
    @yekaneast 5 месяцев назад

    Did you use Middle Out?

  • @KoryKinnett
    @KoryKinnett 5 месяцев назад

    Now imagine using this to speed up Minecraft's Terrain Gen.

  • @mohammadhh5113
    @mohammadhh5113 5 месяцев назад +1

    GREAT job!
    What's the impact on performance?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      The heightmap is decompressed at load time, and then at run time we use the uncompressed heightmap; decompressing is quite fast, so there is no performance cost

  • @TheMcSebi
    @TheMcSebi 5 месяцев назад +1

    nice video!

  • @chimeforest
    @chimeforest 5 месяцев назад +3

    This is awesome =D
    Out of curiosity, if you took your 10,000km map and compressed it using this algorithm how much would it shrink down from 58GB?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +2

      Thanks man, I will try that, did not have time to convert that to this format yet, But I will do that!

    • @chimeforest
      @chimeforest 5 месяцев назад

      @@mohsenzare2511 I look forward to the results ^^

  • @myne00
    @myne00 5 месяцев назад +1

    Seems like you're basically doing all the things jpeg/mpeg does.
    Which makes perfect sense since a heightmap and a bitmap are basically the same thing.
    I'd be curious to see how jpeg/mpeg go on the same data.

    • @pvanukoff
      @pvanukoff 5 месяцев назад

      That's what I was thinking. Just save it as a grayscale jpeg file.

  • @rbarghouti
    @rbarghouti 4 месяца назад

    I'm a little confused why an image compression algorithm wouldn't work fine for a heightmap.

  • @AntonioNoack
    @AntonioNoack 5 месяцев назад +1

    Hey, these videos are always motivating me to work on the topic, to optimize them as far as I'd like.
    Could you publish your dataset with the corresponding rules, where you compressed your 256MB down to ~14MB? Kind of like a small competition without a prize.
    The most elegant/fastest/best result could be included in your plugin.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      I am sorry, I don't understand your question; can you explain it a bit more?

    • @AntonioNoack
      @AntonioNoack 5 месяцев назад +1

      @@mohsenzare2511 Your thumbnail says 256MB -> 13.7MB. Please publish the raw 256MB file, if possible, and what settings/restrictions you used on accuracy.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      In my understanding, even an accuracy of 0.3 meters still keeps good quality; it just adds some small noise to the terrain, which can be considered natural. For this terrain I used 0.3 meter accuracy. I will publish everything; this still needs some work to be finished!

  • @guigazalu
    @guigazalu 5 месяцев назад +1

    I loved watching it! Even more, the "we must read the right byte" part. I made an image format (implementation in Rust; @ crates rs as pixlzr) and suffered a lot from reading the wrong part. But I hadn't gone as far as making a quadtree, just some blocks.
    Also, as yours seems to be a really *compressive* algorithm, any plans on also building a really *simple* one, like QOI but for generalized heightmaps?
    Also also, could it be used for faster communication between the CPU and the GPU?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      Oh yeah, I should have said "bit, not byte". About turning this into a general format: yes, I would love to do that, but there is still a lot of room for improvement, and I believe in the future I will change some things which will break compatibility with the current version, so let's wait until we reach a more stable version.
      Decompression happens on the CPU side, and I still don't know how to decompress it on the GPU side, so we will stick to this for now

    • @guigazalu
      @guigazalu 5 месяцев назад

      @mohsenzare2511
      It's always fun launching a 0.1 format specification, and then, 0.2 totally breaking it because it's better.
      Well, uncompressing on GPU could happen by sending major tree blocks, and then exploding into texture-threads (by pixel), right? I'm no GPU dev, so I don't know.
      Bonus: is it already multi-threaded? With what you showed, there's no need for that. But if you wanted, having major tree blocks encode a little more about themselves, could make it easier, right?

  • @mightyhelper8336
    @mightyhelper8336 5 месяцев назад +1

    Wait. And you didn't compare this to G-Zip? Or other generic algorithms?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      I compared it to PNG; watch the video in the pinned comment

  • @bunni3140
    @bunni3140 5 месяцев назад +3

    6 ads in 12 minutes is making this really hard to follow. the breaks keep interrupting your explanation. do you post to other platforms?

    • @AFE-GmdG
      @AFE-GmdG 5 месяцев назад +1

      I had 2 ads before and nothing in between. I guess it depends on the time of day when the video is played?

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      Sorry about that, I did not place those ads; RUclips did that automatically. For future videos I will try to change the video settings

  • @TeamUnpro
    @TeamUnpro 5 месяцев назад

    Hey :D very cool idea!

  • @IsYitzach
    @IsYitzach 5 месяцев назад +1

    There is one way to get ridiculous compression ratios: procedurally generated terrain. If you generated your terrain from Perlin noise, then you would only need the seed value and the Perlin noise algorithm and the height map will fall out for infinite terrain getting an infinite ratio.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      Yeah, but how many AAA games are made with Perlin noise!!

    • @darrennew8211
      @darrennew8211 5 месяцев назад

      @@mohsenzare2511 Minecraft. Starfield. :-)

    • @stark_energy
      @stark_energy 5 месяцев назад

      The problem with the Perlin approach is that it is no longer an apples-to-apples comparison of compression, because of one huge drawback: you cannot edit any of the terrain; it is there, accept it or not, because if you change the seed, you change everything. That is why games that need a custom map / adjustments will never use it.

  • @Andserk
    @Andserk 5 месяцев назад +1

    I have a question from a very inexperienced perspective: is using LODs better than loading and unloading chunks, in the context of an already generated world populated with assets? Your videos are very informative, keep up the good work!

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      Maybe I don't get your question fully, but LOD applies to chunks that are already loaded; you cannot compare that to loading and unloading. If I misunderstood your question, let me know

    • @Andserk
      @Andserk 5 месяцев назад +1

      @@mohsenzare2511 It's ok, I wasn't very clear lol. I was referring to using a chunk system to unload regions of the map completely instead of keeping them loaded but with a very low level of detail. Like saving a chunk to a file and removing it from the world until I visit it later and load it back up. Maybe I'm confusing the idea; my objective would be having like a 50 km² map but using a top-down view, and I wanted to see how I would optimize performance in that scenario.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      @@Andserk What you are talking about is supported by MTerrain: it will load and unload chunks automatically, and when a chunk is loaded it will use a kind of LOD system to reduce the vertex count!!
      I already made a demo which is 100km x 100km, I don't know if you have seen that or not!!
      By the way, if your game views the terrain from far away, I suggest using a bigger horizontal scale (heightmap pixels per meter)

    • @Andserk
      @Andserk 5 месяцев назад +1

      @@mohsenzare2511 I see! then it's perfect! 😁 sorry, I'm still learning the ropes

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      @@Andserk No problem man

  • @jandrabik5519
    @jandrabik5519 5 месяцев назад +2

    @mohsenzare2511 5:22 FYI there's no such thing as 'most-optimized' since the word 'optimized' means the best by definition. So you tried to optimize quadtree, period :)

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      What I meant was: given the current encoding, where it should be subdivided and where it should not ("most optimized" in that sense), not optimal in general

  • @yunuszenichowski
    @yunuszenichowski 5 месяцев назад

    Why can't you just png compress the b/w height map? Perhaps jpg would be great as well.

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      You can put a heightmap inside a PNG, but PNG is not designed to compress a heightmap like this is

  • @minamozaffari53
    @minamozaffari53 5 месяцев назад +4

    It’s genus, seriously you are awesome 🎉 So proud of you 🥂🫶🏻

  • @piotrprs572
    @piotrprs572 5 месяцев назад

    A long time ago, S3, a company that made graphics cards, also did some texture compression. This isn't something new... The problem is manipulating such compressed data without decompressing it.

  • @BboyKeny
    @BboyKeny 5 месяцев назад +1

    Good man

  • @NotSeggsySage
    @NotSeggsySage 5 месяцев назад +1

    For the algorithm.

  • @kreuner11
    @kreuner11 5 месяцев назад

    I'm pretty sure any compression algorithms would work. PNG is just ZIP/DEFLATE

  • @treelibrarian7618
    @treelibrarian7618 5 месяцев назад +1

    just a thought: why not use jpeg compression?
    reduce (float?) height data to 8-bit integer
    use a monochrome jpeg algorithm to compress
    check error when decompressed
    expand the error to fill 8 bits again
    use the monochrome jpeg algorithm to compress the error data
    repeat if more accuracy is still needed
    store as a series of images with scale factors used to recombine them to get original float height data

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      JPEG really won't work, not only for terrain but also for anything else in game development. Even for game textures I recommend not using JPEG, as it deforms the image really badly; PNG is good for that!!! Besides that, it won't work for terrain, as the range over which terrain pixels change can be very large.

    • @stark_energy
      @stark_energy 5 месяцев назад

      When decoded from JPEG, you will have many minor artifacts and misplacements. Quite inaccurate. And the more you save and decode with JPG, the farther it gets from the original data, which is not a problem for a perceptual image (that is what JPG is designed for) but quite problematic for a heightmap.

  • @MinimumADHD
    @MinimumADHD 5 месяцев назад +1

    is that Manjaro OS?

  • @AmaroqStarwind
    @AmaroqStarwind 5 месяцев назад +3

    Can you make this compatible with "Array Set Addressing"? (Useful for stuff such as Hexagonal pixels, 4D worlds, etcetera.)
    Also, you should check out the recently open-sourced LZHAM and LZFSE compression algorithms!

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад +1

      Thanks for your suggestion, I will look at this, and see if they are useful, I knew about LZHAM and LZFSE but the "Array Set Addressing" is something new for me, I will read more about that. Thanks

  • @CielMC
    @CielMC 5 месяцев назад

    I don't see a reason why image compression algorithms can't be used? Since a heightmap is really just a one-color image, no?

  • @gsestream
    @gsestream 5 месяцев назад +1

    jpeg and/or png?

    • @puppeli
      @puppeli 5 месяцев назад +1

      He said he didn't use them because they are limited to 8 bits per channel. I'm an idiot, so I would probably have tried to multiply the red, green and blue channels... so in theory I could have around 16 million levels of height. If that didn't work for some reason, I would have tried to save the heightmap as AVIF. It's a relatively new image format that supports 12 bits per channel (and if I couldn't get that to work, I would have given up).

    • @mohsenzare2511
      @mohsenzare2511  5 месяцев назад

      JPEG is not good for heightmaps; 16-bit PNG is good, but consider that PNG is designed to compress color images, while this is specialized for heightmaps and performs much better

    • @gsestream
      @gsestream 5 месяцев назад

      @@mohsenzare2511 64 bits or 128 bits per pixel jpg/png?

  • @DrDarak
    @DrDarak 5 месяцев назад

    Looks like you are reimplementing image compression - fun but maybe just find a good compressor.

  • @aqua-bery
    @aqua-bery 5 месяцев назад +1

    KDE USER LETS GOOO