ZGC: The Future of Low-Latency Garbage Collection Is Here

  • Published: 13 Jan 2025

Comments • 18

  • @surenderthakran6622 • 2 months ago

    Finally someone who discusses the inner workings of the ZGC

  • @toinouH • 1 year ago +1

    Thank you for this video and all the amazing work done on ZGC :)

  • @TheDoppelganger29 • 2 years ago +7

    Even better with playback speed 1.5x ;)

  • @sblantipodi • 1 year ago

    Amazing job guys

  • @JorgetePanete • 1 year ago +4

    👏👏 how does throughput compare now with other GCs?

    • @dumdumdumdum8804 • 1 year ago

      Good question, but if they are claiming a 4x increase, that would make it the best GC in terms of throughput. Or maybe other GCs will pick up some of these improvements.

  • @keyboard_toucher • 10 months ago

    After you jump (with jnz, etc.) to the slow path, how does the slow path know where to return to?
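
    One possible way to picture it (a minimal sketch, not the actual HotSpot code): the jnz typically targets a slow-path stub that the JIT generated together with the method, so the stub can jump back to a continuation point that was fixed at code-generation time, and any runtime call made from the stub returns there via an ordinary call/return. In the Java-level sketch below, isBadColor and loadSlowPath are hypothetical placeholders standing in for the real color check and runtime call.

    // Conceptual shape of a ZGC-style load barrier (not real JVM code).
    class LoadBarrierSketch {
        static boolean isBadColor(Object ref) { return false; }      // placeholder for the color check
        static Object loadSlowPath(Object[] holder, int index) {     // placeholder for the runtime slow path
            return holder[index];
        }
        static Object barrieredLoad(Object[] holder, int index) {
            Object ref = holder[index];              // plain load
            if (isBadColor(ref)) {                   // compiles down to a test + jnz
                ref = loadSlowPath(holder, index);   // slow path; execution resumes here
            }
            return ref;                              // fast path falls straight through
        }
    }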

  • @sblantipodi • 1 year ago

    Is it normal that the max heap size is doubled when using Generational ZGC? If I set -Xmx512m, my max heap size is 512 MB with ZGC, but it becomes 1024 MB with Generational ZGC.
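
    A minimal cross-check, as a sketch: ask the running JVM itself what it considers the max heap via Runtime.maxMemory(), independent of what an external tool reports (externally reported figures can include virtual-memory effects). The class name is illustrative; -XX:+ZGenerational is the JDK 21 spelling of the Generational ZGC switch.

    // Run with e.g.: java -Xmx512m -XX:+UseZGC -XX:+ZGenerational MaxHeapCheck
    public class MaxHeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap as seen by the JVM: %d MB%n", maxBytes / (1024 * 1024));
        }
    }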

  • @StefanReich • 1 year ago +5

    Amazing work. I know GC development is fiendishly complex

  • @slr150 • 1 year ago +1

    Just curious, is it possible to use reference counting for those objects whose lifecycle can be determined at compile time? I think a majority of objects could be cleaned up this way, including iterators, Streams, etc.

    • @JorgetePanete • 1 year ago

      I think the idea is reusing old objects instead of deallocating and allocating

    • @StefanReich • 1 year ago

      I don't think reference counting is a good fit for the existing infrastructure

    • @cptfwiffo • 1 year ago

      Why would you want the overhead? If you know the lifecycle at compile time, you know when you can free the memory anyway, right? Then you don't need the memory overhead to store the reference counts, nor the CPU time to update them. Reference counting isn't a silver bullet...

    • @slr150 • 1 year ago

      @cptfwiffo I agree with what you're saying, but maybe there was some confusion about what "reference counting" means here. More specifically, I meant compile-time reference counting (not runtime reference counting).
      For example:

      void foo() {
          var x = new StringBuilder("Hello");
          x.append(", world!");
          System.out.println(x.toString());
          // Compiler injects code here to free x
      }

      Here the compiler needs to count the references in order to ensure that references to the new StringBuilder are limited to local variables before injecting code to free it (it could of course do more complicated analysis).
      My understanding is that this is not currently done by javac.

    • @quananhmai5701 • 1 year ago +1

      @slr150 In your example, x never escapes, so the compiler can explode it into its components and no object is allocated at all.
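
      A hedged way to observe the effect described in this reply: measure per-thread allocation around a hot loop calling a foo()-like method. If the JIT scalar-replaces the non-escaping StringBuilder after warm-up, the measured allocation drops to roughly zero. The class name and loop counts are illustrative; getThreadAllocatedBytes comes from the com.sun.management extension of ThreadMXBean available on HotSpot-based JDKs.

      import com.sun.management.ThreadMXBean;
      import java.lang.management.ManagementFactory;

      public class EscapeAnalysisDemo {
          // Same shape as foo() above, but returning a value so the loops
          // below are not dead code; x still never escapes the method.
          static int foo() {
              var x = new StringBuilder("Hello");
              x.append(", world!");
              return x.length();
          }

          public static void main(String[] args) {
              ThreadMXBean bean = (ThreadMXBean) ManagementFactory.getThreadMXBean();
              long tid = Thread.currentThread().getId();
              int sink = 0;

              for (int i = 0; i < 1_000_000; i++) sink += foo();  // warm up the JIT

              long before = bean.getThreadAllocatedBytes(tid);
              for (int i = 0; i < 1_000_000; i++) sink += foo();
              long after = bean.getThreadAllocatedBytes(tid);

              System.out.println("Allocated bytes in measured loop: "
                      + (after - before) + " (sink=" + sink + ")");
          }
      }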

  • @dumdumdumdum8804 • 1 year ago

    Awesome stuff. When will it be available as part of the JDK?

  • @berndeckenfels • 1 year ago

    There is a chance to rename it to GZGC ;)