Garbage Collection (Mark & Sweep) - Computerphile

  • Published: 30 Oct 2024

Comments • 457

  • @exaaltare1170
    @exaaltare1170 Год назад +386

    This gent has the ability to explain complex topics in an extremely simplified way, which is phenomenal. Please 🙏 do more videos with him, diving into all possible topics. Respect.

    • @vfryhn2
      @vfryhn2 Год назад +2

      Absolutely I loved every part of this man explaining such a complex topic

    • @rudiklein
      @rudiklein Год назад +1

      Even I understood it, and that's telling.

  • @njdarda
    @njdarda Год назад +59

    i love the clarity of Laurence's examples and explanations

  • @pfefferle74
    @pfefferle74 Год назад +564

    As a C# programmer you often don't spend much time thinking about garbage collection because it just works silently in the background - until you start interfacing with native unmanaged code, and then you go through a hard school of troubleshooting, having to learn how the runtime actually manages your objects in memory and why it constantly keeps moving them around.

    • @syjwg
      @syjwg Год назад +6

      Finding thread-local storage use in managed code was a pain, because the GC runs in a separate thread from the allocating one.

    • @MZZenyl
      @MZZenyl Год назад +28

      True, it is surprising how long you can go without ever having to learn how the GC works, the differences between reference types and value types, as well as the entire topic of unmanaged memory (Marshal class, unsafe, pointers, stackalloc, Span/Memory, P/Invoke, etc.).
      In my experience, most C#/.NET development ends up being for the sake of a web service, in which case efficient memory management really doesn't matter, as any speed improvements you can make are outdone a hundred times over due to response times over the web. But when you do have a project where memory usage improvements do make a difference, boy can it make a difference. Even something as simple as using StringBuilder instead of string, and using it properly, can yield massive improvements.

    • @bbq1423
      @bbq1423 Год назад +2

      Sounds like you're referring to List etc. Tip: instead of using pointers to elements, use an index; even if the pointers change, arr[42] will still be arr[42] (unless we remove/insert elements in the middle).
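
      A rough C++ sketch of that index-over-pointer point (std::vector standing in for List<T>; the names here are illustrative, not from the video):

          #include <cstddef>
          #include <cstdio>
          #include <vector>

          int main() {
              std::vector<int> items = {10, 20, 30};
              int* p = &items[1];       // pointer into the vector's buffer
              std::size_t idx = 1;      // index into the container

              // Growing the vector may reallocate its buffer, invalidating p...
              items.resize(10000);

              // ...but items[idx] still names the same logical element.
              std::printf("by index: %d\n", items[idx]);  // prints 20
              // std::printf("%d\n", *p);  // would be undefined behaviour now
              return 0;
          }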

    • @toby9999
      @toby9999 Год назад +27

      As a C and C++ programmer, memory management is always something to be considered from the get go. It's just an integral part of the development process. Never seen it as a problem.

    • @rreece90
      @rreece90 Год назад +10

      @@toby9999 With modern C++, memory management has become very intuitive. With full support for the RAII technique ('move' operators etc.), there is still no garbage collector (so no performance burden) and yet it feels like it's automatic, because there is almost never a need to manually free memory anymore. It just gets freed when a variable using it falls out of scope.
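
      A minimal sketch of that scope-based freeing, using std::unique_ptr (an illustrative example, not code from the video):

          #include <cstdio>
          #include <memory>

          struct Basket {
              int items[32];
          };

          void process_order() {
              auto basket = std::make_unique<Basket>();  // heap allocation, owned here
              basket->items[0] = 42;
              std::printf("first item: %d\n", basket->items[0]);
          }  // basket falls out of scope here; its destructor deletes the Basket

          int main() {
              process_order();  // no explicit free/delete anywhere
              return 0;
          }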

  • @sanderbos4243
    @sanderbos4243 Год назад +13

    Laurence is probably my favorite person on the channel, he's able to simplify and explain things so incredibly well with a lot of humor

  • @marksilverman
    @marksilverman Год назад +137

    Most modern computers maintain a virtual address space for each process. That means that pointers aren't really a direct reference to a "physical part of the chip". Inside the CPU, the address translation hardware converts the virtual address to a physical one. Pages of memory are swapped in and out as needed, so a process can use more memory than is physically present on the machine. That makes programming vastly easier in some ways and vastly more difficult when things go wrong.

    • @annyone3293
      @annyone3293 Год назад +38

      Laurence mentioned that they were simplifying that part. The video is about software after all.

    • @yonatanbeer3475
      @yonatanbeer3475 Год назад +13

      Yes. C is often called "close to the metal" when this is not really true.

    • @connorclark8945
      @connorclark8945 Год назад +19

      @@annyone3293 Fair, but he did also say that a double-free would cause other "good citizen" programs to malfunction, which just isn't very applicable to 99%+ of programs.

    • @gljames24
      @gljames24 Год назад +4

      @@yonatanbeer3475 It is if you're compiling for the processor directly instead of for an operating system. OSes and bare-metal microprocessor code are often written in C/C++ and Rust.

    • @phizc
      @phizc Год назад

      @@connorclark8945 and I also thought that read after free would cause a segmentation fault.. At least I think that would happen if the program tries to read memory that isn't allocated any more, but since malloc uses OS allocation methods under the hood, maybe the memory page is still "allocated" from the OS if there are multiple sub-allocations on that page. I don't know the nitty-gritty details of memory allocations.. I usually stick with C# 😅
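
      To make the failure modes discussed in this thread concrete, a small C-style sketch (whether the commented-out lines crash or silently read stale data is undefined behaviour, and depends on what the allocator did with that memory, much as described above):

          #include <cstdio>
          #include <cstdlib>

          int main() {
              int* p = static_cast<int*>(std::malloc(sizeof(int)));
              if (p == nullptr) return 1;
              *p = 123;
              std::free(p);  // memory handed back to the allocator

              // Use after free: undefined behaviour. It may segfault if the page was
              // returned to the OS, or appear to "work" if the allocator kept the
              // page around for future allocations.
              // std::printf("%d\n", *p);

              // Double free: also undefined behaviour. It typically corrupts this
              // process's own heap bookkeeping rather than another program's memory.
              // std::free(p);
              return 0;
          }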

  • @circuit10
    @circuit10 Год назад +112

    You didn't mention what people normally say is the main disadvantage of garbage collection which is that it pauses the program every now and then (but there are some clever ones that can run on a different thread)

    • @mr_waffles_the_dog
      @mr_waffles_the_dog Год назад +24

      That’s only true for stop-the-world collectors. There are numerous examples of concurrent collectors and interruptible collectors that can meet hard real-time requirements. In principle you can induce pathological behaviour from any GC, but you can do the same with refcounting or malloc.

    • @circuit10
      @circuit10 Год назад +2

      @@mr_waffles_the_dog I did say there were ones that can run on a different thread, I know there are incremental ones too but I guess those still technically stop the program, just for less time (but then so does malloc)

    • @greatwolf.
      @greatwolf. Год назад +5

      yea, the term used for that is 'stop the world' I believe.

    • @RandomBlackBox
      @RandomBlackBox Год назад +4

      Something I noticed playing big modpacks for Minecraft Java edition. It just stops for a few frames from time to time and allocating more ram than needed makes this problem way worse.

    • @9e7exkbzvwpf7c
      @9e7exkbzvwpf7c Год назад

      @@mr_waffles_the_dog "but there are some clever ones that can run on a different thread"

  • @Richardincancale
    @Richardincancale Год назад +44

    Really good account! Underneath there’s another story that’s also interesting - how should the operating system allocate memory - this was a subject of debate in the early days of OS design when memory was much more limited. Donald Knuth analysed three options: first fit (take the first free chunk that’s greater than the size needed), best fit (take the chunk that’s nearest in size to the desired amount) or a binary chop method called the buddy system. Somewhat non-intuitively the first fit works better than best fit for long term stability!

    • @warlockpaladin2261
      @warlockpaladin2261 Год назад +2

      I want to know more about "binary chop".

    • @oblivion_2852
      @oblivion_2852 Год назад

      I'm guessing that binary chop is because it's easier to find two blocks that fit the memory size than just 1

    • @Richardincancale
      @Richardincancale Год назад +11

      @@warlockpaladin2261 One key problem with memory allocation is fragmentation of memory - where you have memory divided up into loads of bits that are too small individually to meet the demands being made. This is why first fit is better than best fit - it leaves pieces that might be useful for something else, whereas best fit tends to leave a load of small useless bits. When memory is fragmented you can try to coalesce adjacent unused pieces to make larger, more useful chunks. For a first fit or best fit this requires traversing a data structure along its whole length, looking for places where the address of the end of one block and the start of another are sequential. Using the buddy/binary chop method, memory is repeatedly divided in two to get a piece just bigger than the size requested. The coalescing is much more efficient though, as it requires traversing a binary tree rather than doing a linear search, which is much faster.
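
      A toy sketch of the first-fit policy described above, over a singly linked free list (the struct and names are illustrative, not Knuth's formulation):

          #include <cstddef>

          struct FreeBlock {
              std::size_t size;   // bytes available in this block
              FreeBlock*  next;   // next free block in the list
          };

          // First fit: take the first block that is big enough. Best fit would
          // instead scan the whole list for the block closest in size.
          FreeBlock* first_fit(FreeBlock* head, std::size_t wanted) {
              for (FreeBlock* b = head; b != nullptr; b = b->next) {
                  if (b->size >= wanted) {
                      return b;  // caller splits off the remainder as a new free block
                  }
              }
              return nullptr;    // no block large enough
          }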

    • @DFPercush
      @DFPercush Год назад +2

      Fragmentation isn't really an issue on 64 bit machines. The OS can stitch together a bunch of 64K physical pages into a contiguous range in user space, and on 64 bit systems the address space will always have room to accommodate. If you look at the implementation of malloc, any request over 128K goes directly to the OS, so large allocations are going to be hardware mapped.
      The discussion of different strategies is interesting though. I've tried my hand at writing my own user mode allocator, where I basically do one large malloc to get a big block of memory from the OS then hand it out as needed. My strategy was, instead of keeping one linked list or tree of free chunks, I divided them into 64 buckets based on their size. Each bucket has the head of a linked list of free space blocks (or a nullptr). Free space will be added to the slot that's the highest power of 2 less than its size. So initially, perhaps only slot 20 has a 1 MB buffer available. As this is chopped up by allocations, it will start putting the remaining free space in the smaller buckets. When an allocation is requested, it looks at the bucket for the lowest power of 2 that's able to hold it, or greater. Link the blocks, make the new free block of leftover space the head of the linked list, and store in the appropriate bucket for its new size. Whenever a block is freed, it tries to consolidate it into a larger block. It looks at the metadata structures immediately before and after its address range. If it sees another free block, it consolidates them into one. The metadata struct also has a first and last marker, and if the newly consolidated free block has both the first and last flags, then the entire range can be returned to the OS. You'll notice that nowhere in that description did I mention searching through a list or a tree of blocks. Everything is constant time, except maybe finding a populated bucket, but that's at least in a cache friendly structure. It just pops the head off the list, pushes the leftovers at the head of another list, bing bang boom.
      P.S. The motivation for this was wanting to run C++ code in web assembly without the overhead of a huge runtime like emscripten. Wasm allows you to grow the size of the heap, which acts as the initial malloc.
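
      The "highest power of 2 less than its size" bucket choice can be computed in constant time with a bit scan; a sketch assuming C++20's std::bit_width (purely illustrative, not the commenter's actual code):

          #include <bit>
          #include <cstddef>
          #include <cstdint>

          // Bucket index = floor(log2(size)), i.e. the position of the highest set
          // bit. A 24-byte block lands in bucket 4 (the 16..31 byte range).
          // Precondition: size > 0.
          std::size_t bucket_for(std::uint64_t size) {
              return static_cast<std::size_t>(std::bit_width(size)) - 1;
          }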

    • @Richardincancale
      @Richardincancale Год назад

      @@DFPercush Agreed, this is really only a problem on machines not using virtual memory, and especially those without extremely large address spaces.

  • @paxrsi
    @paxrsi Год назад +80

    There are also some downsides to garbage collection. In many cases, esp. smaller programs, the user will not notice them. I am maintaining an enterprise application written in Java which needs to run on a machine with 1 TB of RAM (yes, terabyte) and there I did experience some problems. For example, the GC did sometimes run for more than an hour (yes, more than 60 minutes) and all connections to the outside world, including the database, ran into timeouts because of the "stop the world" nature of the GC. That's the reason why the GC has all these fuzzy parameters to get control of those situations, but as a user you are just buying an application and you want to use it, not become an expert in Java GC to run it. It gets even more complicated with all the different JVMs available. The IBM JVM runs differently from the Oracle JVM and the GCs have different parameters. So if someone is able to solve this in the Oracle JVM with some cool parameter settings, you may still struggle with the problem because those parameters simply do not exist in the IBM JVM. Someone might argue to choose hardware and software from vendors other than IBM, but then Java is not write once, run everywhere. In my opinion the GC is a tool to make the life of the programmer easier, but in some circumstances the price is that it makes the life of the user harder. However, the user is the last one who should have to deal with memory management of the application.
    There are also other problems, e.g. you cannot take advantage of CPU memory access optimizations like prefetch.

    • @raterix2
      @raterix2 Год назад

      I'm curious. Is there any reason you would use a machine with 1TB RAM instead of multiple machines with smaller RAM, like 64GB?

    • @robocombo
      @robocombo Год назад +6

      I'm picturing you with a "thousand yard stare" like a combat stress victim

    • @flapjack9495
      @flapjack9495 Год назад +8

      @@raterix2 It's simpler. There's no running network interconnects between the systems and worrying if they're fast enough, building a communications subsystem for your software, etc. You just throw money at the problem. 1 TB sounds like a lot but it's really not - I managed relatively inexpensive ($20K?) systems at a small college that had a quarter terabyte of RAM each and could have taken more.

    • @aenguswright7336
      @aenguswright7336 Год назад +2

      I wouldn't go as far as to say that GC makes life harder for the user. Some programs would never have been written if it wasn't for java, javascript, go etc. However I would say that there are some applications where GC just isn't appropriate.

    • @sjzara
      @sjzara Год назад +3

      The GC is vital these days. Manual management of shared memory in multi-threaded applications is a total nightmare.
      As for the pause problem, commercial solutions for this have been available for a long time: check Azul’s products.
      These days the ZGC in standard Java solves these problems.

  • @coreymartin9630
    @coreymartin9630 Год назад +37

    Love that laptop, I've got one as well (It's called a Framework and is designed to be customizable and repairable)

  • @mahdizarepoor8964
    @mahdizarepoor8964 Год назад +63

    I literally love your channel: the way you create videos, the instructors and the topics you make videos about are just awesome. Thanks for your service to the community.

  • @MladenMijatov
    @MladenMijatov Год назад +25

    So nice seeing a Framework laptop in the wild. Good job, man! Excellent choice.

  • @jvcmarc
    @jvcmarc Год назад +51

    this was really fun, i always wondered how garbage collectors work but never searched for it
    could you guys maybe do a video on different memory management strategies? like Rust's borrow checker?

  • @scbtripwire
    @scbtripwire Год назад +17

    15:56 Note: Turning off and on again is the time tested workaround for memory leaks, but definitely not a fix in itself, that's up to the developer. (I know this is what he meant, I'm just clarifying for others who might not know better.)

    • @alst4817
      @alst4817 Месяц назад

      Ooh that’s cool. Presumably it’s because of bad memory management that a lot of program hangs happen (along with loops that never end, etc)? Is that why turning off and on again works?

  • @janblaha2023
    @janblaha2023 Год назад +5

    really happy to see the framework laptop

  • @danielbrenot5173
    @danielbrenot5173 Год назад +10

    I immediately noticed the framework laptop. Nice to see other people using them, especially in more popular videos like this one!

  • @superjimnz
    @superjimnz Год назад +7

    Never is a strong word; there are many programs where you do know exactly what you'll need and never perform dynamic allocation, or at least only on the stack... no heap allocation. This is especially true in embedded programming.

    • @boagz3
      @boagz3 Год назад

      I would also argue that in many programs there are plenty of situations where you also know exactly how much memory you need or at least the upper bounds of the memory needed. This happens all the time in games for example

    • @harshsason3725
      @harshsason3725 5 месяцев назад

      Yup
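
      A minimal sketch of the "known upper bound, no heap" style this thread describes, e.g. for a game or embedded target (the names and the 256 cap are made up for illustration):

          #include <array>
          #include <cstddef>
          #include <optional>

          struct Enemy { float x, y, health; };

          constexpr std::size_t MAX_ENEMIES = 256;  // the chosen upper bound

          // All storage is fixed at compile time: nothing here calls malloc or new.
          std::array<Enemy, MAX_ENEMIES> enemies{};
          std::size_t enemy_count = 0;

          // Returns the slot used, or nothing once the budget is exhausted.
          std::optional<std::size_t> spawn_enemy(float x, float y) {
              if (enemy_count == MAX_ENEMIES) return std::nullopt;
              enemies[enemy_count] = Enemy{x, y, 100.0f};
              return enemy_count++;
          }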

  • @navalenigma
    @navalenigma Год назад +15

    32768 is 32k, 32678 isn't. I'm a right pedantic arse for pointing that out on this excellent video. A typo under pressure of typing on a live video. Oh I feel grubby pointing this out. I'll go write some code using GOTO as punishment.

    • @eternalmec3777
      @eternalmec3777 3 месяца назад

      Could you explain what you mean? I'm having problems understanding the difference that would not allow 32678 to be designated as 32k.

    • @navalenigma
      @navalenigma 3 месяца назад

      @@eternalmec3777 In computing, numbers and memory addresses are stored in binary. In 15 bits, 111111111111111 is the maximum value, which is 32767 in decimal. As 000000000000000 is also a memory address, this gives 32768 possible combinations, and this is referred to as 32K. In normal language 32k would be 32000, but that's a decimal system; this is 32K using a binary system, and converting to decimal gives that slightly odd value.
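
      Or, as a one-line sanity check (just the arithmetic, nothing from the video):

          // 15 bits of address give 2^15 distinct values: 0 through 32767.
          static_assert((1 << 15) == 32768, "32K in the binary sense");
          static_assert((1 << 15) - 1 == 32767, "largest 15-bit value");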

  • @tan.nicolas
    @tan.nicolas Год назад +10

    Computerphile is such a gem! cheers from Chile and thanks for all the top content you share!

  • @DennisKovacich
    @DennisKovacich Год назад +6

    My first experience with garbage collection was in AppleSoft BASIC on an Apple II+ circa 1982. I was working on a program that read text from a file and put it onscreen, but it had to find where words ended and insert returns so no words were split at the end of one line and the beginning of the next. AppleSoft programs sit at the bottom of RAM, and numeric variables are stored from the end of the program up while strings are stored from the top of RAM down. As strings are changed, the old value stays where it was and the new value is added. This program (which my boss had written) ran fine for a while, but would periodically pause for a couple minutes while producing a screen of text. It turned out AppleSoft had run out of RAM, and was moving all the current string values back to the top to clear out the old values that weren’t being used anymore. And the way it was written, building a word one character at a time until it found a space, there were A LOT of old strings! We solved the problem by using AppleSoft’s command to trigger garbage collection manually at the end of a page before printing “Press RETURN to continue.” That was fun to figure out, considering I barely even knew what a computer was two years earlier.

    • @kc9scott
      @kc9scott Год назад +1

      Another Applesoft user here. That type of garbage collection also gets rid of fragmentation, something not discussed in this video. It relocates all the existing objects to be together at one end of memory, which is why it tended to take so long.

    • @DennisKovacich
      @DennisKovacich Год назад

      @@kc9scott, well, when we started doing it manually, it took a lot less time, presumably because there was less to move.

    • @kc9scott
      @kc9scott Год назад +1

      @@DennisKovacich Same experience here. You quickly learn to routinely call the garbage collector as a preventive measure. Even then, it would take some time, like a second or two.

    • @DennisKovacich
      @DennisKovacich Год назад

      @@kc9scott, that’s why we put it at the end of a page/screen. A few seconds with a full screen to read was a lot better than a few minutes halfway through displaying it! I believe ProDOS had its own garbage collection that was faster than AppleSoft’s when it came out later.

  • @hasan_haja
    @hasan_haja 11 месяцев назад +3

    On a completely unrelated point to garbage collection, I love seeing Framework laptops and Neovim in the wild!

  • @Syntax753
    @Syntax753 Год назад +2

    Brilliant! Thanks for covering a subject misunderstood by so many devs these days

  • @msclrhd
    @msclrhd Год назад +16

    Garbage collection is useful for memory management, and can also be helpful w.r.t. performance if allocating a lot of objects and releasing them at the same time. It is less helpful for other resources (database connections, file handles, etc.). There are techniques to help make resource management easier in garbage collected languages (closeable objects, scope blocks/scoped try, etc.), but I prefer the C++ approach of closing the resource in the destructor, which happens when the object goes out of scope (typically at the end of the function). It would be interesting to see if a programming language has both in a way that is easier to use -- I quite like Kotlin's "use" extension method, but it can get complicated with using multiple resources in a single function.

    • @NuclearCraftMod
      @NuclearCraftMod Год назад +2

      As others have mentioned in other comment threads, I recommend giving Rust a go. It can feel a bit awkward and restrictive at first, but it’s worth the time.

    • @AntonyMBenedict
      @AntonyMBenedict Год назад +4

      I agree. I do find C++ RAII and reference counting to be much better and predictable compared to GC. That is why it is still preferred in computationally intensive applications.

    • @NuclearCraftMod
      @NuclearCraftMod Год назад +1

      @@AntonyMBenedict When you don't need to be completely optimal though, a GC allowing you to not have to think about it, even at the relatively simple level of RAII, can be quite nice :P

    • @storczykorg
      @storczykorg Год назад

      Yeah, but there's a problem when the destructor has to do IO and can block a thread or crash. C# has a "using" statement/block which marks when the "destructor" executes, or "await using" which does the same in an asynchronous context.
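
      A minimal sketch of the destructor-based cleanup being discussed, applied to a non-memory resource (an illustrative wrapper and file name, roughly what C#'s using or Kotlin's use achieve by other means):

          #include <cstdio>

          class File {
          public:
              explicit File(const char* path) : f_(std::fopen(path, "r")) {}
              ~File() { if (f_) std::fclose(f_); }  // closed when the scope ends

              // Non-copyable, so two owners never close the same handle.
              File(const File&) = delete;
              File& operator=(const File&) = delete;

              std::FILE* get() const { return f_; }

          private:
              std::FILE* f_;
          };

          void read_config() {
              File cfg("config.txt");  // acquire
              if (!cfg.get()) return;  // open failed; nothing to release
              // ... read from cfg.get() ...
          }                            // release runs here, even on the early return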

  • @greatwolf.
    @greatwolf. Год назад +4

    Would have liked to see him go into the RAII techniques used in modern C++ and how they contrast with the GC approach to memory management.

  • @onlyeyeno
    @onlyeyeno Год назад +3

    Thanks for another great "little video" on a "giant topic" :) And I can only agree with all the other commenters who have "said" that you have (another) great presenter and explainer here!
    And I would also love to see him present more topics that he finds interesting.
    Best regards.

  • @TheBigBemaster
    @TheBigBemaster Год назад +23

    You guys should speak to an expert about Rust memory management. It’s a fascinating way to do memory management without traditional GC

    • @DanCojocaru2000
      @DanCojocaru2000 Год назад +3

      I mean, it’s just reference counting except limiting the count to 1.

    • @Aidiakapi
      @Aidiakapi Год назад +14

      ​@@DanCojocaru2000 Not only is that technically seen inaccurate (it's not a limit to 1, each value has exactly 1 owner at any time, and therefore no references need to get counted), it's also stating the problem, preventing 0 or more than 1 "owners" that free a resource (in other words, memory leaks or double frees), as if that's the solution used.
      The "novel" technique in Rust isn't that it keeps a single owner though (C++ does exactly the same with its RAII), it's the way it tracks object lifetimes, and the way it (dis)allows aliasing and unchecked mutation.

    • @zactron1997
      @zactron1997 Год назад

      @@DanCojocaru2000 Yeah that's about as unhelpful a statement as saying a computer is just angry sand. Rust's lifetime system is (to my knowledge) entirely unique and non-trivial to reproduce in another language (especially at compile time!). Combined with the borrow checker (unlimited read-only references XOR one mutable reference), and the ownership system (similar to RAII from C++), Rust is able to provide basically the highest quality statically analyzed code you could write in C/++, but as a basic feature of the language guaranteed to be performed.

  • @Luredreier
    @Luredreier Год назад +33

    Sounds like an excellent time to address Rust's memory management.

    • @djazz0
      @djazz0 Год назад +5

      Or Nim’s ARC/ORC

    • @Markus_included
      @Markus_included 4 месяца назад

      Or especially PHP's memory management, which uses a very interesting algorithm for clearing reference cycles

  • @BiggFanDDD
    @BiggFanDDD Год назад +4

    Great video! I've always wondered how garbage collection worked. Thanks!

  • @marsgal42
    @marsgal42 Год назад +12

    I've spent a lot of time tracking down memory leaks in C programs. I'm currently doing some C# development. With user interface code, databases and REST APIs I'm spraying objects and memory all over the place, but it doesn't (usually...) get confused.

  • @slayerofyounglings66
    @slayerofyounglings66 Год назад +2

    You all should check out CJDNS and how Delisle sets up a fairly complex and mostly accurate inheritance-based memory system. It even does a decent job at error handling. It's mostly in C and python with more rust being added over time.

  • @-.._.-_...-_.._-..__..._.-.-.-

    I'm a hobbyist game designer using Unity. When I switched to C# from C++ years ago, I thought memory management was something I'd never need to worry about again. Then I ran into lag caused by garbage collection and learned memory management was still my responsibility. It's simple, though: you just need to reduce the amount of garbage created. With C#, garbage is created every time the _new_ keyword is used (except with structs). So, in regard to efficiency, best practice is to keep the _new_ keyword (other than with structs) outside of loops and frequently called methods.
    It's cool to see how garbage collection works behind the scenes. I'm always learning. If anyone has any corrections or tips, please clue me in.

  • @jeremiahblum7833
    @jeremiahblum7833 Год назад +1

    I was just studying this in my Java course and this answered all my questions, thanks!

  • @olivierbegassat851
    @olivierbegassat851 Год назад +2

    I enjoy this guy's style

  • @_azterisk
    @_azterisk 5 месяцев назад

    Finally understand the basics of memory leaks
    Great video, thanks

  • @dokie8260
    @dokie8260 Год назад +2

    There is also Rust, which prevents those kinds of errors (no manual malloc/free needed) and doesn't have a garbage collector.

  • @lilwilly5839
    @lilwilly5839 7 месяцев назад

    This is the most helpful video I've seen in a long time, thank you

  • @7guitarlover
    @7guitarlover Год назад

    I wish Laurence Tratt was my teacher! Such a simple and lucid explanation.

  • @Waschlack
    @Waschlack Год назад

    Really interesting topic and awesome explanation!
    I personally only knew about the ref counter and didn't imagine there to be such an interesting and better option out there

  • @blenderpanzi
    @blenderpanzi Год назад +5

    Another problem is that the garbage collector needs to stop the program while it is marking and sweeping. There are optimizations on that, so that it will maybe only stop on a per thread basis and only for a very short amount of time, but it still can introduce tiny stutters in certain very high performance real time applications, which is why some engineers don't like to use such languages for such applications (or at least not for certain parts of such applications).

    • @rudiklein
      @rudiklein Год назад +1

      I was wondering about that ("what if something changes between marking and sweeping?") and you answered my question.

  • @ecdhe
    @ecdhe Год назад +6

    Python is using the reference counting method for its objects, but is using a GC for orphan reference cycles.

    • @mr_waffles_the_dog
      @mr_waffles_the_dog Год назад

      Yeah, I came to say: isn’t Python refcounts with a cycle breaker?

    • @barneylaurance1865
      @barneylaurance1865 Год назад

      I think PHP does the same - except that most PHP programs are so short-lived that there's no need to actually run the GC to collect cycles. They must all be collected automatically at the end of the PHP script (even if the actual process is continuing, PHP objects are not traditionally shared between separate script invocations)

    • @capability-snob
      @capability-snob Год назад

      That's a great description of cpython. Other python implementations often do variations on mark&sweep.

    • @barneylaurance1865
      @barneylaurance1865 Год назад

      @blot Yes, PHP is mostly run inside a web server, but in the most common setup the web server launches a separate invocation of the PHP 'script' in response to each web request from the browser. When that request is dealt with, either PHP's shutdown routine or the web server that PHP is running inside will, I think, free all memory allocated by that script. Objects are not shared from one script invocation to another.

  • @bishan_8617
    @bishan_8617 Год назад +2

    I respect the C with vim on the framework

  • @owlmostdead9492
    @owlmostdead9492 Год назад +5

    +1 for framework

  • @YTHandlesWereAMistake
    @YTHandlesWereAMistake Год назад +8

    Noticed they're using a framework laptop. Props to them, go-go pro-repair community

  • @a_maxed_out_handle_of_30_chars
    @a_maxed_out_handle_of_30_chars Год назад +1

    wow, simple and to the point

  • @pokemaniac3977
    @pokemaniac3977 Год назад +3

    Thanks for the video

  • @TheUglyGnome
    @TheUglyGnome Год назад +10

    1:18 Yes. You are really simplifying it. "malloc" and "free" are not an interface to the operating system, but to libc. libc then does its own housekeeping and calls OS services as required (the "brk" system call on Unix-like systems). And even the OS service is not allocating chip memory in any modern OS, but virtual memory.

    • @capability-snob
      @capability-snob Год назад +1

      Libc is traditionally considered "part of the operating system" in the same way the shell is. Each libc bakes in some expectations on the kernel and system services in use. It's a fairly modern concept to have more than one libc target the same system.

    • @stensoft
      @stensoft Год назад +1

      And in modern operating systems, a program can't free memory of another program, programs going haywire are detected and dealt with (e.g. by refusing to give it more memory), Android even can restart the program while trying to look like nothing happened. But these are still only tools to limit the impact when things go wrong.

    • @Aidiakapi
      @Aidiakapi Год назад +1

      Everything in this video is simplified to a trivial point, but if you were to go in-depth on every point, this video would be at least dozens of hours long.

    • @steve_j_grundon
      @steve_j_grundon Год назад

      *Yes* ... I also had to grit my teeth and try to just "let it go", for the sake of brevity and "clarity of concept" (for the average viewer). Argh! It hurt. 😀😀

  • @dopplereffect7325
    @dopplereffect7325 Год назад

    Thanks, I like the way you explained everything. C was my first programming language; we only talked about memory when we were dealing with pointers and variables. I had some understanding, but this is new knowledge. A million thanks.

  • @ShALLaX
    @ShALLaX Год назад +1

    Non-determinism is usually a bigger problem than messing up forgetting a root. If you want to write high-performance software, you can usually see random dips in performance when garbage collection occurs. This is because the garbage collection routine is usually blocking.

    • @darkengine5931
      @darkengine5931 Год назад

      We avoid it in my industry (real-time computer graphics) partially for that reason. The other reason is that it's actually very easy to create logical memory leaks using GC. For example, a scene might have a shader that wants to reference a texture so the programmer naively stores a strong reference to it. Then the user requests to remove the texture from the scene at which point the texture memory should be freed (these could be ultra high-res HDR textures with 32-bit channels taking hundreds of megabytes of DRAM), but it's not freed because that particular shader is still storing a strong reference to it and failed to also handle that user event, even though the scene properly handled the event and removed the texture reference. Then the memory for that texture only gets properly freed when the user removes not only the texture but also the shader, when merely removing the texture should have sufficed.
      In a language like C, there would be no memory leak in this context. Instead we get a dangling pointer crash, and that might sound worse than a memory leak but in real-time applications, the hard crash is often preferable because logical leaks are disastrous for performance and also extremely difficult to detect (they're like stealth bugs eluding our unit and integration tests and we don't have tools like valgrind to point them out to us). Meanwhile a segfault is trivial to detect and reproduce consistently, especially with sound testing procedure.

  • @ZedaZ80
    @ZedaZ80 Год назад +1

    The syntax highlighting and color scheme is so pretty 😍

  • @martijn3151
    @martijn3151 Год назад +1

    As a concept, GC works wonderfully. And in a lot of practical applications as well. But as with most concepts, there is always a downside. And with GC there is a big one: speed. When speed and frame rates are to be considered, GC is usually terrible. Your game runs fine and super smooth and all of a sudden, bam! GC kicks in and gone is your frame rate.

  • @elliotmacneille3446
    @elliotmacneille3446 Год назад

    I would have liked to see some mention of the Big O complexity of garbage collection and why we expect programs using malloc and free to be fundamentally faster than programs with automatic garbage collection. This can be a major factor in choosing programming languages; languages that allow you access to 'bare metal' memory management are typically better options for high-performance programs.

  • @silverchairsg
    @silverchairsg Год назад +2

    Speaking of the turning off and on trick, here's a joke.
    There are four engineers traveling in a car; a mechanical engineer, a chemical engineer, an electrical engineer and a computer engineer. The car breaks down.
    “Sounds to me as if the pistons have seized. We’ll have to strip down the engine before we can get the car working again”, says the mechanical engineer.
    "Well”, says the chemical engineer, “it sounded to me as if the fuel might be contaminated. I think we should clear out the fuel system.”
    “I thought it might be a grounding problem”, says the electrical engineer, “or maybe a faulty plug lead.”
    They all turn to the computer engineer who has said nothing and say: “Well, what do you think?”
    He thinks for a moment. “I know!" he says. "Let's all get out of the car and get back in again.”

  • @fahnub
    @fahnub 9 месяцев назад

    Explained in simplest way possible

  • @jongeduard
    @jongeduard Год назад +4

    Not mentioned in the video, but the concept behind all those tracing garbage collectors of Java and C# is becoming somewhat obsolete as well (even though they are amazing technologies). What Rust 🦀 does is kind of go back to the C/C++ way, but with everything now compiler generated, combined with, more importantly, a ton of language rules: memory ownership and mutability rules.
    The most amazing part is that Rust solves 2 categories of problems at the same time: memory management and concurrency management.
    Just think about the double free example (which can also be caused by 2 threads using the same block of memory). This won't bite you in Rust at all, because you simply can't make that mistake, thanks to all the borrowing rules.
    The people who designed the Rust language are very, very smart. Amazing programming language.

  • @casperes0912
    @casperes0912 Год назад +4

    Freeing something twice is not going to behave like that on any modern OS with an MMU in the hardware which covers pretty much any PC/Mac and phones and yeah. Might cause problems if you run your programs on your dishwasher that might not have an MMU but otherwise it's not an issue. At least in userspace. Only you own your memory space (another disclaimer, plugins, interpreters and such can be exceptions)
    ARC also automatically manages reference counting. You don't manually need to think about incrementing and decrementing your counter (this is in languages like Swift, Objective-C and apparently based off of this video, sometimes even Python). You can still have issues with cycles but there are ways of dealing with that. In Swift and Objective-C for example you can mark references as weak or unowned to not increment the reference count and prevent cycles. An example of use is if you hold a reference to yourself. This should probably be a weak reference since otherwise when can you free yourself? Unowned is essentially the same but with fewer protections against code crashing if you do it wrong.
    ARC is often faster than garbage collection so while performance was mentioned as a downside for reference counting, it's certainly also a downside for garbage collection and more often than not worse for GC
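
    The weak-reference idea has a direct C++ analogue in shared_ptr/weak_ptr; a sketch of how a weak back-reference breaks the cycle (same shape as the Swift/Objective-C case, just different spelling):

        #include <memory>

        struct Node {
            std::shared_ptr<Node> next;  // strong: keeps its target alive
            std::weak_ptr<Node>   prev;  // weak: does not bump the reference count
        };

        int main() {
            auto a = std::make_shared<Node>();
            auto b = std::make_shared<Node>();
            a->next = b;  // a keeps b alive
            b->prev = a;  // back-reference is weak, so there is no cycle of owners

            // If prev were a shared_ptr, a and b would keep each other alive and
            // neither would ever be freed. With weak_ptr, both are destroyed when
            // a and b go out of scope here.
            return 0;
        }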

    • @retropaganda8442
      @retropaganda8442 Год назад +3

      malloc/free doesn't interact with the OS and MMU pages for every "bit" of allocation/deallocation otherwise it would be too slow. So often you can still access memory that was freed.

    • @strayling1
      @strayling1 Год назад +1

      @@retropaganda8442 Only within the process which owns the page, so it would require multiple threads sharing the same PID.

    • @friggindoc
      @friggindoc Год назад

      What is an Access violation?

  • @gfrewqpoiu
    @gfrewqpoiu Год назад +10

    Maybe Nim would actually be an interesting case study here, since they have multiple switchable Garbage Collectors or manual memory management and it transpiles to C code.

    • @PouriyaJamshidi
      @PouriyaJamshidi Год назад

      Nim indeed is special

    • @capability-snob
      @capability-snob Год назад

      Tratt works on PyPy, which comes with loads of collectors and collector options. It's a GC engineer's abstract dream.

    • @FrankHarwald
      @FrankHarwald Год назад

      Indeed. Recent Java, Python versions have these too, as well as many C++ dynamic memory management libraries, & Nim reuses what its C/C++ or JavaScript back-end compiler provides & exports as memory allocators through compile flags.

  • @patdbean
    @patdbean Год назад +4

    Many early digital CCTV systems would reboot once every 24 hours (the user could pick the time) so that the system would not run out of memory.

  • @vincentouwendijk3746
    @vincentouwendijk3746 Год назад

    Great video, which explains some annoying and hard to explain problems I experienced in the past :)

  • @TangTony-q3h
    @TangTony-q3h 7 месяцев назад

    Use Delphi for programming: you don't need to manage the memory yourself, it can manage it automatically. But the video explains how it works, which is good.

  • @michamaj6290
    @michamaj6290 Год назад +1

    Very nice Framework laptop!

  • @duxoakende
    @duxoakende Год назад +2

    Finally a video on this topic! Been waiting so long!

  • @PaulWilkinsNZ
    @PaulWilkinsNZ Год назад +5

    1:48 It doesn't really matter for his example, but 32768 is the figure that would normally be used instead of 32678

    • @jhonbus
      @jhonbus Год назад +2

      Haha, that one sprung out at me too. I said "You spelled 32K wrong!" before realising "spelling" wasn't quite the right word.

  • @yogiturtleseraph8208
    @yogiturtleseraph8208 Год назад

    The simplest mechanism used for memory management is to open/close a scope and declare variables in that scope.
    Then another mechanism is to call a function.
    Most programs could in theory be written using only these two mechanisms, although it might be a bit clunky; and a continuation mechanism doesn't hurt either.

  • @marcelijankowski9593
    @marcelijankowski9593 Год назад

    Great explanation!

  • @pierreabbat6157
    @pierreabbat6157 Год назад +3

    Can you explain how memory management works in Haskell (it's GC, but with immutable data) and Rust?
    How does a garbage collector know what in an allocated block of memory is a pointer? In C++, pointers, 8-byte integers, and 8-byte reals can be in any order, and a bunch of bits could be valid as more than one of them.

    • @bluesillybeard
      @bluesillybeard Год назад

      Rust doesn't manage memory at runtime using a garbage collector or anything (unless the programmer implements it themself)
      The Rust compiler enforces some extra rules that make it harder to accidentally forget something.
      Languages with a garbage collector add information about which variables are pointers into the compiled output, so the garbage collector knows what's a pointer and what isn't.
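
      A rough sketch of that "record which fields are pointers" idea: per-type metadata listing pointer offsets, which a precise collector walks during marking (entirely illustrative; real runtimes encode this far more compactly):

          #include <cstddef>
          #include <vector>

          // Emitted per heap type by the compiler: size and where its pointers live.
          struct TypeInfo {
              std::size_t              size;
              std::vector<std::size_t> pointer_offsets;  // byte offsets of pointer fields
          };

          struct ObjectHeader {
              const TypeInfo* type;
              bool            marked;
          };

          // Mark phase: follow only the fields the metadata says are pointers, so an
          // 8-byte integer that happens to look like an address is never followed.
          void mark(ObjectHeader* obj) {
              if (obj == nullptr || obj->marked) return;
              obj->marked = true;
              char* payload = reinterpret_cast<char*>(obj + 1);  // data sits after the header
              for (std::size_t off : obj->type->pointer_offsets) {
                  mark(*reinterpret_cast<ObjectHeader**>(payload + off));
              }
          }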

    • @capability-snob
      @capability-snob Год назад

      The original paper on the Spineless Tagless G-Machine outlines the original memory layout for Haskell: the mark procedure for a function is referenced from each thunk. There have been some cool changes since then, but it's a great read.

    • @FrankHarwald
      @FrankHarwald Год назад

      Haskell is a different beast entirely. Not only is (safe) Haskell a functional language, it also lazily evaluates everything by default. This means that nothing, except for references to functions & definitions (called thunks), is evaluated (computed) just because it is used anywhere, _until_ something also requires the result, recursively from the outside inwards - and not just for memory management but for all function evaluations (computations), unless some function is explicitly marked as evaluated, is a bang pattern, or uses strict data types. Getting that program semantics to compile to a (stateful) machine-executable form requires different kinds of compiler abstractions than most imperative languages (C/C++/Java/Python): in particular the Glasgow Haskell Compiler is based on an intermediate language/abstract programming model called "The Spineless Tagless G-machine", which works differently from the tree- or DAG-based IL used in compilers for imperative languages and does even more things & transformations internally than imperative compilers do. This means it's really difficult to say how a Haskell program relates to the memory allocation operations of both the operating system and the runtime libraries it's running on, except for a few synthetic cases, until it is actually running.

    • @FrankHarwald
      @FrankHarwald Год назад

      Rust's dynamic memory management (which is what this video is about; heap allocation, Box-ed in Rust speak) doesn't work so differently from how heap allocators in other languages work. What Rust does differently from almost all other languages is its static handling of memory, that is, when memory usage is fully known at compile time.

  • @rob876
    @rob876 Год назад +1

    Minor point - 1:57 32678 bytes is not 32 KB, 32768 is.

  • @PixelOutlaw
    @PixelOutlaw Год назад +1

    Like most ideas in computing, the origin is lost to modern trends. But we can look back to Lisp for the first garbage collectors. And quite a few other things.

  • @tablettablete186
    @tablettablete186 Год назад +2

    Now we need one with Reference Counting! (5:20)

    • @DanCojocaru2000
      @DanCojocaru2000 Год назад

      The compiler inserts hidden code that increases or decreases the count. That’s basically it.
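
      Written out by hand, that hidden code looks roughly like this (a toy sketch; real reference-counting runtimes also deal with atomicity, autorelease pools, etc.):

          #include <cstdlib>

          struct RefCounted {
              int refcount;
              // ... payload ...
          };

          void retain(RefCounted* p)  { if (p) ++p->refcount; }
          void release(RefCounted* p) {
              if (p && --p->refcount == 0) std::free(p);  // last owner frees it
          }

          void example() {
              auto* obj = static_cast<RefCounted*>(std::malloc(sizeof(RefCounted)));
              if (obj == nullptr) return;
              obj->refcount = 1;   // the variable 'obj' holds one reference

              RefCounted* alias = obj;
              retain(alias);       // inserted by the compiler when the alias is created

              release(alias);      // inserted when 'alias' goes away
              release(obj);        // inserted when 'obj' goes away -> count hits 0, freed
          }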

  • @juansolo1617
    @juansolo1617 Год назад

    Memory leaks don't happen if you're aware of the control pathways and make sure to release unused memory. The overhead of the garbage collector isn't necessary, really, other than Rapid Application Development where someone's trying to pump out code quickly without thinking about the hardware aspect. In a way it can be faster than heap allocation but overall I think the heap wins out for efficiency. Often times with GC you need to use handles to memory, locking memory before local use. With direct allocation and management that's not a thing... you just have a direct real pointer to data, and there's no sweeping cycles (or calculations made for initiating the sweep).

  • @carpediemcotidiem
    @carpediemcotidiem 5 месяцев назад

    00:00 Garbage collection automates memory management
    02:12 Memory management can go wrong in several ways
    04:14 Automatic memory management simplifies programming
    06:13 Garbage collection is a common automatic memory management technique.
    08:29 Garbage collection algorithms automatically deallocate memory.
    10:32 Garbage collection is a process of freeing up memory
    12:33 Dynamic memory allocation allows programs to allocate memory as needed.
    14:24 Garbage collectors optimize memory usage
    Crafted by Merlin AI.

  • @blablubb1234
    @blablubb1234 Год назад

    And then there's Rust, which has neither a garbage collector nor reference counting, but still does safe memory management for you.

  • @edwardpurwanto7787
    @edwardpurwanto7787 Год назад +3

    I feel like this video really needed an example where reference counting fails but mark and sweep works.

    • @mandolinic
      @mandolinic Год назад

      Create two objects, each one pointing at the other. Then set both root pointers to null. Both objects should be swept away but because each one is being referenced by the other their ref counters are 1 instead of 0, so neither gets cleared away. 🙄
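
      Sketching that exact situation in code: with reference counting the two objects keep each other alive forever, while mark-and-sweep starts from the roots, never reaches them, and reclaims both (a toy collector for illustration, not the video's implementation):

          #include <vector>

          struct Obj {
              Obj* other = nullptr;  // the mutual reference forming the cycle
              bool marked = false;
          };

          void mark(Obj* o) {
              if (o == nullptr || o->marked) return;
              o->marked = true;
              mark(o->other);        // follow outgoing references
          }

          // heap: everything ever allocated; roots: what the program can still
          // reach directly (stack variables, globals, ...).
          void mark_and_sweep(std::vector<Obj*>& heap, const std::vector<Obj*>& roots) {
              for (Obj* r : roots) mark(r);                   // mark phase
              for (Obj*& o : heap) {
                  if (o == nullptr) continue;
                  if (!o->marked) { delete o; o = nullptr; }  // sweep: unreachable
                  else            { o->marked = false; }      // reset for next collection
              }
          }

          int main() {
              std::vector<Obj*> heap;
              Obj* a = new Obj;  Obj* b = new Obj;
              heap.push_back(a); heap.push_back(b);
              a->other = b;  b->other = a;   // cycle: both refcounts would stay at 1

              std::vector<Obj*> roots;       // both "root pointers" set to null
              mark_and_sweep(heap, roots);   // neither object is reachable -> both freed
              return 0;
          }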

  • @MiniArts159
    @MiniArts159 Год назад +1

    Shoutouts to the Framework laptop!!!!

  • @ecdhe
    @ecdhe Год назад +6

    I'm not sure that malloc() and free() are directly interacting with the operating system, but rather the C runtime which is implementing these functions. So a free() may not change anything as far as the OS is concerned, but the C runtime keeps it for any future allocation.

    • @Frugl1
      @Frugl1 Год назад

      This is correct. malloc can call segment break (sbrk) if it needs to extend the virtual memory segment of the heap. Hence malloc might return a pointer to non-zeroed memory, despite the OS only exposing zeroed memory when providing reallocated memory to another process.

    • @framegrace1
      @framegrace1 Год назад

      malloc and free do not interact with the OS in normal usage. malloc just asks the OS for more memory when it runs out of it.

    • @blenderpanzi
      @blenderpanzi Год назад +2

      That's why he said he's simplifying.

    • @blenderpanzi
      @blenderpanzi Год назад

      @@Frugl1 Also, some malloc/free implementations use mmap instead of sbrk.

    • @pierreabbat6157
      @pierreabbat6157 Год назад

      What happens in double free is usually not that another process overwrites the memory, but another thread does, or the same thread allocates a block starting at the same address. The next block pointer, which is just before the block, is overwritten with another next block pointer, or with garbage, and freeing it corrupts the heap or crashes the program.

  • @jakoon_the_manic
    @jakoon_the_manic Год назад

    Great video! Thanks for it.

  • @cgazzz
    @cgazzz Год назад +1

    Didn't really mention the C++ way of using movable unique_ptr lvalues and rvalue references. No reference counting, no explicit freeing, as long as data structures are acyclic.

  • @Gooberslot
    @Gooberslot Год назад +2

    Garbage collection definitely has a performance impact though.

  • @sixpooltube
    @sixpooltube Год назад

    God bless you for this video and channel

  • @manon-gfx
    @manon-gfx 8 месяцев назад

    Another downside of garbage collection is performance issues in real-time applications.
    In games you can see frames that take a couple of milliseconds longer due to the garbage collector, resulting in frame spikes. This is why most games still use manual memory management.
    Although you can see a shift toward reference-counted memory management happening in the less performance-critical parts of the code.

  • @Bozebo
    @Bozebo Год назад +2

    Hang on just a sec, is that a Frameworks laptop?

  • @quillaja
    @quillaja Год назад +1

    Is that a Framework laptop? Nice!

  • @laughingvampire7555
    @laughingvampire7555 7 месяцев назад

    And he doesn't talk about the really most common garbage collection scheme, arenas. This is used with PHP, the most dominant backend language on the internet, and Go is trying them. Erlang has independent runtimes per green thread, so when one of those dies all its stack is collected; so technically, it is a type of arena that does reference counting.

  • @RonJohn63
    @RonJohn63 Год назад

    0:39 You *can,* but then that means you must manage that big pile of memory all by yourself (re-implementing most of the memory management tasks which the kernel already provides). That, of course, is time-consuming and even more bug-prone.
    3:46 In MS-DOS (under something like DesqView) or Windows 9x and maybe OS/2, but certainly not Unix/Linux, Windows NT or other Real operating systems of the past 45 years.

  • @seedmole
    @seedmole Год назад

    My last memory leak was from leaving the text viewing window open in Pure Data when using a nonquantized midi looper I built. Didn't expect it to, but apparently any memory used to store text is retained once that text is replaced if the window for viewing it is still open. Found myself staring at several gigabytes from just kilobytes of text data.
    And yeah, always turn off and on again. Last night I booted up all my synths too quickly (my retrospective take) and one of them was responding veeery strangely to midi signals -- somehow it was behaving as if it was receiving Control Change and Program Change signals when I was only sending Note, Pitch Bend, and Modulation Wheel signals. Sure enough, a quick off-and-on-again cleared that right up (though not until after I checked some other things first... by turning them on and off).

  • @reddcube
    @reddcube Год назад +7

    I like that he is using a Framework laptop

  • @krazymir
    @krazymir Год назад

    Super cool laptop! Go Framework!

  • @NoahSpurrier
    @NoahSpurrier Год назад

    When I was a C programmer I used static arrays whenever possible. It wasted a bit of memory, but my code never had memory leaks.

  • @pdform
    @pdform Год назад

    They should definitely make a video about RAII for contrast

  • @tristanwegner
    @tristanwegner Год назад +3

    I get that the operating system cannot free memory a program claimed, because it has no insight into what the program wants to do with it. But what I don't understand is how the OS can allow program A to wrongly "free" memory that now belongs to program B. Isn't the OS supposed to say "sorry A, this memory is not allocated to you right now, so you have no rights to it"? Isn't that one of the central tasks of an operating system?

  • @konga8165
    @konga8165 Год назад +1

    This is why Rust is so amazing

  • @lukaswerner4390
    @lukaswerner4390 Год назад +5

    What program was used to draw on the screen?

  • @FrederikFalk21
    @FrederikFalk21 Год назад

    1:40 “I always forget the bits”
    Talking about computer memory 😆

  • @sashapetrenko74
    @sashapetrenko74 Год назад +2

    Great video, as always! I'd like to get a better grasp of why developers over time have had a love-hate relationship with GCs. When I learned about it, garbage collection was almost a naughty phrase, but it seems that even problems arising from GCs come from developer abuse just as much as with manual memory management. I see GCs as a tool, just like anything else, that has its uses depending on the application.

    • @dylan2452
      @dylan2452 Год назад +1

      I think that because garbage collection is so tied to the language, it’s not a tool you can choose to use or not use. If you don’t have it you wish you did at times, but if you do you end up fighting it sometimes.

  • @laughingvampire7555
    @laughingvampire7555 7 месяцев назад

    No, it isn't difficult to do it manually, it's pretty easy:
    void *buf = malloc(byte_size);
    free(buf);
    See, it was easy. The tricky part is knowing and understanding the lifecycle and when to terminate it to avoid a segfault.

  • @585ghz
    @585ghz Год назад

    thanks!! nice lesson

  • @spotandjake1008
    @spotandjake1008 Год назад

    Reference counting can handle cycles; it's just a harder problem to solve that requires some more compile-time analysis.

  • @Squossifrage
    @Squossifrage Год назад

    8:23 “RES” is short for “resident”, not “resources”.

  • @blazingazong
    @blazingazong Год назад +1

    Rust is cool, cause it is like semiautomatic memory management!

  • @blacklistnr1
    @blacklistnr1 Год назад +1

    13:18 @[unknown customer's basket size]: That's just careless, there's always a worst case e.g. your car trunk's size.
    If you don't plan for it with a MAX_ITEMS, you're just going to run into weird errors when that one customer adds 1 million items.

    • @jbird4478
      @jbird4478 Год назад

      It's better to combine both though: dynamically allocate as needed, but with a reasonable maximum.

  • @henrycgs
    @henrycgs Год назад +4

    obligatory “rust devs laughing in the back” comment.

    • @no_name4796
      @no_name4796 4 дня назад

      Rust devs are laughing, just to forget they have to deal with borrowing and lifetimes instead

    • @henrycgs
      @henrycgs 2 дня назад

      @@no_name4796 it's really not that hard

  • @UncleKennysPlace
    @UncleKennysPlace Год назад

    In the good old days of writing Win 2.0 programs in ANSI C (without a programming environment) a lot of people would just malloc a chunk for the entire program to use, with offsets for various functions, and free it at the end. Very wasteful [given how precious RAM once was]... and successful.