Learning golang and building efficient backend systems has been a lot of fun for me. Great language!
same
What specific backend systems have you built with golang?
what libraries do you use for the backend? i'm in the process of learning it!
@colbyberger1881 what is more used in the industry to build backend services in go, the standard net/http package or frameworks like Gin or Fiber?
I'm learning go now, fucking insane experience. I love it
can't wait to see go 1.21 that introduces memory battle royales!
I love how people are rediscovering Arenas again! They are pretty much all I use for all my allocation needs because an arena equates to a lifetime group for allocations. And using arenas (like in Odin), you can just free all of the things at once rather than individually (e.g. paired malloc/free in C), which means in practice, you rarely need to free anything manually and just allocate when needed. And because arenas are really simple allocators, they are as fast as they can possibly get compared to any other memory allocation strategy.
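For anyone who hasn't looked at the Go proposal itself, here's roughly what the experimental API looks like (Go 1.20 behind GOEXPERIMENT=arenas; it's unsupported and the shape may still change, so treat this as a sketch rather than the final design):

// Build with: GOEXPERIMENT=arenas go run main.go
package main

import (
	"arena"
	"fmt"
)

type Point struct{ X, Y int }

func main() {
	a := arena.NewArena()
	defer a.Free() // releases every allocation in the arena at once

	// Allocate individual values and slices from the arena instead of the GC heap.
	p := arena.New[Point](a)
	p.X, p.Y = 3, 4

	pts := arena.MakeSlice[Point](a, 0, 1024)
	for i := 0; i < 1024; i++ {
		pts = append(pts, Point{X: i, Y: i * i})
	}

	fmt.Println(*p, len(pts))
}

The whole point is that a.Free() hands back everything allocated from the arena in one shot, exactly the lifetime-group idea described above.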
yeah, this is a really beautiful way of doing things. for me, this makes go significantly more attractive as a language.
I was surprised to see Odin mentioned, then I saw who posted it lol.
Love Odin, don't use it for anything serious, but it's a pleasant experience.
@@ThePrimeTimeagen
I think this makes Go a more attractive alternative to Zig in some cases.
It's easy to use, already performant, and now has manual memory control as an option? That's great!
Are arenas built into Odin?
Also, noob question, why would you use an arena over a bump allocator?
@@Caboose2563 Why would you not use it for anything serious?
6:05 regarding Go's GC. This is a very simplified overview because I can't detail it all in the comments, but the GC in Go is mostly concurrent. It's a mark-and-sweep GC, and when it kicks in it takes 25% of the cores to identify what needs to be cleaned and what can live on. It does stop the world for a little bit for the cleanup, but in general allocations can still happen while the GC is working, since 75% of the goroutines running are doing your work as opposed to 25% for the GC.
I love Arenas btw.
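If you want to watch those pauses yourself, GODEBUG=gctrace=1 prints a line per cycle, or you can read the runtime's own counters. A minimal sketch using runtime.ReadMemStats (the allocating goroutine is just there to give the collector something to do):

package main

import (
	"fmt"
	"runtime"
	"time"
)

var sink []byte // keeps the allocations below from being optimized away

func main() {
	// Churn through some heap allocations so the collector has work to do.
	go func() {
		for {
			sink = make([]byte, 64*1024)
			time.Sleep(time.Millisecond)
		}
	}()

	for i := 0; i < 5; i++ {
		time.Sleep(time.Second)
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("GC cycles: %d, total STW pause: %v, live heap: %d KiB\n",
			m.NumGC, time.Duration(m.PauseTotalNs), m.HeapAlloc/1024)
	}
}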
I use memory arenas for some years now in C, it does not only improve performance over malloc everything, it also simplifies memory managment. You think more about lifetimes of groups then lifetimes of individual allocated objects (and free everything at ones instead of one by one.
seems better
C programmers been doing arena mem mgt for decades - for long running programs (as in 24/7) it can also aide in reducing memory fragmentation. This latter is a concern for Go programs as its GC doesn't do heap compaction (like, say, the Java GC implementations do).
Arenas make memory management in C almost trivial, so I'm very excited that Go is getting them too. It seems like Go is going in the right direction as opposed to other modern languages in my opinion.
Unfortunately, as the article notes, this memarena proposal is currently on hold and, as Ian says, "No decision has been made and likely won't be made anytime soon."
So yeah, I was really excited about this feature as well, but now I have somewhat conflicted feelings about it.
i am super pumped, and i really hope this goes somewhere
I can see prime is starting his go arc
Love this addition to Go. Slowly experimenting with pushing the language forwards
Love your energy and passion doing these update videos, while also digging into the details. Keep it up friend!
Arenas + deferred is my favorite part of zig. Super curious to see how golang does with arenas
Whoever wrote this blog post is a genius! Instant sub for sure!
This was your best content in a while. Straightforward and informative, gg
Golang is awesome!
It was _really_ annoying when it did not have Generics. But now that this major problem is fixed, it's incredible!
They only have to fix the terrible error handling.
@@banatibor83 Eh, I wouldn't call it terrible. Verbose, sure. Terrible would be exceptions.
If only they would make the garbage collector optional
@@llothar68 it already is
@@banatibor83 I wouldn't say it's terrible. It can be verbose if all you do is just *if err != nil* it but I would take Go's error handling over exceptions any day.
One of the best articles, piqued my curiosity to know more about go GC!
Yay arena allocators! About time Go caught up with... C. 😂
But really though, one of the things proper modern languages should have is easy, pluggable, context-based memory allocator management.
agreed, its super important
Only language I've seen with this done correctly is Jai.
@@smarimc Not even zig?
@@tanmaybansal1634 haven't used it enough tbh. I've used it a bit and it's been good. But I can't remember if it does this right.
6:02 Yes, it is. In one application I made for Domotic, the program creates an array of about 2GB so that GC is only called when a lot of memory is used. There is probably a better way to do it, but that worked at the time and was never changed.
Avoiding garbage collections can kind of be achieved by setting GOGC and specifying a soft memory limit. GOGC is 100 by default. This number is a percentage, meaning (oversimplified here) that if your app is using 20 MiB of memory between the live heap, goroutines, and other global pointers, the GC will try to complete a cycle before the next 20 MiB is allocated. If you set GOGC=200, that means in this case you can allocate an additional 40 MiB before the GC kicks in.
You can set this in combination with a memory limit, which is a limit for all memory (heap, goroutines, globals, stack). Regardless of the GOGC setting, the limit will be honored by the garbage collector (it's a soft limit, so this is a ballpark figure). Doubling the GOGC value will double the heap size and halve the GC CPU cost. Lastly, setting GOGC=off and a high GOMEMLIMIT, say around the 2GiB mark you mentioned, will allow your program to keep allocating until it approaches that memory limit, at which point the GC will kick in to get rid of dead objects.
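If you'd rather set these from code than from the environment, the same knobs exist in runtime/debug. A small sketch (the 2 GiB figure is just echoing the example above):

package main

import "runtime/debug"

func main() {
	// Roughly GOGC=200: let the live heap grow by 200% before the next cycle.
	debug.SetGCPercent(200)

	// Roughly GOMEMLIMIT=2GiB: a soft limit on the runtime's total memory use.
	debug.SetMemoryLimit(2 << 30)

	// GOGC=off would be debug.SetGCPercent(-1); only sensible together with a limit.

	// ... rest of the program
}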
@@FabulousFadz So, that would achieve the same thing.
As the solution is working sufficiently well, I'll not change it. That would be too much "customer handling" work ^^
@@programaths Lol, I get you. Although it can be set via the environment variables GOGC and GOMEMLIMIT without otherwise changing code.
I think in the JavaScript vs Go example, Go would probably have to track pointers to other things and track those separately. However, if you are using a value, it would just be part of the struct as in C (the same block of memory).
I think a good example of this is slices, where the slice header has a pointer to an array under the hood, similar to a standard array list. Copying a slice header copies the pointer to the array, but not the array. If you do stuff to that slice while the original is unchanged, I think the copied one will eventually create a new array. Run the test below, for example. You should see the capacity changing, and the appends I think will cause a copy to new arrays in both. But at that point they'd be different arrays under the hood.
func TestSlice(t *testing.T) {
	s1 := []int{1, 2, 3}
	s2 := s1[1:]
	t.Logf("s1: %v, s2: %v", s1, s2)
	t.Logf("s1 len: %v, cap: %v", len(s1), cap(s1))
	t.Logf("s2 len: %v, cap: %v", len(s2), cap(s2))
	s2 = s2[1:]
	t.Logf("s1: %v, s2: %v", s1, s2)
	t.Logf("s1 len: %v, cap: %v", len(s1), cap(s1))
	t.Logf("s2 len: %v, cap: %v", len(s2), cap(s2))
	s2 = s2[1:]
	t.Logf("s1: %v, s2: %v", s1, s2)
	t.Logf("s1 len: %v, cap: %v", len(s1), cap(s1))
	t.Logf("s2 len: %v, cap: %v", len(s2), cap(s2))
	s2 = append(s2, 4)
	t.Logf("s1: %v, s2: %v", s1, s2)
	t.Logf("s1 len: %v, cap: %v", len(s1), cap(s1))
	t.Logf("s2 len: %v, cap: %v", len(s2), cap(s2))
	s1 = append(s1, 5)
	t.Logf("s1: %v, s2: %v", s1, s2)
	t.Logf("s1 len: %v, cap: %v", len(s1), cap(s1))
	t.Logf("s2 len: %v, cap: %v", len(s2), cap(s2))
}
This is the first time I’ve learned about arenas, this looks amazing
Fun fact I learned recently: LISP had garbage collection in 1960, 3 years after Fortran was invented and 22 years before C. LISP was also the first language to use a memory heap, so garbage collection actually *is* "traditional memory management."
Runtime GC is traditional. ARC, autofree, and RAII are not.
Mark and sweep is kind of garbage (no pun intended) and trivial to implement (though there are cycles in the object graph you need to watch out for). There are better algorithms now that don't require stop-the-world style sweeping.
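For anyone wondering what "trivial to implement" means here: the mark phase is basically a graph traversal with a visited set, which is also what handles those cycles. A toy sketch over a hand-rolled object graph (not how Go's collector actually works internally, just the idea):

package main

import "fmt"

// node is a stand-in for a heap object with outgoing pointers.
type node struct {
	name string
	refs []*node
}

// mark walks everything reachable from the roots; the visited set
// is what keeps cycles in the object graph from looping forever.
func mark(roots []*node) map[*node]bool {
	marked := make(map[*node]bool)
	var visit func(n *node)
	visit = func(n *node) {
		if n == nil || marked[n] {
			return
		}
		marked[n] = true
		for _, r := range n.refs {
			visit(r)
		}
	}
	for _, r := range roots {
		visit(r)
	}
	return marked
}

func main() {
	a := &node{name: "a"}
	b := &node{name: "b"}
	c := &node{name: "c"} // unreachable: would be swept
	a.refs = []*node{b}
	b.refs = []*node{a} // a cycle, handled by the visited set

	marked := mark([]*node{a})
	for _, n := range []*node{a, b, c} {
		fmt.Printf("%s reachable: %v\n", n.name, marked[n])
	}
}

The sweep step is then just dropping every object that didn't get marked.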
@@gravisan can you name them (and where they are used), please?
This is effectively like stack memory that is unwound and returned once you are done with whatever you wanted to do. Of course it's a lot more flexible, and I can imagine you could implement this directly on top of an OS call to fetch/return memory pages. So you use a whole page at once and return it once you are done.
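That intuition maps almost one-to-one onto a bump allocator: grab one big block up front (from the OS, or here just a plain byte slice), hand out chunks by advancing an offset, and "free" everything by resetting the offset. A toy sketch that ignores alignment and growth:

package main

import (
	"errors"
	"fmt"
)

// bumpArena hands out chunks of one preallocated block by moving an offset.
type bumpArena struct {
	buf []byte
	off int
}

func newBumpArena(size int) *bumpArena {
	return &bumpArena{buf: make([]byte, size)}
}

// alloc returns the next n bytes, or an error if the block is exhausted.
func (a *bumpArena) alloc(n int) ([]byte, error) {
	if a.off+n > len(a.buf) {
		return nil, errors.New("arena exhausted")
	}
	p := a.buf[a.off : a.off+n]
	a.off += n
	return p, nil
}

// reset "frees" every allocation at once by rewinding the offset.
func (a *bumpArena) reset() { a.off = 0 }

func main() {
	a := newBumpArena(1 << 20) // 1 MiB up front
	for i := 0; i < 3; i++ {
		chunk, err := a.alloc(256)
		if err != nil {
			panic(err)
		}
		copy(chunk, fmt.Sprintf("request %d", i))
	}
	fmt.Println("used:", a.off, "bytes")
	a.reset() // done with this batch; everything is reclaimed in one go
	fmt.Println("after reset:", a.off, "bytes")
}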
Props to @acornspits in chat for saying "Don't pee in the pool" when buffer pools were brought up 😂😂 That made me laugh harder than it should have
Dmitry Filimonov is the CTO of Pyroscope!! Cool!! Once I helped the company I was working for with Pyroscope. Team loved it. I did too.
Yes, Go does have "Stop the world" garbage collection. Erlang does garbage collection as an event, which is local to that event loop. It's about the only version of garbage collection that I thought was manageable.
Yep, Erlang is stop the world, but a single process is an entire world, so only that process gets paused.
@@nosferratu That's true. If you must have a GC, then Go's is one of the best. I still have had to fight it, though, and most of my projects would end up doing so. Therefore, I'm going with Rust. Zig looks nice, though.
Good to see, but it's a pretty standard solution to this problem. Using fixed-type memory pools built on large slabs of preallocated memory is common in C, C++, Java, etc.
Also, Java is getting arenas for interacting with other languages
In Java, most of that magic is done at runtime by a JIT. Getting new escape hatches to have a bit more influence over the runtime behavior while guaranteeing there are no violations of specs defined for the language or the JVM would be a really cool thing to add. Project Panama is still a preview feature in JDK21, so the problem isn't completely solved yet.
You know this was edited by chatGPT when the text feels the need to remind you that this feature is currently experimental and unsupported three times.
Maybe it has changed in recent versions, but the Go GC used to work by stopping the world long enough for the GC thread to scan memory, then it would release the global lock and deallocate unused objects in the background. Of course, this applies only to memory on the heap; if you can avoid the heap altogether, then you can avoid spending time in the GC stop-the-world (easier said than done, especially if you have data that needs to be accessed concurrently via Mutex/RWMutex).
@@nosferratu shouldn't it be able to free memory concurrently? Idk, this topic is new to me -- I never thought much about it in the past.
This makes go more attractive for things like gamedev or other general app development
So this is basically the concept of having specialized heaps in C/C++.
One of the cool things about computer programming is that we reinvent the wheel, over and over with new tools.
I find it quite amusing that we're solving problems like it's 1999 😄 Yes, it does help, but it's literally the same as writing a custom allocator for C. I would even argue that this isn't suited for Go, as Go is meant to be simple and foolproof. This adds quite a few pitfalls to keep in mind.
Looking into it, it looks like something similar to ECS principles? It's interesting how much stuff usually associated with gamedev can still be utilized in traditional workflows.
very close to the same thing
Other than DreamBird, Go is the greatest language to have been concocted by the wizards.
Looks like a feature you benefit from the most when you know that you are about to load a huge number of domain aggregate objects into memory. It is hard to imagine how Go decides which objects can go into one arena.
That manual gc stuff you're doing in node is very very very similar to what we're having to do in our maui app in C# because MAUI has memory ISSUES
I never understood the point of garbage collection. For simple allocations it doesn't provide any benefit, and for complex allocations it doesn't do what you want and you need to do some weird tricks to try to make it do the right thing. And all the time there is this mysterious performance-killing process going on in the background which is extremely difficult to observe or control. As a C++ programmer I never do manual memory allocation on principle and everything "just works" and is super fast.
Yep my experience with C++ is that value semantics covers what I want 90% of the time, unique ownership 9%, shared ownership 0.9%. The remaining 0.1% is where something like a garbage collector might make sense. That all these languages design their memory systems around the 0.1% case is crazy to me.
@@isodoubIet golang GC is built to safely manage concurrency and I think that's a sane enough reason.
Prime should give classes on stuff; maybe api-gateways, docker, prisma, and other common software?
I think the Go team should get a Z80 banked system so they can realise that banked memory management is not so dissimilar to arenas, which is a very common memory management paradigm.
Historic new paradigm - a chunk of memory that you get to use!
Wow - what's old is new again. The Amiga had this concept decades ago, called "Memory Pools". You can make and release small allocations out of the pools, and then free the whole pool at once.
Interesting... it appears to me that someone realized that the "Java Heap Space" is not a bad idea at all... but I like it a LOT that Go is evolving its memory management!
I wish every GC collected language had this. In C# one could just spawn a bunch of UI nodes (like in unity's UIToolkit) and be able to delete all of them later
We used to call this concept in C++ a memory pool. Looks like it's the same thing or I didn't understand it correctly?
Flamegraphs are upside down to reflect the depth of the stack wrt function calls
Leptos is so good that it is the sole reason you continue to use Rust? Damn, that's high praise.
Okay, so I'm still waiting for when the competition and contention is being explained, and there's some kind of bits and bytes destruction with fitness functions going on. Disappointingly it's just all talk of memory pooling. I mean, I get it, the term memory pools instantly puts sweat on my forehead, all the prerequisites and thought and planning that goes into using a memory pool properly, designing data structures so they can be reused (mutability ugh), fetching and releasing them, dealing with being over limits, resizing strategies. I mean, judging by the explanations and context given here, this sounds like runtime managed memory pools. An improvement on manual pools, but still sounds like pools.
@5:30 Yes, latency from GC is a big issue in most languages. One reason Erlang's GC system is so interesting and why a server in Erlang can have much better overall latency and performance characteristics than other dynamic languages and even compiled languages with GC.
Working in fintech... the hoops Java devs have jumped through over the years to remove latency is entertaining. At the end of the day they mostly end up doing most of their own memory management after sometimes years of painful rewriting and profiling. Sometimes custom JVMs are needed.
I've heard things about the Shenandoah GC. What do you think about that?
@@white-bunny it should be much better than older GCs, but even if it's included in most JDKs, it's not seeing much adoption I think.
It will never beat Erlang's tho (or the BEAM's, rather), since in Erlang everything is a process and the GC is per process. So pauses are virtually nonexistent.
Golang is probably the greatest language ever made. Small, simple, pragmatic, fast, powerful.
cc
That sign off was amazing
-agen
Read Jason Gregory's Game Engine Architecture once; it has a chapter on allocators and memory pools. That book is well worth a read, but it's a chonky tome.
I've written a few hundred projects in go and I honestly think this feature will lead to more wasted time than anything. Go needs to change how it allocates memory by having memory semantics. This could be achieved through the type system internals but instead of doing that they add generics and more ways to complicate your memory model. Every problem their generics implementation solves could have been covered by allowing interfaces to specify data, allowing structs to have predefined values (thus omitting the need for interfaces in the first place), or just allowing operator overloads (which would also solve some of the problems the current generics implementation introduced).
It's crazy to me. They work so god damned hard to solve irrelevant problems. Parametric syntaxes are all you need.
Is it better to implement what you suggest over using another language? Genuine question, I am a hobbyist and no expert...
@@Phasma6969 oh, i'm a hobbyist too. I wish it wasn't, but admit I've spent a number of afternoons working on different aspects of it, lol. I've tried talking about these sorts of things in the issues but they get no traction over there. Rust almost gets it right, but I just can't be bothered to haggle with a borrow checker. I think the general philosophy is "let unoptimized code run inefficiently" with a dose of "we could manipulate the way data is read instead of mutating the data itself". It's a game of turtles all the way down that begins with kernel specifications. Like browsers. They're basically virtual machines at this point since they run their own assembly code. Why do it that way? I blame microsoft for not having converged to, or generalized, posix. Since then all hell's been breaking loose.
@@morgengabe1 Winderps is a problem in so many ways. Anyway, I just began experimenting with Go a couple weeks ago and love it except for the quirks with the generics -- having two levels of generics, for example, would require writing wrapper functions for every method of the interface, which is a pain in the ass I don't wanna deal with, so instead I used a wrapper struct with optional members (using an Option struct that has an IsSet boolean member that can be checked and is false by default but gets set to true by a NewOption generic func). I'm sure there are a lot of things I can do to further optimize performance of this genetic algorithm library, but I'm still learning and want to successfully evolve some classifiers first.
This... makes no sense to me. Arena allocation *is* an addition to memory semantics. Maybe you're talking about something else? And what do generics have to do with arena allocation? Allowing interfaces to specify data would completely change how interfaces work at the most foundational level. That's why you won't be getting any traction in the issues. Operator overloads in languages that support them are routinely used to make code hard to read, because people do silly things with it. I would hate to see that in Go.
@@dougsaylor6442 arena allocation doesn't achieve anything you couldn't do with arrays and responsible memory management. It's about writing a program that checks the validity of deallocations after SSA is done
Go is the greatest language I've never tried.
Write code in C++ and use custom memory allocators, RAII, and the flyweight pattern.
How would you flyweight in go😂?
Primeagen moving away from Rust apart from Leptos?
A couple of months ago it was a lot of praise. Anyway :) Cool that you're exploring different languages, Zig, OCaml etc.
Serious C programmers have been doing the arena memory management approach for decades.
So now I can decide to use GC only or use this "kind of" manual memory allocation, right?
If you mean that the GC doesn't exist anymore once you use this: no, you could use this in a single place in your program and everything else would be garbage collected like before.
@@nightonfir3 thanks, no I mean that I can choose to allocate memory in both manual and GC mode within my application
12:52 Why pop all the items before nulling their array? If something other than their array held pointers to them, the gc() would not clear them anyway.
Rust has bumpalo for arenas, I wonder if you could write a nodejs package that exposes a rust-based arena interface using Neon
Remember when the Java team said that nobody should use reflection? Good thing that nobody does...
I really didn't like JS prototypes before Go. Now I think they've brought me a lot of fun compared to boring Go syntax (I mean, I like Go syntax a lot, but not in this case).
So, it's like Mark() and Release() we had in Pascal in 80s?
yeah!
"what is this" 00:00 ChatGPT Editor
Is it possible to implement arenas in JS? With ArrayBuffer I guess
i think we need a video or course on optimizing node applications.
It surprises me that you actually call the GC in node. The only times I tried it, it actually made performance worse.
If you have frequent allocation needs, why not sync.Pool{}?
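sync.Pool is indeed the standard-library answer today for lots of short-lived allocations of the same shape; a small sketch of the usual pattern (the names here are just illustrative). The difference from arenas is that the pool recycles individual objects the GC still owns, whereas an arena frees a whole batch at once:

package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool recycles buffers between uses instead of allocating a fresh one each time.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf) // return it for the next caller
	buf.Reset()            // a pooled buffer may hold old contents

	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("arena"))
	fmt.Println(render("pool"))
}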
OCaml for backend projects?
Getting this correct without borrow checker will be "fun".
Maybe it's a stupid question, but is Go with a borrow checker possible?
make more Go content thanks.
Young generation GC has been part of the JVM for 20+ years, and you don't have to do any programming to trigger it.
Easy peasy
this isn't generational gc stuff (already in go, ts, jvm as you said), this is controlled regions of code where you can clean up 100x objects in a single call.
@@ThePrimeTimeagen
After objects from the young generation memory area are promoted to a different generation (survivors), the JVM will clean up the young generation memory area in a single call.
RAII and smart pointers also eliminate the need for manual memory management with minimal overhead.
Leptos and Greg Johnston are great
i'd tune in on twitch if you did some more go stuff
Wait i thought this got pulled / permanent hold due to API issues?
I love Go
I guess wasm does not help you with your node memory problem at Netflix?
So we are going from OOP to DOP
Thank you for this kind of content loko, I really enjoy it.
A naive mark & sweep garbage collector can even pause for several seconds in extreme cases.
Golang is a top language.
Go uses a lot of energy
Clap for greg! Clap Clap Clap
I really wanted to give golang a try, but not being able to compile my code if I have an unused import or variable is a bummer. The fact that the golang team isn't officially introducing a flag or something to be used while debugging the code is saddening. Their stance of "it will make compile times slower & result in bad code" is childish. Many language compilers remove unused code when building for production but treat unused imports/variables as warnings instead of errors. But golang is like, "we don't do it here" 😑 Not gonna use golang unless they fix that issue. Otherwise sticking with rust and typescript. Thanks to the golang team for upsetting so many new golang learners & forcing them to never try golang again 👏
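For what it's worth, the usual workarounds while debugging are the blank identifier and blank imports, which keep the compiler happy without deleting anything; a quick sketch:

package main

import (
	"fmt"
	_ "net/http" // blank import: keeps an import you're not using yet
)

func main() {
	debugValue := 42
	_ = debugValue // silences "declared and not used" while you iterate

	fmt.Println("still compiles")
}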
Is Go slowly turning into a low-level language? Because that's what I'm getting from this video.
Chat gpt as an editor
Editor ChatGPT lmao
In JS you can also do...
array.length = 0
And then call gc. It works in the same way.
All my homies use rust 🦀🦀
isn't there a max size for each network request, like 64k?
can't you just put it on the stack and forget about it? i mean, that's what i did in c 😂
wow another 1960s concept making its way into a 2023 language. I guess that's... progress?
Arenas
Garbage Collection as memory management is older than C.
so java does it and it's a "java finally arrives at 2010 kekw", but go does it and it's a great, awesome, well designed language?
yes
@@ThePrimeTimeagen understandable
There you go… all the Rust foundation shitshow has started to alienate even the Primeagen
shh, don't tell them
but this was one of my big "wants" in a GC language. light control over memory where its needed. and boom goes the dynamite, we have it
traditional memory management is oppressive. liberal memory management is the future
Go ftw
Isn't that garbage collection?
I watch this vid because I don't know if dude is scary or funny
4 views and 5 likes how tho
go
“Eventually consistent” architecture
@@mnlttt Predictive Eventually Consistent architecture
@@mnlttt People talk about eventual consistency and still tell you to do it on the frontend, users now have powerful computers... and with EC, the frontend just needs to dispatch actions and react to changes. It's that dead simple, and still people keep trying to introduce new frameworks like they are solving something...
@@MrEnsiferum77 I’m sorry but I don’t understand your point
Htitty :D