Granny, which Casey worked on, was used in literally thousands of games, so people arguing he's not done any game work are braindead.
Making a tool is not the same as making a game, even if the tool is used in games. He could have made Blender or CMake or whatever else, but that doesn't mean his advice on actual GAME DEVELOPMENT has any merit. Games are far more unpredictable and iterative than a tool with strict requirements and pre-determined output; being able to quickly test and implement ideas has much more value in games. Sorry, but if you can't understand that, then perhaps you are the "braindead" one.
@@kevathiel1913 🤡
@@kevathiel1913 Then why not mention the game he was making at the time this video was made, Handmade Hero? Or the game he worked on with JBlow, The Witness? Or the other game-engine-related projects he worked on at RAD? (Hint: making an engine library like Granny, which had to work with engines as diverse as it did, means he has more experience in this type of thinking than most game devs, because he saw all the crap code when he had to fix and troubleshoot their engines.)
@@NdxtremePro This is actually the 2nd video of this discussion. Granny gets used as a counter whenever people point out that Casey only worked on a few systems of The Witness. Other than that, HMH has been abandoned. You know, the game that was supposed to teach people how to make a COMPLETE game in C, only to be abandoned after 10 years without getting even close to any form of gameplay, let alone the promises he made, while taking money for it. So yeah, it's stupid to listen to the guy for game dev advice when he in fact fell into the most basic beginner trap and underestimated the required time, probably by decades. Some kid who participated in a game jam has more relevant actual game dev experience than Casey.
@@kevathiel1913 There are thousands of hours of footage of him developing a game on Twitch alone, so I don't know what you think has merit, but it's clearly not the actual amount of game programming experience, so I don't know, maybe the OP applies?
I kept thinking the creator of the Odin language is probably in the n+2 stage. The language provides by default a "temp_allocator" that you can use/free on a per frame basis, and everything in Odin gets initialized to 0.
Btw, Casey invented (or at least popularized) immediate-mode GUIs.
What does immediate mode GUIs even mean? Know of a good resource on them?
@@LinucNerd I can't remember the resources I used to learn it. But it's a paradigm where the UI is immediate, rather than retained. This refers to no data retention and immediate rendering, though it's actually kind of a misnomer, as many immediate UIs do retain quite some data and defer rendering to the end of the frame. (I personally prefer calling them dynamic vs static UIs, but I'm no authority on this.)
A "retained UI" is where you define all the widgets and layouts during initialization, and they will live statically in memory for the entire execution of the program, and each widget stores some data related to the state of your program, and has to be notified when something changes, or notifies back when interacted with.
An "immediate UI" is where each widget is a function that is called every frame, with the relevant app state as a parameter. The function executes the widget code, returns some result, and discards the widget from memory (at least mostly). If rendering isn't deferred, then that function also draws the widget immediately.
Dear ImGui is the most popular C++ "immediate mode GUI" library; it retains some data and defers rendering. Raylib has its own companion GUI library, called raygui, that IIRC doesn't retain any data and renders immediately.
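A minimal sketch of the idea in C++ (the names and the commented-out drawing calls are made up for illustration, this isn't any particular library's API):

```cpp
// Hypothetical immediate-mode UI: the "button" is just a function called every frame.
// No widget object lives between frames; the application owns all the state.
#include <cstdio>

struct UIContext {
    float mouse_x, mouse_y;
    bool  mouse_down;
};

// Returns true on the frame the button is clicked; would draw itself immediately.
static bool do_button(UIContext& ui, float x, float y, float w, float h, const char* label) {
    bool hovered = ui.mouse_x >= x && ui.mouse_x < x + w &&
                   ui.mouse_y >= y && ui.mouse_y < y + h;
    // draw_rect(x, y, w, h, hovered ? HOVER_COLOR : NORMAL_COLOR);  // assumed renderer calls
    // draw_text(x, y, label);
    (void)label;
    return hovered && ui.mouse_down;
}

int main() {
    UIContext ui{120.0f, 40.0f, true};
    int volume = 5;                                   // app state lives in the app, not in a widget tree
    for (int frame = 0; frame < 3; ++frame) {         // pretend game loop
        if (do_button(ui, 100, 30, 80, 20, "Vol +"))  // widget "re-created" each frame
            ++volume;
    }
    std::printf("volume = %d\n", volume);
}
```

A retained-mode version would instead create a Button object once at startup, register a callback on it, and let the toolkit own it between frames.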
@@LinucNerd Look it up, he has an old video from 2005 or something talking about it. Basically, in retained mode a lot of the event handling happens in the backend, and you get opaque references to internal data structures that hold a lot of state. In immediate mode you write the logic for the most part, and all the backend tells you is whether something happened or not. For example, in immediate mode you call a function like WasKeyPressed('A'), it returns a bool, and you handle the rest. Or for a button animation, you make that happen rather than it being done for you in the backend. This gives you more control and simplifies the system overall. Basically, retained mode gives the system more responsibility, making it more complex and opaque overall, and immediate mode flips that on its head.
@@LinucNerd I can't remember the resources I used, but it's a paradigm where the UI is immediate, rather than retained. This refers to no data retention and immediate rendering, though it's actually kind of a misnomer, as many immediate UIs retain data and defer rendering to the end of frame. (I personally prefer calling them dynamic vs static UIs, but I'm no authority on this).
A "retained UI" is where you define all the widgets and layouts during initialization, and they live statically in memory for the entire program. Each widget stores state data, and must be notified when something changes, or notifies back when interacted with.
An "immediate UI" is where each widget is just a function that is called every frame, with the relevant state data as a parameter. The function executes the widget code, returns some result, and discards the widget from memory (at least mostly). If rendering isn't deferred, then that function also draws the widget immediately.
Dear ImGui is the most popular C++ "immediate mode GUI" library; it retains some data and defers rendering. Raylib has its own companion GUI library, raygui, that IIRC doesn't retain any data and renders immediately.
I tried to reply before but youtube deleted my reply, as it often does. Hopefully this one sticks.
@@LinucNerd youtube just keeps deleting my replies...
Ah so _this_ is why my video got a third spike in views, thanks Cakez.
On the subject of stubs, they are merely a technique you can choose to use, and you'd preferably only use them in select areas where they make sense; you probably wouldn't want to use them everywhere. Unfortunately some people didn't seem to catch that part and thought ZII demanded the use of stubs? I'm not sure what led them to think that.
When it comes to crashing early or using stubs, they're not mutually exclusive: you can use stubs in some areas where it doesn't matter, and crash early in other areas where it does matter.
While I don't use stubs, I would probably set it up so that the debug build just crashes, and the release build actually returns the stub (and logs which calling function received it); it really depends on what you're doing.
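Roughly what I have in mind, as a sketch (the entity stuff is made up purely for illustration):

```cpp
#include <cassert>
#include <cstdio>

// A zeroed struct doubles as the stub object; id 0 means "not a real entity".
struct Entity { int id; float x, y; };

static Entity g_entities[64]; // zero-initialized
static Entity g_stub;         // stays all-zero, handed out on lookup failure

Entity* get_entity(int id, const char* caller) {
    if (id <= 0 || id >= 64 || g_entities[id].id == 0) {
        assert(!"get_entity: unknown id");                                // debug build: crash right here
        std::fprintf(stderr, "stub handed to %s (id %d)\n", caller, id);  // release build: log it...
        return &g_stub;                                                   // ...and keep running
    }
    return &g_entities[id];
}

// Convenience macro so the calling function's name ends up in the log.
#define GET_ENTITY(id) get_entity((id), __func__)

int main() {
    g_entities[1] = {1, 10.0f, 20.0f};
    Entity* ok      = GET_ENTITY(1);   // real entity
    Entity* missing = GET_ENTITY(42);  // asserts in debug, logs + returns stub in release
    std::printf("%f %f\n", ok->x, missing->x);
}
```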
For the record, I did not set the timecodes/timestamps in the video, someone else did (I don't know who). I'm not sure if that's the right term, but I mean the "sections" on the timeline.
I don't know how to overwrite what they wrote, as I rarely upload.
Fun lil note: After I uploaded that excerpt from Casey's old stream, I got an influx of _incredibly_ angry C++ developers, some left mean comments (lol), and quite a few left some real nasty ones... And here I thought the Rust community had a toxicity problem. Lesson learned, never mention RAII or smart-pointers in any negative light (even slightly), that seems to be an emotional sore-spot for some.
I love how while diehard language fans argue about features, how they should be used and what's best in programming, others get stuff done.
It's really worth agonizing over C++ features, we'll surely find the absolute perfect way to do resource management... any day now...
love the comment reads at the end ❤😂
Pools in languages like C# are also a byproduct of slowness, IMHO. Creating objects from scratch is expensive.
You can apply the same concept to file IO. Even if you have to open a lot of files, the types of files you open are not that many, and each type probably has to be handled the same way in the failure case. Handling the failure case is also trivial if you use errors as values instead of exceptions, since you must handle it then and there instead of further up the call stack. If you choose not to handle it, that's extremely obvious in the code.
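A rough sketch of what I mean by errors as values (the function and struct names are made up):

```cpp
#include <cstdio>
#include <string>

// The error is part of the return value, so the caller handles it on the spot.
struct FileResult {
    std::string data;
    bool        ok = false;
};

FileResult read_entire_file(const char* path) {
    FileResult r;
    FILE* f = std::fopen(path, "rb");
    if (!f) return r;                              // ok stays false, nothing to clean up
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    if (size > 0) {
        r.data.resize(static_cast<size_t>(size));
        r.ok = std::fread(&r.data[0], 1, r.data.size(), f) == r.data.size();
    } else {
        r.ok = (size == 0);                        // an empty file is still a success
    }
    std::fclose(f);
    return r;
}

int main() {
    FileResult cfg = read_entire_file("settings.ini");
    if (!cfg.ok) {                                 // the failure case is right here, impossible to miss
        std::puts("couldn't read settings.ini, using defaults");
    }
}
```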
I rock with u mane, definitely draw inspiration from u
IMHO, the shift from n to n+1 is less about code quality and more about learning which corners you can cut in your design.
Smart pointers everywhere are inefficient, sure, but when you don't have a clear idea of the lifecycle of your objects, or no idea what the next feature will need, atomizing every element into its own autonomous lifetime brings some stability to the design.
As for the transition to n+2, ZII is just the null object pattern with the assumption that every null object is zeroed (leading to the optimization of overlapping them all). Good to see the wheel still gets reinvented from time to time.
Also, as pointed out by many comments, that's a good strategy if you want to not crash, but not if you want to guarantee that the work gets done.
The other issue with smart pointers is that they can give a false sense of understanding of the rule of 5. I just watched an interesting talk about the rule of 5 and why it still exists. The tl;dr is that not everything in C++ is movable by default, and as a result you should still consider how memory is getting from one place to another if you're worried about optimization. shared_ptr and unique_ptr do not cover all cases.
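A small sketch of the kind of case I mean (hypothetical types, not taken from the talk):

```cpp
#include <mutex>
#include <utility>
#include <vector>

// Declaring a destructor suppresses the implicit move operations, so
// "Buffer b = std::move(a);" silently copies unless you follow the rule of 5.
struct Buffer {
    std::vector<char> bytes;
    ~Buffer() { /* e.g. unregister from some pool */ }
    // No move ctor/assignment declared -> moves fall back to copies.
};

// Some members are simply not movable at all, e.g. std::mutex, so the
// containing type is neither copyable nor movable by default.
struct Channel {
    std::mutex lock;
    std::vector<char> pending;
};

int main() {
    Buffer a;
    a.bytes.resize(1 << 20);
    Buffer b = std::move(a);   // compiles, but copies the whole megabyte: moves were suppressed
    Channel ch;                // fine to construct, but can't be moved into a container
    (void)b; (void)ch;
}
```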
Casey never claimed to have invented ZII, he just coined the term to describe a commonly used pattern.
He is completely right that bad programmers don't use this pattern nearly as much as they should.
Love the last part, take my upvote
Loved the part when he stopped blabbering and played the video. Not bad overall.
Actually TOXIC!
go watch the video urself then? like what
The same is true of his streams. The best parts are when he stops blabbering.
go watch it yourself then you unintelligent ape
When you are developing something like a game in a JS engine and you are after performance, after some time you will come to the same conclusions as Casey. Like I did.
I think Rob Pike has this in Go so we have at least two great programmers recommending it
Just write code that does what you set out to implement, i.e. it works and does what it should do.
you should've banned Inukai from chat xD
I don't think Casey meant nullptr when talking about stubs. I think he means a small block of memory (e.g. 4KB) you allocate beforehand and hand out if there is no more memory left in the arena.
Also, I believe he said you can wrap around and hand out memory from the beginning of the arena again; likely callers will be done with the beginning of the arena by the time you reach the end.
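Something along these lines is how I imagine it; this is all made up, not Casey's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// A fixed-size bump arena. When it runs out it hands back a pre-reserved
// "stub" block instead of nullptr, so callers never branch on failure.
struct Arena {
    uint8_t* base;
    size_t   used;
    size_t   capacity;
    uint8_t* stub;        // small fallback block (e.g. 4 KB), re-zeroed on every hand-out
    size_t   stub_size;
};

void* arena_push(Arena& a, size_t size) {
    if (a.used + size <= a.capacity) {
        void* p = a.base + a.used;
        a.used += size;
        return p;                            // normal case: memory from the arena
    }
    std::memset(a.stub, 0, a.stub_size);     // out of memory: hand out the zeroed stub block
    return a.stub;                           // (only safe for allocations <= stub_size)
}

int main() {
    static uint8_t backing[1024];            // static, so zero-initialized
    static uint8_t stub[4096];
    Arena a{backing, 0, sizeof(backing), stub, sizeof(stub)};
    void* ok   = arena_push(a, 512);         // fits
    void* oops = arena_push(a, 4096);        // doesn't fit -> stub comes back, program keeps running
    (void)ok; (void)oops;
}
```

The wrap-around variant would reset `used` to 0 instead of handing out the stub, with exactly the risks kevathiel mentions below.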
Wrapping around seems like a great way to get undefined behaviour. Just because you did something at the start of the frame doesn't mean you won't need it at the end. This can lead to all kinds of difficult-to-debug problems down the line.
@@kevathiel1913 I'm not sure about this, but I believe with this method you wouldn't strictly follow the C++ object model anyway. I don't believe they call destructors with this method at all. My guess is they use malloc, which doesn't require the C++ object model.
(All of these are guesses, not necessarily correct information.)
Also, you wouldn't wrap 99% of the time, you'd want to allocate enough space anyway. This is just a failsafe.
I felt like Casey had the zero page as read-only. This would match the whole reusing the same memory space for _every_ default object and "zeroed objects have no behavior".
As long as the code agrees not to touch zeroed values, you would eliminate all error branches, but you could end up passing zeroed values around and doing nothing.
Not the worst trade-off if you trust the game to eventually get back on track, but a strange one.
(At least, if I interpret it correctly.)
@@aredrih6723
Note: the most important one is number 4.
1) It's not that simple to figure out whether you got the read-only memory or the memory you actually asked for; the return type would be the same.
2) If you can figure out that the operation failed and you got read-only memory, you don't need that memory; you could just use nullptr.
3) What am I going to do with read-only zeroed memory? It doesn't have any value.
4) If you listened carefully, he talked about zeroing it every frame or whenever it's required. If it's read-only, why would you need to zero it so often?
@@paradox8425 OK, I've rewatched Casey's video where he talked about giving out references to a repeating 4k page (4k of physical memory, repeated over logical address space: "ring mapping [...] repeat the same 4k page forever", 26m30 in the original video) and allowing writes to it.
What I'm getting from this is that if Casey ever allocates more than 4k from the repeating pages, the code would go bananas (objects overlapping in memory).
Still, to answer your points:
1) Yeah. My take is that a write fault is better than silently corrupting another object.
2) Zeroed structs make for good stubs. A length of zero means no items, and if you use IDs over pointers (which Casey mentioned doing), ID 0 is a reasonable default. Similar for enums, but that's a classic.
3) Again, zeroed memory makes for great stubs; the main concern is a null-pointer deref, and for that you need a large page of zeros.
AFAICT, Casey did describe code that looks at the return value as "the n-style way" (26m05 in the original video), so this point seems to match.
4) So I was wrong on that one; I did not expect the ticking time bomb in the "safe" code. Good catch, my bad.
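To make point 2 concrete, a tiny made-up example (purely illustrative, not Casey's code):

```cpp
#include <cstdio>

// With ZII, the all-zero value of every struct is defined to mean "nothing":
// id 0 = no entity, count 0 = no items, enum value 0 = None.
enum WeaponKind { Weapon_None = 0, Weapon_Sword, Weapon_Bow };

struct Inventory {
    int        count;        // 0 items -> loops below simply do nothing
    WeaponKind items[8];
};

struct Player {
    int       entity_id;     // 0 -> "not a real player", still safe to pass around
    Inventory inventory;
};

void draw_inventory(const Player& p) {
    for (int i = 0; i < p.inventory.count; ++i)   // zero count: the body never runs
        std::printf("slot %d: weapon %d\n", i, (int)p.inventory.items[i]);
}

int main() {
    Player stub = {};        // zero-initialized stub: no id, no items
    draw_inventory(stub);    // does nothing instead of crashing or branching on null
}
```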
I cannot live without zero by default (ZII). I don't know what you're talking about, bro.
I think a major issue is that people are typically developing for standard PCs. These concepts become significantly more important when dealing with embedded systems or specialized architectures (like game consoles).
For example:
n) Makes perfect sense if all you've ever programmed for is a PC. These are really nice if you don't actually trust yourself to write proper code and you need the language to hold your hand a bit. Also, because there are so many variations of PCs (different architectures, endianness, etc.), "generalizing big data collections" becomes a bit more difficult. You allocate one object and the language is responsible for managing where it is and how it works. It does have the added benefit of being clear about what the data's intention is (but so does a using/typedef).
n+1) Works wonderfully for embedded/specialized systems, and once you start to get used to it, migrating to PC actually isn't as difficult as it appeared in n. For example, a VERY common method of programming games for consoles is to cache everything as it would appear in memory: simply reinterpret-cast everything to a byte array. There is your level, there is all of the data. You don't have to allocate each individual element or parse data; you're working with memory as a block of data instead of a collection of individual elements. You only have to request a block of data from a heap, and then it's up to you to construct a data structure that allows you to parse that data statically (rough sketch after this comment). There are methods of doing this on PC, and this is only one example. The block of memory itself is responsible for the lifetime of ALL the data, and it's up to the implementation to maintain its state. This is significantly faster; however, you're taking control away from the language (and also taking overhead away from the language), and it's up to you to delegate everything appropriately. People saying that you can have this stage and still do RAII miss the whole reason for programming like this: you aren't allocating resources by initializing them. They are initialized separately from the resource allocation process; the already initialized data in memory is just being navigated as a block of data.
n+2) Seems to be just a "lazy" version of n+1 with the zero state valid, making the empty stack allocation a valid initialization in itself. I'm not a big fan of this personally; however, I have seen the concept used before. I feel this is more of an "anti-pattern".
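A toy version of the block-of-data idea from the n+1 point above; the header layout and names are invented for illustration, not how any real engine lays out its assets:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// The whole level lives in one contiguous blob, laid out exactly as it will be
// used at runtime: a header followed by a flat array of entities. Loading is
// "read the block"; there is no per-object allocation or parsing step.
struct EntityRecord { float x, y; uint32_t type; };

struct LevelHeader {
    uint32_t magic;          // sanity check
    uint32_t entity_count;   // entities follow immediately after the header
};

const EntityRecord* level_entities(const uint8_t* blob) {
    return reinterpret_cast<const EntityRecord*>(blob + sizeof(LevelHeader));
}

int main() {
    // Fake "file contents" built in place; on a console this would be the raw asset already in memory.
    alignas(EntityRecord) uint8_t blob[sizeof(LevelHeader) + 2 * sizeof(EntityRecord)] = {};
    LevelHeader hdr{0x4C56454C, 2};
    EntityRecord ents[2] = {{1.0f, 2.0f, 7}, {3.5f, 4.5f, 9}};
    std::memcpy(blob, &hdr, sizeof(hdr));
    std::memcpy(blob + sizeof(hdr), ents, sizeof(ents));

    const LevelHeader*  h = reinterpret_cast<const LevelHeader*>(blob);
    const EntityRecord* e = level_entities(blob);
    for (uint32_t i = 0; i < h->entity_count; ++i)
        std::printf("entity type %u at (%.1f, %.1f)\n", e[i].type, e[i].x, e[i].y);
}
```

The blob itself owns the lifetime of everything inside it; individual entities are never allocated or freed on their own.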
On RAII for files: in the modern era, 99.9% of file-handle-related operations are read_entire_file(), write_entire_file(), and append_to_file().
In those 3 functions there might be some annoying calls to fclose() in the error paths.
But it's not the 80's anymore; you usually don't have open file handles just hanging around in your program, and if you do, it's because the scenario is extremely performance sensitive, in which case you're not going to want to be calling destructors to get RAII anyway.
Defer is pretty good though, in situations where this comes up.
Also yeah, you'd still put a debug assert where you had to return a stub object. But the point is to keep the semantics simple by not having to track the lifetime or make the caller do a NULL check.
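Defer in C++ ends up as a hand-rolled scope guard, roughly like this (a sketch, not a standard feature; names are made up):

```cpp
#include <cstdio>
#include <utility>

// Minimal scope guard: runs the lambda when the scope ends, whatever the exit path.
template <typename F>
struct Defer {
    F f;
    ~Defer() { f(); }
};
template <typename F> Defer<F> make_defer(F f) { return Defer<F>{std::move(f)}; }

bool copy_first_bytes(const char* src_path, const char* dst_path) {
    FILE* src = std::fopen(src_path, "rb");
    if (!src) return false;
    auto close_src = make_defer([&] { std::fclose(src); }); // replaces fclose() in every error branch

    FILE* dst = std::fopen(dst_path, "wb");
    if (!dst) return false;                                  // src still gets closed on the way out
    auto close_dst = make_defer([&] { std::fclose(dst); });

    char buf[256];
    size_t n = std::fread(buf, 1, sizeof(buf), src);
    return std::fwrite(buf, 1, n, dst) == n;                 // both files closed when the scope ends
}

int main() { copy_first_bytes("a.bin", "b.bin"); }
```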
The "job for every tool" idea is one of the worst parts of online programming discourse. Some things are just bad. He is saying RAII is bad - that is worth engaging with.
Maybe you will decide he is wrong! That would be fine! But it is immensely frustrating to watch a video where someone refuses to really engage with the central premise, doubly so when it pretends to be taking some more enlightened view by ignoring it. The commentary here just assumes that Muratori is being unreasonable, RAII and smart pointers are just a matter of taste, maybe they're useful in certain common circumstances, it must be useful because a lot of people use it and recommend it, etc.
But half of what he said was about how this is a mistake, how you almost never want to use it, and how the people who recommend it are thinking about the problem space wrong. I wish you had engaged with that question instead of just brushing it off and presupposing that some kind of Enlightened Centrism is obviously superior. I think your response here is actually very disrespectful, much more disrespectful than if you had merely said you thought he was wrong.
Not an i++ programmer?
the most error prone proposition in history
Kinda cringe at the end. Not only did you cut out the part where you saw the point of the Casey hater guy and agreed with him, but it turns out that this Casey video did in fact not provide the answers that were promised. But the most cringe part is where you revealed my secret identity. That I repeat parts of the sentence is actually something that happens often, because whenever I watch a Cakez video, I literally get brain lag.
Classic Casey: using a strawman argument to make silly claims. He portrays RAII as bad and stupid, while his only argument is that many small fragmented allocations are bad. Those two things are completely unrelated. RAII can be used for the n+1 style, and is actually found in basically any collection. He never said why RAII is bad or worse than doing it manually. His "very few malloc/free, new/delete" could just be RAII and avoid some potential bugs for literally ZERO drawbacks. He is not arguing against RAII, but against using RAII for countless allocations, and this would apply to new/delete or whatever else as well. He is talking about a programming style as a whole, not a specific technique.
He also did the same with Rust's borrow checker (for whatever reason), but if he had any clue, he would know that the borrow checker pretty much forces you to use n+1. Using indices into arrays/vecs to refer to individual objects is pretty much the default way in Rust. Doing anything but data-oriented programming will make you fight the borrow checker, but on the other hand, if you write in a data-oriented style, the "Karen compiler" turns into Gordon Ramsay who shouts at you when you are doing something stupid.
Lastly, his ZII is just plain stupid. He isn't doing error handling, he is just ignoring errors. That means his program can be in a faulty state which is not immediately obvious. Of course he gives an example that easily supports his claim, by just not rendering something, which is as harmless as it gets, but when you are dealing with actual logic, his stubs can easily butterfly into some wildly unstable state. A stub that is used for something else will keep producing stubs and so on, until it interacts with non-stubs, which can then cause all kinds of unpredictable errors. This is also where his lack of game dev experience kicks in. Imagine storing such a faulty state (literally saving the game), then noticing and fixing the bug. The save state will still be corrupt and potentially unrecoverable.
Watch this: "Handmade Hero | new vs delete | struct vs class | How to get fired". It's not just about reducing allocations, he thinks that tying allocation to initialization is a mistake. Also where did he argue that stubs should be used for literally everything?
He pretty much explains it though... you just need to pay attention. RAII makes it very easy to implement a system where creating basically any object implies an allocation: all memory associated with an object is allocated when the object is created. You CAN use RAII with buffers, but it is not natural. That's why RAII is bad: it makes it easier and more intuitive to write the wrong solution. If you use RAII correctly, then you don't have anything to worry about.
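To make that concrete, here's a tiny made-up example of the two shapes of code I mean (not from the video):

```cpp
#include <memory>
#include <vector>

struct Particle { float x, y, vx, vy; };

// The "RAII-by-default" shape: every particle is its own heap allocation,
// created and destroyed individually because creation == allocation.
void per_object_style() {
    std::vector<std::unique_ptr<Particle>> particles;
    for (int i = 0; i < 1000; ++i)
        particles.push_back(std::make_unique<Particle>());
}   // 1000 separate frees here

// The buffer shape: one allocation owns all of them; individual particles
// are just slots that get initialized (or zeroed) in place.
void buffer_style() {
    std::vector<Particle> particles(1000); // one allocation, all elements zero-initialized
    particles[3] = Particle{1.0f, 2.0f, 0.0f, 0.0f};
}   // one free here

int main() {
    per_object_style();
    buffer_style();
}
```

(Note the second version is still RAII, just one object owning the whole buffer instead of one owner per element.)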
@@rohitaug I am not going to watch another video where he won't make any arguments. Either tell me the point, or f off. I won't sit here watching hours' worth of videos to find the shadow of an argument.
@@kevathiel1913 If you want people to do things for you, you should fix the attitude.
@@tiranito2834 He didn't mention it in this video, what are you talking about? Timestamp it if he did. "Making it easier and more intuitive to write the wrong solution" is a bold claim when there are evidently more issues with misusing raw malloc and new, based on data instead of feelings (e.g. memory issues being the largest offender when you look at vulnerability studies). And even if you end up with too many allocations, you only end up with performance issues in the worst case, which are measurable and easy to fix. Issues caused by misusing raw allocations, on the other hand, can cause UB, which can not only go unnoticed but cause all kinds of problems. Yet you clowns will claim "new and delete are fine if you use them correctly", which is the same argument you are using for RAII, but for whatever reason RAII is the stupid/bad thing?
"I''m mostly posting devlog / devblog videos where I show off the progress I make on the game."
Toxic!