Because the big library is tested and maintained :) the 5 lines of code might have a deep rooted bug in it and if you do data science without tests, Id rather you use a bloated library than buggy code :)))
@@QW3RTYUU If you use Julia, you can check the generated assembly for your function and see if it is "bloated" which is super useful if you want your analysis to run fast.
Your clear articulation is far better than any programming channel I've ever watched. It's clear you have a good handle on the subject matter, but you may have lost touch with those who have a lesser grasp.
Wow, I like the visual theme of this video. Bring back so much memory of my childhood with computer. And the explanation is also very good and comprehensive. I really like it.
Woah im so glad i found this channel!! Super clean descriptions and very well laid out. Will be checking out the rest of your clips for sure!!! Thank you! 🔥❤
Thanks for this, it has helped solidify the concept in my brain. Computerphile has a "running a buffer overflow attack" video covering the security perspective and how it is abused by threat actors, so it was nice to see your video with more of the code-exposed side of things.
Thank you very much! I'm a software programmer, but on the high-level side, so I've never fully understood the memory part. It gets clearer after your video.
Wow, so glad I stopped to watch. I love your MacOS desktop theme and having your video in a window, looks amazing! I've always wanted to know what a stack overflow was, thank you.
Perhaps a nice idea for a follow-up is: how does the CPU even detect an overflow? It's just a pointer that gets lower and lower, so what mechanism actually triggers a seg fault? And the stack size is something that can be set at link time, meaning you as a programmer have some control over its size, which can be handy if you expect your program to consume a lot of stack. Could be interesting to dive into as well. Just a thought.
She already explained that in this video, at least that is what I inferred. What you describe (detection of the segfault) is the very reason that the stack counts downward (or at least one of them). There is a "minimum" memory address that it can count down to, and it knows this minimum address in advance based on the initial allocation. The CPU is watching for that lower boundary memory address to become used, which is what triggers the seg fault.
Operating systems have memory management components that use the MMU in the CPU to detect address violations, but devs have to register an exception handler for it to be handled gracefully; otherwise the CPU or OS sends a segfault to the process. If the OS is that process...
In big simplification: when a process runs at the user-mode level, it uses virtual memory instead of physical memory, where virtual memory pages are mapped to physical memory addresses. Each virtual memory page can be flagged as executable, readable, or writable, among other things (depending on the CPU architecture and operating system). What happens when a process creates a new stack for a thread? There is an allocation of virtual pages which are flagged as read and write, and they are mapped linearly in the process's virtual memory space: for example, the first 4 KB page at address 0x40000000, the second page at address 0x40001000, and so on. But the page just below the first allocated pages is flagged as not allocated (or allocated but flagged as not readable/writable). Therefore, when the CPU tries to access it, a page fault exception is generated and the CPU switches to executing the page fault handler of the operating system (a function in kernel mode which runs when a page fault happens). The operating system knows, by analyzing the active process and the faulting memory page address, that it belongs to the virtual page just before the stack's beginning, and it then generates a stack overflow exception.
@@dixztube "Windows Internals" book; Intel 64 and IA-32 Architectures Software Developer Manuals; wiki.osdev.org; the Linux kernel source code
Really like the aesthetics here. It's sorta like Bill Nye but for comp sci. Crazy quality for a channel this size. You are definitely going to grow if you keep it up!
Lucid, thorough, and to the point. Please use a local calculator and value converter next time around (or at least host your own web instances of said tools).
Currently taking CS50 to start my programming journey and I really enjoyed this video. Luckily I'm far enough into the course to understand a good portion of what you were talking about! Thanks :)
Homegirl is smarter than most of my teachers when I was a student. Even though I stopped coding a long time ago, this was a very enjoyable video. Your voice sounds a little bit like D.VA from Overwatch, so I feel like this channel is D.VA's side hustle 😂 Subbed!
Fantastic video! I've had to explain stack overflow in the past - you did a much better job. From now on I'll just send people here. I'd love to see a follow-on video about stack smashing and buffer overflow vulnerabilities, and how to write code to avoid them.
I'm exploring the idea of getting back into coding after about 30 years of never having looked at a line of it. Your videos are helping to poke (geddit?) memories loose from the rust that's accumulated over all that time. Thanks for posting and thanks for you!
Great explanation and visual representation. I agree, the dark blue text was hard for me to read against the dark terminal/text editor. But other than that, great content. As someone who is "vintage," I love the retro vibe as well. I def. subscribed for more videos like this. Thanks!
Can I just declare a large array and fill it with bytes to also get a stack overflow? All data not put into the heap just adds to the stack, so a statically declared array should also easily fill up that 8MB of stack size, right?
Correction: a static array won't end up in the stack, it will be in a separate global memory store which is allocated at the startup of the program by the OS. To put the array on the stack, it must be declared within a function so that's what my question refers to.
Ah, this is nice. I remember blowing other students' minds in uni when I showed them you can infer buffer sizes for most functions in C at call time, since you already initialize the data stream somewhere. I avoided using malloc my entire uni life and never had a single overflow. People outsource their brains too much to their language.
I was guessing how you were going to trigger a stack overflow, like “how many local variables do you need”. Then I was like “oh yeah, recursive function calls, duh”. I have totally done this before when I screwed up writing a proper base case before so I felt kinda dumb
On Unix-style systems, I believe the SIGSEGV signal is raised, and the default action is to terminate the program. Though if you've exhausted the stack, even if you catch SIGSEGV, it will probably kill the program.
@@normansowth6493 I think he wants to know if memory is cleared when an application crashes due to a stack overflow, and it is cleared for sure, for security reasons. It is possible to recover from a stack overflow on Windows. On Linux, if the SIGSEGV handler gets executed during a stack overflow (which means the program somehow managed to push the int signal parameter onto the stack: /*signal-handler*/* signal( int sig, /*signal-handler*/* handler );), then it should be possible to recover by changing the value of the stack pointer register and jumping somewhere else. I'm not eager to find out, but it is possible that the signal handler is executed with a different value in the stack pointer register, and therefore execution of the signal handler is possible (it works like the SetThreadStackGuarantee WinAPI function on Windows).
When I learned about recursion it was in math, and the fact that it just kept on going was the point...infinite process represented with finite symbolic description.
I was hoping you would show us the actual process of the segmentation fault, for example by single-stepping through the assembly execution right at the point that triggered it. The related problem of a buffer overflow of a variable on the stack that causes corruption of the call stack frame is also interesting. Since the stack grows downward but arrays typically grow upward, if you allocate a small array in a function as the first local variable and then write past the end of the array, you will start overwriting the function return address and processor state info. Assuming the function reaches its end without crashing, the process of returning to whoever called the function will cause problems.
The explanation is straight to the point, detailed and correct. Without begging for likes and subscribers. And with good aesthetics/editing. This is my new favorite YouTube channel.
Agreed. The quality is incredibly high.
forgot one very important detail... 😏
@@Zach-p4e6p ...
@@Zach-p4e6p hahahahhaha
Great video. The only problem is the blue font on the terminal is not very visible on a black background.
Yeah I noticed that 🥲 will fix it in future videos
My eyes are black and blue now @@lauriewired
@@lauriewired can’t fault you for it, it is the default for Linux. You should look at the “cool-retro-term” project. It might not fit with the theme of your videos, but if you want a retro feel to your terminals then you can’t beat it.
upgrade that eye
@@dannfr his vision is not augmented.
Wow, someone who actually knows what they're talking about. Accurate, concise, and to the point. Excellent job.
SIMP
Does she? She did not mention how exactly a stack overflow gets triggered (a page fault exception, which happens when accessing a guard page). But I get that not everybody wants to be truly knowledgeable.
@@TheOnlyJura she just said that the operating system terminates the process, which is correct, since the exception vector points into kernel code. Having seen a few of her other videos, I'm quite confident that Laurie could explain how the hardware generates the exception and how the kernel's handler works, but covering how a program is terminated and unloaded from memory at a detailed level is a whole series of videos by itself.
I like the juxtaposition of low-level concepts from a security engineer known for writing ASM, on a Windows XP VM, using a website to do hex calculations, and a Google search to divide by 1000. And the vi/Notepad++ use. It's all so random, but it feels very much planned.
I think it's Windows 10 tho :)
Exactly this, it really fucks with my mind. Such a well done video though.
@@bazylicyran7727 it’s probably software from Stardock with an XP theme.
I think this is part of her channel branding. Personally, I could have done away with the whole algebra section but yeah.
And in the closing screens it looks like a MacOS running a windowed game of Burnout!
No clickbait, pure video teaching about stack overflow. Well done!
Cool aesthetics. You definitely have that finesse for themes and your speech is so clear. Cool video.
7:12 People might get the wrong idea that you are allocating memory in the stack as you push. Instead, you are pushing and poping data in the stack which is already percolated and is of fixed size - it's like a fixed-size array where the stack pointer is like an array index. Despite that, great video!
"Percolated" 😄
@@theRPGmaster ups :D
Technically there is no such thing as "memory allocation"; it is just a language concept ;)
You must never pope data. Data poping is haram.
The stack *is* allocated on demand. The 8 MB of stack referenced in this video is not 8 MB of physically reserved memory; it is just 8 MB worth of unbacked virtual addresses, and as each unbacked address is read or written, there is an exception and the OS's exception handler then allocates physical memory to that page, one page at a time (populating the page table).
Another god-tier video! Thank you so much for everything you do and all the effort you put into these videos!
Great video and I really appreciated both the pace and the consistency of the pace. It was exactly the right rate of information the entire way through and at no point did I feel like speeding the video up or wandering into the comments and missing anything.
Wow, your videos are really some of the best out there! I've never seen an educational video have this much personality. I can't wait to watch more!
This young lady's voice is very nice to listen to and learn. Thank you, Laurie.
A year ago her voice was lower than it is now. I am curious why she changed it / what her normal tone is.
@@pepijn_m huh, I watched her first video; she is struggling to speak and breathe. From that I conclude this is her real voice.
@@JoRoBoYo I'm curious if she's a trans. She does seem to have some level of masculine characteristics.
Easy on the eyes too
@@BigAl7976 You're pathetic.
Can we stop a moment to thank Laurie for the amazing theme she has? Both the room and the frame are amazing.
Yet another fantastic video! The stack usage visualizer really nails down the concept and makes it crystal clear. Great job!
With every single video I'm delighted by how well-spoken you are. No "ehm", "like", or similar fillers. It feels really good listening to you describe the topic.
listening and looking good too!
@@strelkan 1 googol% agree.
It's a script and good production; one should not underestimate the work that goes into making a video like this. It's well produced though, imo.
That's right -- the absence of noise-vocalizations makes the speech easier to hear and understand and shows respect for the audience and self-respect.
Ready to stack this knowledge
until it overflows.
Mine was popping as it pushed. I'm left only with the last 5 seconds of the video :/.
Gonna rewatch with a queue.
Here dawg, you can place it right next to my stack of bread
@@fyoutube2294 I do love bread
Laurie, you have just the right amount of computer nerd, videographer, super awesome programmer, and mad computer scientist to make me wish one of my daughters had turned out like you. Keep it up! I'm a computer scientist and appreciate your well rehearsed explanations of these often hard to understand concepts.
your daughters are probably amazing, daughters are the best
Great visual, Laurie! It really helps solidify how the stack grows down toward the heap. Great graphic!
I am developing a homebrew music-making app for the Nintendo 3DS and a stack overflow happened to me recently there.
At first I thought my project was doomed and done for, it gave me quite a shock.
After googling, I learned that 32 KB are allocated by default and that it can be changed, so I allocated 128 KB instead. Phew... :D
It might just take longer to overflow if you're pushing but not popping.
i read that you were developing hebrew music
@@reeb3687 😆
😂@@reeb3687
You might consider moving something to the heap instead; stack overflows are usually a sign of poor code. For example, you probably don't want to store the user's entire song on the stack, since it's needed for the entire duration of the program anyway. Keep in mind that the main benefit of the stack is quick allocations and deallocations.
Interesting project btw!
Yo this channel is criminally underrated. You're not only really knowledgeable, but also great at explaining and very articulate. The algorithm should pick this up soon :)
To be fair, the first video was only a year ago. Give it some time and she will grow very fast if she stays consistent.
your channel is really cool, your editing and teaching skills for low-level code are the best I've seen on YouTube, I'm learning a lot, thank you
Not gonna lie, I was expecting just a Python program that recursed forever. To see actual assembly code, plus Python for GUI visualization, plus the calculation of how much memory each iteration was using, plus the use of an old-fashioned Solaris-like desktop... with nice informational popups at opportune times :D and the use of both vi and Notepad++!! You got a subscriber :).
OMG, one of the very few youtubers that actually delivers real content without BS, without showing unrelated stuff. Many thanks.
Excellent video! Very well researched, love that you put the code available and I absolutely adore the whole Serial Experiments Lain theme!
This is one of the top 3 best channels in the computing/programming community here on YouTube.
Very cool! I learned a long time ago about using stacks, and how they overflow, and your explanation is right on point. Also you speak perfectly. Thank you LaurieWired!
Really clear. Back in the day, assembly was a bit too much, but I appreciated it more and more. Even though it's not super productive. :)
The lower-level the language, and the harder it is to code in, the more efficient the code will be in the end, because the programmer won't want to put in the effort to make it bloated by adding extra junk. Now we have high-level languages used by people who have no idea what's going on behind the scenes or at the lowest levels, plus layers upon layers of libraries and frameworks, and you end up with the bloated garbage that is pretty much all software these days. Because why write even the simplest thing from scratch in 5 lines of code when you can just use a 500 MB library to do the same thing? I've seriously asked people in interviews who supposedly "know how to code" and supposedly have "data science" degrees the simplest, most fundamental questions about Big O and complexity, and they have not even the faintest whiff of an idea of what I'm talking about. All they know how to do is write glue code to glue various libraries together, and if it runs too slow, buy a bigger CPU.
@@gorak9000 I feel you, but it is simply incorrect to say high level = inefficient. A high-level language gives the compiler more freedom to optimize. With low-level code the programmer has to optimize by hand. Oftentimes the compiler or interpreter can and will do a better job. A good example of this is SQL: it is a very high-level language, yet extremely efficient at fetching data. Another example is JIT compilers, which have knowledge about the actual architecture the code runs on and incorporate that into the compiled program. This works so well that GCC added profile-guided optimization.
Another example is Futhark, a high-level GPU computing language. Yes, you can likely write more efficient CUDA code by hand using shared memory and intrinsics, but let me tell you, that is a pain and a half in the beginning, and Futhark is plenty fast. Futhark lets you declaratively specify what you want calculated rather than telling the processor step by step what to do.
Another example is JSON encoding and decoding in Scala, for which there are libraries that generate very efficient JSON (de)serialization code using pseudo-SIMD instructions (they load the data into a 64-bit long and use bit shifts to parse the string very efficiently). The developer absolutely does not have to deal with this; the compiler does it on their behalf.
On that note: protobuf is also a high-level language which gets compiled into very efficient (de)serialization code, and nobody would want to write that by hand. If you factor in correct alignment to make efficient use of CPU and I/O caches, you'd better leave this task to a well-written code generator than try to reinvent the low-level efficiency wheel all the time.
Rust is another great high-level example which gets compiled into very efficient code.
Even Haskell can run really fast, even though it definitely is not a low-level language. Thanks to referential transparency, the compiler can make assumptions about the code which it could not make otherwise and can produce efficient native code for the target platform. Even though you write Haskell as if you couldn't mutate your data, the compiler can turn this into very efficient code which safely mutates state under the hood. This would not be possible if the developer wrote unsafe low-level code, because that takes away the compiler's guarantee that the transformation is safe.
Of course, just giving a compiler the opportunity to optimize does not necessarily mean it will, but in general they do a great job.
It is true, however, that many developers have only surface-level knowledge about what they are doing and are lost without Google and Stack Overflow, and it is those developers who write inefficient code that works OK for their small test data but absolutely explodes when fed anything of marginally interesting size.
@@gorak9000I feel you 100% , but it is simply incorrect to say high level = inefficient. A high level (language) gives the compiler more freedom to optimize. With low level code the programmer has to optimize by hand. Often times the compiler or interpreter can and will do a better job. A good example for this is SQL. It is a very high level language, yet extremely efficient at fetching data. Another example are JIT compilers which have knowledge about the actual architecture the code runs on and incorporate that into the compiled program. This works so well, that gcc added profile guided optimization.
Another example is futhark which is a high level GPU computing language. Yes, you can likely write more efficient CUDA code by hand using shared memory and intrinsics but let me tell you that is a pain and a half in the beginning and futhark is plenty fast. Futhark allows you to more declaratively specify what it is you want calculated rather than telling the processor step by step what to do.
Another example would be JSON encoding and decoding in Scala, for which there are libraries that generate very efficient JSON (de-)serialization code using pseudo-SIMD instructions (they load the data into a 64-bit long and use bit shifts to parse the string very efficiently). The developer absolutely does not have to deal with this; the compiler will on their behalf.
On that note: protobuf is also a high-level language which gets compiled into very efficient (de-)serialization code, and nobody would want to write that by hand. If you factor in correct alignment to make efficient use of CPU and I/O caches, you are better off leaving this task to a well-written code generator than trying to reinvent the low-level efficiency wheel all the time.
Rust is another great high level example which gets compiled into very efficient code.
Even haskell can run really fast, even though it definitely is not a low-level language. Even though you write haskell as if you couldn't mutate your data, which if executed naively would be inefficient, thanks to referential transparency the compiler can turn this into very efficient code that safely mutates state under the hood. This would not be possible if the developer wrote unsafe low-level code, because that takes away the compiler's guarantee that the transformation is safe to do.
Of course, just giving a compiler the opportunity to optimize does not necessarily mean it will, but in general they do a great job.
It is true, however, that many developers have only surface-level knowledge of what they are doing and are lost without Google and Stack Overflow, and it is those developers who write inefficient code that works well enough for their small test data but absolutely explodes if fed anything of marginally interesting size.
Because the big library is tested and maintained :) The 5 lines of code might have a deep-rooted bug in them, and if you do data science without tests, I'd rather you use a bloated library than buggy code :)))
@@QW3RTYUU If you use Julia, you can check the generated assembly for your function and see if it is "bloated" which is super useful if you want your analysis to run fast.
Very clearly communicated and well edited video. Cheers Laurie
Really glad I discovered your channel, as it's quickly becoming one of my favorites
Awesome explanation Laurie!
Everything about your videos is so cool and so well crafted... Such a great job !
Awesome video, I'm finally understanding memory mapping 😁
Your clear articulation is far better than any programming channel I've ever watched.
It's clear you have a good handle on the subject matter, but you may have lost touch with those who have a lesser grasp.
Highly informative computer science ASMR, I love it ❤️
By the way this hairstyle looks fabulous, consider wearing it more often!
Let's all love Laurie!
Let Laurie love all!
simp
HUH
🎶 *I am ballin, I am faded* 🎶
Definitely, I had not thought of it, but... yes, I am in love! I never thought looking at assembly code could be this good.
Wow, I like the visual theme of this video. It brings back so many memories of my childhood with computers. And the explanation is also very good and comprehensive. I really like it.
Whoa, I'm so glad I found this channel!! Super clean descriptions and very well laid out. Will be checking out the rest of your clips for sure!!! Thank you! 🔥❤
Came for the pigtails. Stayed for the pigtails.
Came for the lit looks, stayed for the computer science
Yeah, I’m completely turned on by her programming in Assembly but I’m learning new things too.
For every thumbs up this comment receives, a thousand white knight gate keepers screech in total agony. lol
fr this chick has crazy energy
My stack is overflowing and she isn't going to pop it off 😩
@@pluto9000That's kinda cringe yo, ngl. Keep it classy kids!
This was an incredible presentation and the UI is amazing!
Thanks, Laurie! Your subjects are very good and you're really good at explaining. Quality content!
This lady needs to be the face and voice of most instructional videos!
Oh my gosh I love what you've done with your hair!!
Thanks for this, it has helped solidify the concept in my brain. Computerphile has a "running a buffer overflow attack" video for the security perspective of this and how it is abused by threat actors, so it was nice to see your video with more of the code-exposed side of things.
Thank you very much! I'm a software programmer but on the high-level side, so I've never fully understood the memory part. It gets clearer after your video.
Wow, so glad I stopped to watch. I love your MacOS desktop theme and having your video in a window, looks amazing! I've always wanted to know what a Stack Overflow was, thank you.
It's been decades since I have seen that olive XP taskbar, I mean all the nostalgia videos have the blue one...
Perhaps a nice idea for a follow-up is: how does the CPU even detect an overflow? It's just a pointer that gets lower and lower, so what mechanism actually triggers a seg fault? Also, the stack size is something that can be set at link time, meaning you as a programmer have some control over its size, which can be handy if you expect your program to consume a lot of stack. Could be interesting to dive into as well. Just a thought.
She already explained that in this video, at least that is what I inferred.
What you describe (detection of the segfault) is the very reason that the stack counts downward (or at least one of the reasons). There is a "minimum" memory address that it can count down to, and this minimum address is known in advance based on the initial allocation. The MMU is set up so that touching memory below that lower boundary is what triggers the seg fault.
Operating systems have memory management components that use the MMU in the CPU to detect address violations, but devs have to register an exception handler for it to be handled gracefully; otherwise the CPU/OS sends a segfault to the process. If the OS itself is that process...
In big simplification: when a process runs in user mode it uses virtual memory instead of physical memory, where virtual memory pages are mapped to physical memory addresses. Each virtual memory page can be flagged as executable, readable or writable, among other things (depending on CPU architecture and operating system). What happens when a process creates a new stack for a thread? Virtual pages flagged as read/write are allocated and mapped linearly in the process's virtual memory space, for example a first page of size 4 KB at address 0x40000000, a second page at address 0x40001000 (0x40000000 + 4096), and so on. But the page just before the first allocated memory page is either left unallocated, or allocated but flagged as neither readable nor writable. Therefore, when the CPU tries to access it, a page fault exception is generated and the CPU switches to executing the operating system's page fault handler (a function in kernel mode which runs whenever a page fault happens). By analyzing the active process and the faulting memory address, the operating system knows the address belongs to the virtual memory page just before the beginning of the stack, and it then generates a stack overflow exception.
@@0x90hawesome. Any books you’d recommend to learn more on this
@@dixztube "Windows Internals" book
Intel® 64 and IA-32 Architectures Software Developer Manuals
wiki.osdev.org
Linux kernel source code
Never expected it, but I'm getting a deeper explanation here than at uni
Beautifully executed visualization of a stack overflow. Simple and clear.
Really like the aesthetics here. It's sorta like bill nye but for comp sci. Crazy quality for a channel this size. You are definitely going to grow if you keep it up!
Gotta say, the Serial Experiment Lain theme you've got going is just amazing
That was an amazing demonstration on how the stack works.
Your channel is 10/10, congrats from Italy, subscribed
Definitely one of my fav vids from this channel
Lucid, thorough, and to the point.
Please use a local calculator and value converter next time around
(or at least host your own web instances of said tools)
Great diction, pronunciation, and voice recording quality. Fantastic job
Currently taking CS50 to start my programming journey and I really enjoyed this video. Luckily I'm far enough into the course to understand a good portion of what you were talking about! Thanks :)
Amazing. How do you get the 90s mac OS vibe?
Homegirl is smarter than most of my teachers when I was a student. Even though I stopped coding a long time ago, this was a very enjoyable video. Your voice sounds a little bit like D.VA from Overwatch, so I feel like this channel is D.VA's side hustle 😂 Subbed!
Cool video! I like the use of assembly to explain “why” SOs occur: it’s the link reg! Not immediately obvious in C.
omg wow, I love your hairstyle 😍
Loving your tutorials.
What a bright star! Very nice, thank you for doing this.
It's good to see younger coders that learn something
phenomenal quality for everything
new fav channel!
Fantastic video! I've had to explain stack overflow in the past - you did a much better job. From now on I'll just send people here.
I'd love to see a follow-on video about stack smashing and buffer overflow vulnerabilities, and how to write code to avoid them.
Even though I didn’t understand everything you said, I learned some things. You got a new subscriber.
that visualization is so sick.
I'm exploring the idea of getting back into coding after about 30 years of never having looked at a line of it. Your videos are helping to poke (geddit?) memories loose from the rust that's accumulated over all that time. Thanks for posting and thanks for you!
Awesome video! Your style is incredible! Also your explanations are great!
Joel Spolsky would approve of this video, nice job :)
Your bangs are so freaking cute!!!!
deviant
Great explanation and visual representation. I agree, the dark blue text was hard for me to read against the dark terminal/text editor. But other than that, great content. As someone who is "vintage," I love the retro vibe as well. I def. subscribed for more videos like this. Thanks!
Great video! Fun fact: at 9:20 the decimal value was automatically presented to you just underneath the hex result :)
Subscribed bc of the Lain intro
also I loved the desktop customization :)
Very clear and I am not even a coder. It was even clear and understandable watching it on 1.25 playback speed (faster).
I loved the video aesthetic, its nostalgic! and of course great content, thanks!
Can I just declare a large array and fill it with bytes to also get a stack overflow? All data not put into the heap just adds to the stack, so a statically declared array should also easily fill up that 8MB of stack size, right?
Correction: a static array won't end up in the stack, it will be in a separate global memory store which is allocated at the startup of the program by the OS. To put the array on the stack, it must be declared within a function so that's what my question refers to.
@@notiashvili Yes.
Your HEX calculator was so kind to inform us of the answer in decimal directly as well 😉
Ah, this is nice. I remember blowing other students' minds in uni when I showed them you can infer buffer sizes in most functions in C at call time, since you already initialize the data stream somewhere. I avoided using malloc my entire uni life and never had a single overflow. People outsource their brains too much to their language
Can we also get a video on Buffer overflow ?
Thank you I've learnt something new today. Also I appreciate visual progress bar showing filling up of the stack 👍
this is quality educational material while entertaining as well. it's a travesty you don't have more subs.
Hello Laurie. I am very happy to finally be pointed by the YouTube algorithm towards women who have engineering smarts. Refreshing!
I love Indian people and I owe them a lot as a programmer, but it's just nice to hear a different accent now and then. Also, nice production quality
i like this person's face, clicked cause of the thumbnail, but i dont code so i can't stay, wish you well on your youtube journey
That was very well reasoned and explained.
Thanks for an amazing explanation!
I was guessing how you were going to trigger a stack overflow, like "how many local variables do you need?" Then I was like "oh yeah, recursive function calls, duh." I have totally done this before when I screwed up writing a proper base case, so I felt kinda dumb.
Nice video, that ulimit -s command you showed seems handy and I like your spin on the Lain aesthetic!
The aesthetics on this video are CRAZY. Do you do your own editing? If so, did you make all the graphics in-house?
Yup, I do all the editing+graphics. It's a mixture of After Effects + Premiere, and I used figma to design some of the OS elements :)
@@lauriewiredfigma 💀
After the app crashes with a segfault due to stack overflow, would the stack buffer still be filled? Or will it be cleared? If so, who would clear it? The OS? The MMU?
On unix style systems, I believe the SIGSEGV signal is raised, and the default action is to terminate the program. Though if you've exhausted the stack, even if you catch SIGSEGV, it will probably kill the program.
@@normansowth6493 I think he wants to know if memory is cleared when an application crashes due to a stack overflow, and it is cleared for sure, for security reasons. It is possible to recover from a stack overflow on Windows; on Linux, if the SIGSEGV handler is executed during a stack overflow (which means the program somehow managed to put the int signal parameter on the stack, as in /*signal-handler*/* signal( int sig, /*signal-handler*/* handler );),
then it should be possible to recover by changing the value of the stack pointer register and jumping somewhere else. I'm not eager to find out, but it is possible that the signal handler is executed with a different value in the stack pointer register, therefore execution of the signal handler is possible (it works like the SetThreadStackGuarantee winapi function on Windows)
Does anybody know what song plays in the intro? I love the 80s feel of it
Need to start a drinking game every time I hear 'actually'.
When I learned about recursion it was in math, and the fact that it just kept on going was the point...infinite process represented with finite symbolic description.
Really awesome explanation!
Thanks for this!
didn't particularly need this information but i enjoyed learning it from you
Thank you for turning the computer brain inside out. I love it.
I was hoping you would show us the actual process of the segmentation fault.
For example, by single-stepping through the assembly execution right at the point that triggered it.
The similar problem of a buffer overflow of a variable on the stack, which corrupts the call stack framework, is also interesting. Since the stack grows downward but arrays are typically indexed upward, if you allocate a small array in a function as the first local variable and then write past the end of the array, you will start overwriting the function return address and processor state info. Assuming the function reaches its end without crashing, the process of returning to whoever called the function will cause problems.