To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/CoreDumped. You’ll also get 20% off an annual premium subscription.
Could you make a video for floating point?? how is it work??
You son of a B, I'm in.
Really, your videos just make me want to know more 🎉
This channel is like reading an OS textbook, but it's actually intriguing and doesn't put you to sleep.
Wait what are you doing here😂😂
os.sleep
Try reading Remzi's "OS: Three Easy Pieces".
For a 2nd-year university systems paper, the final assignment was to create a multi-tasking kernel. The program would receive a set of parameters for each sub-program, and then use a hardware timer to switch the actively running program every so often (based on some arbitrary time-slice value). When the slice expired, all register values would be saved to entries in a pre-allocated block of memory, and then the values for the next scheduled task would be loaded into the registers. It was a fantastic way to learn how a form of concurrency can be achieved!
Can I get a peek at your code? I'm also in osdev but I'm quite lost atm.
Sorry guys, won’t be able to help you there.
Yes can we get the code please
Thank god I didn't go into computing
@@quanmcvn30 You need help with task state switching but you're an osdev? Do you mean you're an osdev student? There are many small kernels on github, for various architectures, that you can look at to see how they handle TSS.
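As an illustration of the mechanism described at the top of this thread, here is a minimal C sketch of a timer-driven switch that saves register values into a pre-allocated per-task slot and loads the next task's values. All names and the register layout are hypothetical, and a real kernel would do the actual save/restore in assembly inside the interrupt handler:

#include <stdint.h>

#define MAX_TASKS 8

// Hypothetical register layout; a real one mirrors the actual CPU registers.
struct cpu_context {
    uint64_t pc;        // program counter
    uint64_t sp;        // stack pointer
    uint64_t regs[16];  // general-purpose registers
};

static struct cpu_context tasks[MAX_TASKS]; // pre-allocated block, one entry per sub-program
static int current = 0;
static int task_count = 2;

// Invoked when the hardware timer signals that the time slice has expired.
void on_timer_tick(struct cpu_context *live) {
    tasks[current] = *live;                  // save the interrupted task's register values
    current = (current + 1) % task_count;    // round-robin: pick the next scheduled task
    *live = tasks[current];                  // load its saved values; returning from the
                                             // interrupt then resumes that task
}

int main(void) {
    struct cpu_context running = { .pc = 0x1000 }; // pretend task 0 is executing here
    on_timer_tick(&running);                       // slice expires: switch to task 1
    return 0;
}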
Something to point out: While Multics was important where UNIX was concerned, the first time-sharing OS was the Compatible Time-Sharing System, developed at MIT for their IBM mainframes.
The name comes from the fact that it was also capable of batch processing, when needed. Eventually, as that capability was no longer required, the Incompatible Time-sharing System was developed.
Thanks
Great stuff, thanks!
The 6809 had OS-9 which was an awesome preemptive multitasking OS for the time.
This is actually amazing. I enjoy learning how the code I write actually works. I started programming in JavaScript where I had no idea how the computer executed my code. Now I program in C# and I constantly think about how my code runs.
I'm working on a video about how computers execute code, you'll love it!
@@CoreDumpped This was a great video and I subscribed after watching it. This comment is worthy of hitting the bell.
And I'm thinking of how so many observable asynchronous pieces of code manage not to collide with each other
Ah cool, another C# developer here (among other languages). Definitely applicable there, as Tasks and async/await are very important features in the language. C# was also basically the first language to implement that.
Programming-language-level scheduling systems are basically a solution to move scheduling away from the OS level (threads) to the logical level within the application itself, which is especially useful when you have a lot of concurrency, like many I/O tasks, because it uses a whole order of magnitude fewer resources; OS threads are far more expensive.
The scheduling mechanism still maintains multiple threads, but far more tasks are placed on a queue to be consumed by those threads, in a very similar fashion to how an OS scheduler works.
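A rough C (pthreads) sketch of that shape, many queued tasks consumed by a small fixed pool of worker threads. This is only an illustration of the idea, not the actual C#/.NET task scheduler, and every name here is made up. Compile with gcc -pthread:

#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 2
#define N_TASKS   8

typedef void (*task_fn)(int);

static task_fn queue[N_TASKS];      // the task queue (fixed-size for brevity)
static int arg_of[N_TASKS];
static int head = 0, tail = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void say(int i) { printf("task %d ran\n", i); }

static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (head == tail) {                  // queue drained: this worker is done
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        task_fn fn = queue[head];
        int arg = arg_of[head];
        head++;
        pthread_mutex_unlock(&lock);
        fn(arg);                             // run the task on whichever worker grabbed it
    }
}

int main(void) {
    for (int i = 0; i < N_TASKS; i++) { queue[tail] = say; arg_of[tail] = i; tail++; }

    pthread_t workers[N_WORKERS];            // far fewer threads than tasks
    for (int i = 0; i < N_WORKERS; i++) pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < N_WORKERS; i++) pthread_join(workers[i], NULL);
    return 0;
}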
@@jongeduard This video helped me understand the basics of concurrency and your comment has helped me understand application-level concurrency in terms of async
Fantastic video.
A request. Can you make 3 videos on the following topics?
1. OS paging and virtual memory
2. How a computer program (let's say a C program) is compiled to machine code and then executed (all at the physical hardware level)
3. The entire process of typing something on the keyboard and sending a search query to Google, then getting back the response (everything in between, starting from the key signal from hardware to OS, and thereafter the networking concepts of how data travels)
lol do you have an SRE interview coming up?
1 and 2 for a short video.
3 is a massive undertaking.
@@camelCase60 No. I have to give a presentation to college freshers who have just joined the office and I want to make it an animated presentation by simplifying the concepts. I am very bad at creating animations and explaining things in a simplified way. I have been searching up all kinds of materials on YouTube but it's not coming out very well. Such videos will be helpful. Teaching is a skill which I do not possess, so I am asking for help.
@@deanvangreunen6457 agree.
@@phoneix24886 so you want him to do your job for free?
Low-level stuff has always fascinated me. I was a little afraid, but a couple of months ago I started studying x64 instruction encoding because I'm working on a compiler as a fun project. So as time goes by I'm getting better with low-level stuff, and when I see that I understand more of the concepts behind this video, I'm glad about the progress I've made.
Your videos are bangers by the way!
I love it too!
Low-level stuff has been my passion since the 1970s. Back then, there were few alternatives other than BASIC. I think the first C compiler I had was late 80s.
I've been working in IT for 34 years and 30 of those years I've been developing software. I started on DOS 3.3 & Windows 3.0. I remember the cooperative multitasking of Windows back then. Anyways, even with all that experience I still learned tons from this fantastic video. Really appreciate the animated graphics which help the concepts to be understandable. Really amazing!
My brain was at the start like: say it, say it, SAY IT DAMN (scheduler was the word I was waiting for)
I was like "Concurrency? Well, context switching, of course..."
Thank you so much. As a no computer science degree programmer, this helps a lot and fills the needed gaps to understand programming better.
easily one of the best comp sci channels on youtube. another banger and a half!
You by far produce the best explanation videos of computing architecture. These are video topics I have been dying to know about. I personally would really appreciate it if you would post your resources/references in the description so I could learn further on my own. Also, the AI voice was concerning at first, since I figured it was another low-effort nonsensical channel, but that's definitely not the case.
The AI voice is really high quality tbh
I completely missed it until the voice pointed it out 😮
i genuinely didnt realize it was ai
Omgg i wanted to find out how they run simultaneously and i searched but those stackoverflow posts were confusing and you posted a video :D
Pretty cool. Only thing I think was missing and should have been referenced is instruction-level parallelism and things like instruction pipelining, scalar vs superscalar processors, etc
The N64 from 1996 was scalar, in-order. Itanium was in-order superscalar. DEC Alpha could execute out of order, as did Cray. Other superscalar CPUs are unsafe: Spectre. Branch prediction was invented by IBM and slowed down their processor. Now it allows attacks.
I took OS class last semester and this channel really helped me consolidate what I learned and make me appreciate it a lot more, thank you!!
Concurrency also helps maximize processor utilization. In general, the CPU is the fastest part of the machine, so it will spend lots of time waiting on I/O if executing only 1 process. With concurrency, other processes may get CPU time while 1 process waits on some I/O component.
Scheduling is such a genius way of handling multitasking. It's been around forever and works amazingly. IMO understanding how execution contexts switch, especially with normally untouched registers such as the segment registers and CR(N) registers, can be such a eureka moment in systems software development and in understanding how "it all" works.
This was a really helpful video! It would be really nice to have a clarification of process scheduling and how that works in conjunction with threads. I.e. if I have a programming language that allows multithreading, do all threads run in the same process as far as the OS is concerned, or does each thread get its own process?
Yes, threads are handled very similarly to processes; they're also scheduled. There are some differences tho, for example if the main process is terminated, all of its threads are also terminated (unless otherwise specified). I'll record a video with more details on this.
A "process" is, in a lot of ways, the same thing as a "program". It can have one or many threads, but the crucial thing is that they all share the same memory space, so, for instance, a pointer from one thread can be used by another thread in the same process without any issue beyond the normal multithreading hazards. By contrast, if you tried to dereference a pointer from another process, you'd very likely crash with a segfault, or at the very least, access the wrong data.
A process is the topmost level in the OS environment. Each process is given its own address space, and they cannot access memory allocated to other processes unless both processes explicitly request the OS to allocate a shared memory buffer. This is why on modern operating systems, a program can (usually) crash without taking down the entire OS and forcing you to push the reset button. Also, when you bring up the task manager in Windows, it gives you a list of all processes running on the system.
Note that some programs will spin up multiple processes in addition to multiple threads in each process. One good example is a client-server model, where you want the option of multiple users interacting in some way, but for single-user situations, it can be convenient to run the exact same server program on the same computer as the client, rather than having an entirely separate single-user mode. It can also be useful when you're dealing with untrusted code, such as anything on the internet. If your browser, for instance, runs each tab in a separate process, then you've created an additional barrier between a malicious scripting exploit on one website in one tab and your bank's login screen that you left open on another tab.
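A small C example of the shared-address-space point above: a pointer created in one thread is perfectly valid in another thread of the same process, something that would not work across separate processes without an explicitly shared buffer. Sketch only, error handling omitted; compile with gcc -pthread:

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                          // one variable, visible to every thread
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *p) {
    int *shared = p;                             // a pointer created in main, used here
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&m);                  // the "normal multithreading hazards"
        (*shared)++;                             // still apply, hence the mutex
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, &counter);    // same address space: the pointer
    pthread_create(&b, NULL, bump, &counter);    // is meaningful in both threads
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);           // prints 2000
    return 0;
}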
love this channel, hands down my favorite channel on youtube
Computer science is so fascinating
Love your videos. They are always high quality and easy to understand.
This channel is incredible. Please never stop.
@CoreDumpped typo in the title .. change "your" to -> "you".
Fixed, thanks!
Great content
Amazing work George!
The animations really help
This is way better than reading the textbook
Spectacular, I am amazed by the quality of these videos. Although this time I didn't find much for my programming language, it explains many concepts that I realized I only had a vague grasp of.
Thank you for these spectacular explanations!
Is the "async and await " key words in programing languages use the concurrency?
And it will be so great if you make video about this topic...
Thanks for your amazing content
Honestly I have to say, your videos have been a really great refresher for the topics I heard in a course on Operating Systems in University.
Really love your explanations.
Seems like compiling back then wasn't that much slower than compiling big projects in Rust.
Have never seen this topic broken down so well and so visually pleasing at the same time! Keep the good content coming :)
I wasn't expecting this kind of video, but this was nice if someone didn't already understand the basics and the history. I could really see this becoming a general OS series. I don't know what your background is, but if you know anything about both ARM and x86 it would be nice to see a series of videos comparing their different processes for booting up and running an OS, such as what equivalents an ARM processor would have for the IDT's and GDT's on x86.
I was expecting this to be hard to follow and to miss important concepts. Well done, this is an excellent overview with interesting (to me) history about concurrency and parallelism. For those new to the concepts, this video is good enough for a re-watch to make sure these concepts sink in. Thanks!
Thanks!
Great content as usual. I'm learning a lot of things, even better than in university; kind of sad for the current education system but good news for the ones who really want to learn. Keep it going!
Your explanations are really great! Accurate, to the point, great comparisons... Well done, once again!
the best channel I've seen on this topic, greetings from Brazil 🇧🇷 🇧🇷
These AI voices are so terrifying I can't get through the video, it gives me the creepies
Fantastic lecture! I loved it! Are there any books on the history of this stuff you (or anyone else) can recommend?
Wow, what an amazing introduction to Operating Systems it is..... Waiting for the next videos...
❤ From Bangladesh 🇧🇩
Fabulous video and underrated channel! I'm hooked for the CPU Scheduling video already. Keep it up
On Windows 95: it still used cooperative multitasking for legacy applications, which, at its launch, were almost all applications, leading to Windows 95 (and 98, Me) being considered unsafe against badly behaved applications. After NT was merged into the consumer versions from XP onward, only pre-emptive multitasking remained.
You're up there with Ben Eater in how well you explain all these processes, damn, keep it up! :D
can you explain networking from scratch?..(completely)
In the beginning, the big bang happened 13.8 billion years ago.... then humans invented data, and then the IP protocol
A bunch of electrons travel through wires, and hardware/software decode those signals as different things depending on what their purpose is. Tada!!!!
Although I'm just joking, this is pretty much what it is lol
fasterthanlime's video on explaining the internet might just be somewhat the video you're looking for in the meantime
I am afraid what you're looking for is not here; this channel is more of a low-level way of looking at computers, I would say the programming part of the operating system. Of course networking requires some low-level and programming knowledge, but that won't help you manage Cisco or any networking technology. Start with Cisco and their free courses on the Skills for All platform. Rick Graziani's channel is also a good idea for fundamentals; he's maybe a Cisco instructor since he was in many Skills for All courses. David Bombal also has a free CCNA course.
Ben Eater has an excellent series on networking. He works up through maybe the first three or four OSI layers from an electrical engineering perspective. From there, you can find other tutorials to round out your knowledge.
This is one of the best animation videos I have seen on how a CPU works. I was never much of a fan of low-level computing and how things work under the hood, but this video opened my mind to a whole new world. Keep up the work! :)
You are one of the best channels that explains OS concept perfectly, please don't stop making these gems, thank you.
One problem with pushing cycle paths away from intersections is that it makes travel longer for people using their muscles to move, basically to allow motorists to keep driving fast (autos don't get tired, though).
I am always puzzled by very curvy cycle/pedestrian paths. Do engineers consider cycling or walking to be only for recreation?
I feel privileged to be able to watch this level of quality videos and such sublime explanation for free. Thank you so much!
Glad you enjoy it!
This is actually a really good technical explanation of the topic, but I would add that specifically what is discussed in the video is the kernel, a component of an operating system.
Even the rest of the operating system needs to go through the kernel to access I/O. An operating system itself contains the full usable environment, which in the case of Unix, for example, has the shell and core utilities among other things. They too are user-level programs, but still a part of the operating system.
Something that will likely be pointed out in a subsequent video: Another common aspect of cooperative multitasking was memory allocation. Everything that's been described is based on pre-emptive multitasking, where the OS allocates memory on the fly through interrupts. However, earlier OSs could only allocate fixed blocks, often only at load time. Infamously, this crippled Mac OS up to System 8-9, especially compared to Windows and its Virtual Memory based systems.
This channel is a treasure! Keep up the good work!
Thanks, will do!
your videos are extremely informative, can't wait for future instalments of this series
This is amazing. You have sparked my interest in CS.
The small pauses between some bits of information was really helpful. 👍
These videos are so good. Keep up the great work!
another great video! ty
I'm currently learning Python but I want to learn low-level programming, and your videos help in understanding things that are hidden away
Many years ago, and luckily for me, just after they switched from using punch cards, I was on a college mini computer. Most of the time, it was very quick, but when there was a midterm or final project due for the advanced courses, the system ground almost to a halt. We had SOROC terminals on a Pr1me Mini.
Incredibly well explained, keep up with these amazing videos !
15:10 AmigaOS had preemptive scheduling which is why it didn't have the tendency that one program would lock up the entire machine - unlike the contemporary Windows variants.
There are two programs:
a = 1
print(a)
a = 2
print(a)
They are written in assembly like this:
MOV a, 1
OUT a
MOV a, 2
OUT a
What about this situation? If the CPU alternates one instruction at a time, it will go like this, based on the video:
MOV a, 1
MOV a, 2
OUT a
OUT a
And the CPU will print 2 and 2, which is wrong.
Maybe the CPU takes snapshots like you said in the video, but before every single instruction?
Otherwise, one program will overwrite the other program's registers.
The snapshot includes registers too. What it doesn’t include though is main memory (ram). This is the reason why programs with multiple threads can share memory, but also why programming multiple threads is difficult and has many pitfalls. In reality there’s also something called virtual memory, and it makes it so that threads (or even whole processes) can voluntarily share memory, but a rogue process cannot read/write the memory of another process
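Put differently, the per-program snapshot is roughly a struct like this (a hypothetical sketch; register names vary by architecture), and main memory is not part of it:

struct saved_context {
    unsigned long pc;      // the next instruction this program was about to run
    unsigned long reg_a;   // this program's own copy of the register holding 'a'
    unsigned long flags;   // status register, and one slot like this per register,
                           // but NO copy of main memory
};

// Program 1 gets preempted right after MOV a, 1: its snapshot stores reg_a = 1.
// Program 2 then runs MOV a, 2 using the physical register however it likes.
// When program 1 is resumed, reg_a = 1 is loaded back in, so OUT a prints 1, not 2.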
It's fascinating and mind-blowing how operating systems work. Could you explain how a piece of software (the operating system, a logical entity) controls and talks to a piece of hardware (the CPU, a physical entity)?
It runs on it? Ben Eater might offer you a more hands-on explanation. Real breadboards instead of blackboards.
How does the OS regain control is something I've always kind of wondered but never knew how to even find the answer. This was such an enlightening video!
It is amazing the approach that you used in the video. Thanks :)
12:04 So let's say word.exe was waiting for some user input; would that operation first be completed and then chrome.exe executed, or would the I/O operation be put on hold?
This is actually kind of hard to explain in a single comment. But I'll try anyway.
When a program is waiting for some I/O operation, putting it in the same queue with the other processes that are waiting for the CPU is not a good idea, because it can happen that the process regains control but the I/O operation is not completed yet. So we would be allocating the CPU to a process that cannot even use it, wasting resources.
So instead, the queue is used only for the processes that are 'ready' for execution and are just waiting their turn to use the CPU, and a set is used for all of those processes that are 'not ready' to use the CPU because they are waiting for some I/O operation.
When an I/O operation is completed, the hardware usually triggers an interrupt; this interrupt makes the OS put the process back in the queue of 'ready' processes.
So, to answer your question: no, the word.exe process would be held in the 'not ready' set, giving the other processes a chance to use the CPU. Whenever the hardware detects a user input, an interrupt is triggered to make the operating system put the word.exe process back in the 'ready' queue.
I didn't illustrate this in the video because it is more related to scheduling.
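A tiny C sketch of that bookkeeping, a 'ready' queue plus a 'not ready' set, with the I/O-completion interrupt moving a process back to the ready queue. The names are invented and a real scheduler is far more involved:

#define MAX_PROCS 16

enum proc_state { READY, RUNNING, WAITING_IO };

static enum proc_state state[MAX_PROCS];
static int ready_queue[MAX_PROCS];        // the 'ready' queue: just waiting for a turn
static int rq_head = 0, rq_tail = 0;

static void enqueue_ready(int pid) {
    state[pid] = READY;
    ready_queue[rq_tail % MAX_PROCS] = pid;
    rq_tail++;
}

// The scheduler only ever hands the CPU to someone from the ready queue.
static int pick_next(void) {
    int pid = ready_queue[rq_head % MAX_PROCS];
    rq_head++;
    state[pid] = RUNNING;
    return pid;
}

// A process asks for I/O (e.g. word.exe waits for a key press): it leaves the queue.
static void block_on_io(int pid) {
    state[pid] = WAITING_IO;              // parked in the 'not ready' set
}

// Called from the interrupt the hardware raises when the I/O completes.
static void on_io_complete(int pid) {
    enqueue_ready(pid);                   // back in line for the CPU
}

int main(void) {
    enqueue_ready(0);                     // word.exe
    enqueue_ready(1);                     // chrome.exe
    int pid = pick_next();                // word.exe gets the CPU first
    block_on_io(pid);                     // it now waits for user input -> 'not ready' set
    pid = pick_next();                    // chrome.exe runs in the meantime
    on_io_complete(0);                    // key pressed: word.exe is 'ready' again
    return state[pid] == RUNNING ? 0 : 1;
}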
@@CoreDumpped thanks, I think I understood most of your comment; btw the video was wonderful.
Thank You Man... By Far the Best Explanation with all the Pre-Historic Knowledge... I just loved it... Subscribing instantly☺️
I absolutely LOVE your channel, man. Thanks for doing this. I crave the low level knowledge!
What if you wanted to do a big operation
but OpenOS was like "Too long without Yielding"
Context: Minecraft's OpenComputers has a default OS you can craft called OpenOS, and for the longest time a program could just flat-out freeze your OS if it got stuck in an infinite loop, forcing you to power-cycle your computer. To fix this, in an update they added a timer to all programs: if a program goes too long without yielding for input, it gets crashed. The only issue is they made this limit really oppressive and didn't really provide an in-code way to deal with it (there was no way to halt or yield in code; your entire process had to be done within that limit). It was so bad that even the OS itself would run into this limit, causing your entire computer to crash every time you tried to do anything even remotely complex, which resulted in an even higher need to power-cycle.
I really enjoyed the history segment of this video. Understanding history is incredibly insightful and important for understanding why conventions are the way they are. Have you considered making a long in-depth video discussing the history of computers, programming languages, etc? It would be a wonderful watch.
Your explanation was brilliant. You gained a new subscriber
i was going to search for a video explaining concurrency and youtube recommended this. 10/10 video, loved it
congrats on getting sponsored, great quality vids as always
How is the state of the program restored? Before, the animation showed a snapshot of the CPU being saved to some portion of the memory (in the OS memory portion specifically), but then afterwards the animation doesn't show anything there, and the example address also doesn't come close to the previously saved snapshot.
After watching this video I get anxious even swiping windows with my touchpad, thinking how much I'm triggering those memory pointers and how complex the process is
Great video, one of the best about the subject. Simple but with a lot of animations and history. Please keep making this content!!
I didn't understand: if it is the CPU itself which executes the interrupt instructions (such as read/write), how can it dequeue the next process in the scheduler if it is busy doing those tasks?
PS: the quality of the videos is superior, you are really doing a great job. By far, my favorite yt channel.
One after another
I never leave comments but when i see new video from you i become very happy❤❤
Please, can somebody clear my doubt?
During a context switch the operating system saves the state of the CPU so that it can be loaded back when needed.
But the OS itself is software and needs the CPU to run, and if it runs on the CPU then it will change the state of the CPU.
So how does the OS save the state of the CPU without changing it?
The OS gets control via an interrupt. The OS interrupt handler saves the state of the running software to a data structure that the operating system maintains for that running task. Then the OS does whatever it needs to do. Obviously, it is using the CPU registers at this time. When the OS gives control back to whatever running task it decides to run next, the saved state is restored to that of the task that is about to get control, using the data structure that holds the task's saved state.
@@tomc101 But if the interrupt handler gets the CPU and starts executing, wouldn't it change the registers such as the program counter, instruction register, etc., before it could even start saving them?
The CPU (hardware) can save its state in memory automatically when the interrupt is triggered. One way of seeing it is: the steps to perform that state-capturing process are embedded directly in the circuitry. So, to capture the state of the registers, no software is needed (hence, registers are not being touched).
When the operating system starts using the CPU to handle the interrupt, the state of the user program that triggered the interrupt is already stored in memory.
The role of the operating system here is to read the memory region where the state was stored (by the hardware) and use that information in the scheduling process.
Parts of this process are architecture dependent. In some architectures, the memory region where the state is stored is directly 'burned' into the hardware. In other architectures the hardware offers privileged instructions to let the OS decide where in memory the state has to be stored.
@@CoreDumpped thanks for clarifying my doubt, please keep making such awesome content.
@@orangeInk Ah, I see what you are concerned about. The interrupt hardware saves a minimal amount of the running program's state. The minimum would be the program counter and status words. The CPU then starts running the operating system code to handle the interrupt. It can then start by saving everything else, such as registers. How much the OS needs to save depends on the CPU and what the OS plans to do with that particular interrupt. For example, if it only needs to do a bit of work and then return to the running program, it might not need to save more than a register or two.
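Putting the replies above together, the flow inside the OS looks roughly like this C sketch. The structure and helper names are invented for illustration, and the "hardware saves pc/flags" step happens in circuitry before any of this code runs:

struct task {                               // per-task bookkeeping kept by the OS
    unsigned long saved_pc;
    unsigned long saved_flags;
    unsigned long saved_regs[16];
};

struct trap_frame {                         // what the hardware pushed automatically:
    unsigned long pc;                       //   program counter of the interrupted code
    unsigned long flags;                    //   status word
};

extern struct task *current_task;                     // hypothetical OS globals/helpers
extern void save_remaining_registers(struct task *t);
extern struct task *schedule_next(void);
extern void restore_registers_and_return(struct task *t);

// Entry point the CPU jumps to on a timer interrupt, after the hardware has
// already saved the minimal state (pc, flags) into the trap frame.
void timer_interrupt_handler(struct trap_frame *tf) {
    current_task->saved_pc    = tf->pc;     // copy what the hardware saved...
    current_task->saved_flags = tf->flags;
    save_remaining_registers(current_task); // ...then the OS saves everything else it needs
    struct task *next = schedule_next();    // decide who runs next
    restore_registers_and_return(next);     // load that task's saved state and resume it
}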
There's some inconsistency around interrupts in this video. First of all, system services nowadays use the syscall instruction rather than an interrupt in code.
But how does the OS escape from a running program? There's a context-switching mechanism that can be triggered via a hardware interrupt / timer. It then allows breaking out of the thread anywhere.
I'm not the video's author and I don't play him on TV, but ISTM that you're trying to suggest that the author's explanations are somehow fatally flawed due to the existence of syscall instructions on (AMD) CPUs which is not the case.
Aside from the fact that Intel CPUs use sysenter rather than syscall -- thereby making your statement about system services being requested by using the syscall instruction rather than an interrupt not entirely accurate -- I would point out that there are plenty of old programs that use an intx instruction to make a transition into the host OS even if we restrict ourselves to x86 and/or x86-64 systems.
I would also point out that even though no interrupt may be generated by the execution of sysenter or syscall instruction, these instructions are low overhead ways of basically jumping into the host OS that used to be done via intx and still may be for sufficiently old programs.
So the fact that sysenter and syscall don't actually generate an interrupt is basically irrelevant when it comes to answering the question that the author posed (at around [10:38]), which was, "When is the OS executed to perform scheduling?"
The author correctly points out that the OS can be called upon to do scheduling by trying to access various system services no matter what the mechanism is that is used to get there from here.
I would also point out that since the author mentioned interrupts (even if they were mentioned in the context of an intx instruction), the author was implicitly indicating that you can get there from here via other interrupts, such as timer or other hardware interrupts.
Given that he explicitly said that he was omitting a lot of information [10:25], you seem to be nitpicking for no obviously interesting reason.
@@lewiscole5193 I'm not nitpicking, I'm just pointing out the broader context. As for the sysenter instruction, you're absolutely right (names). Interrupt 0x2E is still supported on many architectures. What I wanted to say is that the interrupt is now a legacy way to jump from ring3 to ring0. Combined with your clarification it fulfills the topic :) Cheers.
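For anyone curious, here is a minimal Linux x86-64 example in C (GCC inline asm assumed) of entering the kernel with the syscall instruction, the fast path discussed in this thread; the legacy route did essentially the same thing through an int instruction:

static long raw_write(int fd, const void *buf, unsigned long len) {
    long ret;
    // Linux x86-64 convention: syscall number in rax (1 = write),
    // arguments in rdi, rsi, rdx; the kernel clobbers rcx and r11.
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1L), "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

int main(void) {
    raw_write(1, "hello from a raw syscall\n", 25);   // fd 1 = stdout
    return 0;
}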
Thank you!
What about time slices though? Or is that going to be in the scheduler video?
You mean quantums?
@@CoreDumppedyep
Don't you have to store the program's registers too? Won't the registers get mixed up if you just switch willy-nilly between each process?
I'm pretty sure I mentioned and explained that the state of each process needs to be captured before allocating the CPU to another process.
@@CoreDumpped Ah yeah, upon rewatch, you did.
Great video George! 🖤
Please talk about virtualization next.
+ thanks for the interesting video!
I'm excited for a video about threads, please make a video about it, I think I need a video like this, especially how the methods/functions related to it work. Thank you very much anyway
Another masterpiece dropped.... Thanks man as always, your vids are amazing.
is the narration done by AI?
Yes
If you watched it fully you would know; in case you don't want to: yes, it says so at the end.
Core dumped doing great job again ❤
Your videos are super entertaining to watch! I look forward to your next banger!
Glad you like them!
was looking for such a video kudos
Amazing video, thanks a lot for the explanations! Very informative
Maaaaan, what an absurd channel! A real hidden gem! Amazing teaching and design skills. You explained 6 months of an OS college subject in just a few minutes ❤❤❤❤
Very informative, thanks💯
new core dumped vid, lets goooo
Beautiful explanation. Loved it
I used to jiggle the mouse once in a while to make sure one program didn't freeze my windows 95 PC.
I wonder if there was something wrong with the hardware timer, so that I had to trigger interrupts manually. Or was it just a placebo?
this is way better explained than my courses at college
congrats and thank you
Wow, just found your channel, insanely entertaining and nice visualizations. Keep up the good work ❤
amazing ❤ thanks for the awesome video ❤
If there is a long queue of instructions, how does the interrupt "instruction" or whatever go in between them? Am I missing something here?
Dude you're amazing for teaching all this. I don't like AI if it's just about shitposting a lot to make money. But this is top content and it's not about fake topics or anything. Just pure gold.