What other threading topics or examples would you like to see? Let me know!
Thread Safe.
How does the GIL work?
Simultaneous process
exactly
Hyperthreading would be interesting.
I guess that the "power" of the core (the operations/threads it can handle) is split in half, so instead of 1 thread per core, 2 threads per core can run simultaneously as if there were 2 actual cores. Is the limitation just that 2 GHz would be cut down to 2x 1 GHz?
Pretty impressive parallel processing with him drawing and talking at the same time
That's concurrency, sadly: he's doing two tasks at the same time, but not splitting up individual tasks to speed them up, so each task still takes the same amount of time; he just does both at once (concurrency). Whereas if he were able to have an extra arm or mouth, he could type or speak multiple words at once (parallelism).
@asmithdev2162 ruined the joke 😢
Tim knows what he's talking about; this is not just another YouTube tutorial. Just a mind-blowing explanation.
There is an important nuance between "process" and "thread". Modern operating systems (very old operating systems didn't do this) support a concept of memory protection. That is to say, each process is allocated some of the main memory (RAM) for its "code" (instructions; some operating systems refer to these as "text" pages rather than "code" pages, but they are the same idea) and its "data" (things like variables, not code). If a second process attempts to access memory that was not allocated to it, that process gets a segmentation violation (an attempt to access data outside its assigned memory segment).
This prevents two different "processes" from overwriting each other's data.
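This isolation is easy to see from Python: a child process gets its own address space, so writing to a global in the child never touches the parent's copy. A minimal sketch using the standard multiprocessing module (the variable names are made up for illustration):

```python
import multiprocessing as mp

counter = 0  # lives in the parent process's memory

def overwrite():
    # Runs in the child process, which has its own copy of `counter`.
    global counter
    counter = 99

if __name__ == "__main__":
    p = mp.Process(target=overwrite)
    p.start()
    p.join()
    # The child's write stayed in the child's address space.
    print(counter)  # still 0 in the parent
```

Contrast this with threads, where the same write would be visible everywhere, as the next paragraphs explain.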
But things are different in threads. Here, two threads (forked from the same process) "share" the same memory.
Memory is managed by the OS in chunks called "pages" (this is architecture dependent; 4 KB is a common size, but it could be 8 KB, etc.), and memory protection is enforced at page granularity.
Suppose I have a variable, an integer, whose value is 5. The value itself would fit in one byte, though on a modern OS the integer occupies several bytes (even a 64-bit integer only occupies 8 bytes). With 8-byte integers on a page size of 4 KB (4096 bytes), a LOT of other variables occupy the SAME page in RAM.
This creates a problem once threads share that memory. The hardware does keep writes to *different* variables coherent, so two threads touching separate variables that happen to share a page (or a cache line) mostly pay a performance cost ("false sharing") rather than losing data. The real hazard is two threads updating the SAME variable: each thread reads the value, modifies its own copy, and writes it back, and whichever thread writes last wipes out the other thread's update (a "lost update"). This is a problem.
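The lost-update interleaving can be forced deterministically in Python by separating the read from the write; the sleep only exists to guarantee both threads read the stale value (a sketch, with illustrative names):

```python
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    value = counter      # read the shared variable
    time.sleep(0.05)     # both threads reach this point holding value == 0
    counter = value + 1  # write back, clobbering the other thread's update

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not 2: whichever thread wrote last wiped out the other's change
```

In real code the "sleep" is just an unlucky context switch between the read and the write, which is why the bug is intermittent.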
To solve this, operating systems and threading libraries provide a "mutual exclusion lock", or mutex (the same way that a multi-user database might do "row locking" or "column locking").
On Unix systems this is exposed through primitives such as pthread_mutex_lock. Before the FIRST thread modifies shared data, it acquires the mutex; while the mutex is held, any second thread that attempts to acquire it must wait until it is released, either by busy-waiting (a "spin on mutex", which some Unix tools report as smtx) or by being put to sleep by the scheduler.
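In Python this mutex behaviour is available as threading.Lock: with the lock held across the whole read-modify-write, a second thread blocks until release and no update is lost (a minimal sketch; the names and the deliberate pause are illustrative):

```python
import threading
import time

counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    with lock:               # a second thread blocks here until the lock is released
        value = counter
        time.sleep(0.05)     # even with a pause mid-update, no other thread can interleave
        counter = value + 1

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 2: the lock serializes the read-modify-write
```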
In multi-processing (two different processes) this isn't a problem, because the processes don't share the same memory segments. But in multi-threading the threads DO share the same memory segments, so it is possible for different threads to "step on each other's toes", and these locking primitives exist to protect against that.
BTW, the whole point of multi-threading had to do with efficiency. Processors are VERY fast if the data they have to manipulate is already in the processor registers. But if the data resides elsewhere (if it has to "fetch" the data from RAM, or from storage), then the number of clock cycles until that fetch or read completes is a veritable ETERNITY for the CPU, so it may as well be put to good use doing something else. Multi-threading made programs vastly more efficient because they could do *something* while long-running steps (such as memory I/O or storage I/O) completed.
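That efficiency argument is easy to measure: three simulated 0.2-second I/O waits finish in roughly 0.2 seconds total when each runs on its own thread, instead of the 0.6 seconds a sequential run would take (a sketch; the timings and names are illustrative):

```python
import threading
import time

def slow_io(name, results):
    time.sleep(0.2)          # stand-in for waiting on disk or network I/O
    results.append(name)     # list.append is atomic in CPython, so no lock needed here

results = []
start = time.perf_counter()
threads = [threading.Thread(target=slow_io, args=(f"task{i}", results)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# All three waits overlapped, so elapsed is well under the 0.6 s a serial run needs.
print(len(results), elapsed)
```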
A single processor core that is capable of hardware multi-threading (aka hyperthreading) holds the state of two threads at once, but those two threads share the core's execution resources, so it is not the same as having two real cores. Two threads can genuinely execute at the same time when they are scheduled on different cores.
Dude you can write a blog with just this comment. Please do it and share the link, I recently started with the whole architecture think. Would appreciate if I get to learn from you too 🙏❤️
Excellent explanation thank you
Great comment!
For an I/O-intensive task you don't necessarily need multithreading, since asynchronous methods can overlap the waits in a single thread. For a CPU-intensive task in Python, threads don't help because of the GIL; multiprocessing is what gives you real parallelism.
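For the I/O-bound case, the asynchronous route looks like this: a single thread overlaps all the waits with asyncio (a sketch using a simulated network call; the names are illustrative):

```python
import asyncio
import time

async def fetch(name):
    await asyncio.sleep(0.2)  # stand-in for awaiting a network response
    return name

async def main():
    # All three "requests" wait concurrently on one thread.
    return await asyncio.gather(fetch("a"), fetch("b"), fetch("c"))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start

print(results, elapsed < 0.5)  # ['a', 'b', 'c'] True
```

No threads are created at all; the event loop simply switches between the coroutines whenever one is waiting.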
Thanks. There's a lot to wrap my head around in there. I'd like to add that on Windows you can access memory in another process manually, using ReadProcessMemory & WriteProcessMemory.
Tim I just wanted to thank you for taking the time to make this content. I’ve struggled to understand threading for the longest until I came across this video. You rock 🤟🏼
Believe me, guys! This is the single most simple and insightful explanation of threads.
I am literally learning this at my college right now, great explanation Tim.
It's pretty hard to find someone who explains coding well to beginners. Thank you for helping us newbies start out!!
Wow, I watched this video for 3 minutes and couldn't help but like the video and subscribe right away; that's how good of a teacher you are.
The balance of precision and simplicity is just laser-sharp. What a talented instructor!
Coding day 1: “Hello World”
Coding day 2: *creates parallel universe*
#Parallel Universe with Infinity thoughts...... Hell yeah man..
69 like very cool
2.6 GHz = 2,600,000,000 clock cycles per second. Thank u so much Tim
A bunch of my friends are also using your useful content. You know what, thank you so much!
So this 11-minute video explains the basics of threading, which took my university 2 hours to cover and still didn't make me understand. Thank you Tim!
That's a very easy to understand and helpful explanation. I found your video after looking for articles to better understand this topic, then I decided to look on YouTube and found you. I've already subscribed.
Thanks for subscribing!
This is the best high level explanation. I was trying to figure out why multithreading isn't the same as parallelism
That explanation was beautiful. The whiteboard really helped too for actually seeing what is going on
You're seriously underrated man
Great: explaining how it happens from a "panoramic view" instead of jumping into coding as the first step to programming. With a mind map, it's much better!!!
🎯 Key Takeaways for quick navigation:
00:01 *🧵 Introduction to Threading and Processor Cores*
- Understanding the basics of threading and processor cores.
- Processor cores determine the maximum parallel operations possible.
- Clock speed of cores and its significance in operation execution.
03:32 *⚙️ Threads and CPU Execution*
- Explanation of threads as individual sets of operations.
- Threads are assigned to processor cores for execution.
- Threading allows scheduling of different operations on the same CPU core.
06:03 *🔄 Concurrent Programming with Threads*
- Concurrent programming involves executing threads in different timing sequences.
- Threads enable efficient CPU core utilization by switching between operations.
- Threads are beneficial for handling tasks asynchronously, preventing program hang-ups.
07:13 *🔄 Single Core vs. Multi-threaded Execution*
- Comparison of single-threaded and multi-threaded execution on a single core.
- Multi-threading allows overlapping of operations, reducing overall execution time.
- Use cases for multi-threading include web applications and gaming for uninterrupted user experience.
Made with HARPA AI
What a rockstar. Thank you so much for such an easy-to-understand explanation of this
I was very lucky to come across this video. Great explanation and illustrations
Bro you are amazing. For a while now I struggled to understand this concept but you realllllly broke it down and made it easy to grasp!
Your drawing skills are amazing
I have been battling to grasp this thread-and-process concept for too long, until I watched this video. The explanation is very informative and straightforward; I hope you will share videos like this one in the future.
Thank you m8! Very easy explanation ... I struggled to understand, and you made it easy!
Best explanation; threads are supposed to be explained with figures before the code. Kudos to you, bro!
The explanation is so good that I feel compelled to join the channel membership. Thanks for the helpful material
A very clear explanation, thank you so much
You explain this topic in a very easy way, bro
Every beginner in Python should subscribe to this channel..
Thanks a lot for this, because I needed it for my online pygame game, which I also learnt from your playlist. I rely more on your videos than the official documentation, lol. Thanks a ton again.
Thank you! Simple, concise explanation. Love your channel.
Thanks for watching!
Best Explanation so far
You explained this better than my professor. Thanks!
Enhanced my understanding of many concepts and added more great stuff
Wow, your explanation was incredibly clear, thank you!
Clear and straight to the point. Great explanation!
Mr. Tim , Great Explainer
Really clear and concise explanation - thank you so much Tim!
Thank u so much, this is really what I was searching for
very useful video to get into the topic, thank you very much sir
I was really looking for this
Thank you
Hello, Tim!
I guess you explained threads better than most of the resources I've read before! And now I have some kind of understanding, thank you very much! :)
YOU CRUSHED THIS! Thank you!
Wow this is an amazing tutorial and so interesting. Thank you!
This is such a good explanation, really helped me understand the concept
Great explanation. You earned another subscriber!
Great as usual. I don't make this kind of content, but this can be so useful. Thanks!
you are gold I was looking for this
great example. thank you!!
Great explanation, super clear. Thanks Tim!
Impressive! Outstanding explanation.
Amazing explanation
Awesome explanation, thank you so much
Dude, you are awesome !!!!!
YOU ARE AMAZING!!!!!!! and I love you!!! thank u for your videos!!!!!!!!!!!
perfect explanation
This tutorial is really clear. Thanks
Great content! Thanks a lot Tim :)
YES FINALLY THANK YOU SOOOO MUCH!!
Great explanation, thank you so much
Great teacher
Nice explanation. Thanks a bunch.
Well-explained video .. thanks bro
Impressive explanation.. really liked it..
Yes! Was waiting on a great threading tutorial. Thanks tim!!! Can you possibly get into multi threading with socket programming later on perhaps?
Freaking A! Thanks so much for breaking this down for us. I really do have a better understanding of threading in Python now
Such a good explanation ❤️
Awesome! I am very interested in concurrency and parallelism in python
That was very helpful, thanks!
Great explanation!
thank you very much for these threads videos
veryyyyyy good tim
excellent very good explanation
Great explanation Tim
Great video, thank you!
First view
Notifications ftw
Excellent description! I was curious: how do you decide whether your code/function needs multithreading (concurrent) or multiprocessing (parallel execution)?
Really good explanation! keep it up!
great explanation! thx
Excellent
yea, I wanted to learn this topic!!
Good content.
Thanks man !
Can we say this? => When a program is loaded into memory, it becomes active (a process). A single process is assigned to a single physical core. And that program is written in such a way (different logical parts) that it can run its different logical parts (threads) independently. If one part (thread) is waiting, the other part (thread) can run to increase efficiency. This is called concurrency (maximizing throughput).
weww explained greatly
Well explained, Tim!
you're saving my ass in college... thank you!
nice explanation man
good job
Hi Tim, thanks for the videos! Something I would really love to see is integrating threading with PyQt5 gui applications.
thanks
great video
Awesome!
thank you
awesome....
If I recall correctly, 1 GHz means that core can run 1 billion clock cycles per second.
A very useful video :)
So if I run a program using threads, all the threads of the same program are going to get distributed to different cores? Or are they going to stay in the same core?
Excellent explanation. But I have some doubts. If I had 20 functions to run at the same time. Should I use multithreading or multiprocessing? What are the pros and cons of each?