👷 Join the FREE Code Diagnosis Workshop to help you review code more effectively using my 3-Factor Diagnosis Framework: www.arjancodes.com/diagnosis
I'd really love a full example showing all three of those methods. Maybe an example with two groups of tasks (I/O-bound and CPU-bound), then showing how we could improve the code's performance, starting with multithreading, then asyncio, and then multiprocessing.
Me too!!
asyncio is for when the bottlenecks in your code lie in the I/O, the network, or in a GUI waiting for user responses, that kind of thing. Threads and processes are for handling CPU-intensive activities: you’re not going to take advantage of multiple CPUs without them.
Currently threads are mainly useful if the heavy work is being done in an extension module, since native Python code itself is hampered by the GIL. But this might improve in future, thanks to things like the “nogil” project.
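A rough sketch of that split (fake_download and crunch are hypothetical stand-ins for real work, not anything from the video): asyncio lets many I/O waits overlap inside a single thread, while a process pool spreads CPU-heavy work across cores.

```python
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

async def fake_download(i: int) -> int:
    await asyncio.sleep(1)          # pretend network latency; yields control to the event loop
    return i

def crunch(n: int) -> int:
    return sum(x * x for x in range(n))  # pure-Python CPU work, limited by the GIL in threads

async def io_bound() -> None:
    # asyncio: all "downloads" wait concurrently, so 10 of them take ~1 s, not ~10 s.
    results = await asyncio.gather(*(fake_download(i) for i in range(10)))
    print("downloaded:", len(results))

def cpu_bound() -> None:
    # multiprocessing: each worker has its own interpreter and GIL, so cores run in parallel.
    with ProcessPoolExecutor() as pool:
        print("crunched:", sum(pool.map(crunch, [2_000_000] * 4)))

if __name__ == "__main__":
    start = time.perf_counter()
    asyncio.run(io_bound())
    cpu_bound()
    print(f"total: {time.perf_counter() - start:.1f}s")
```

On a typical multi-core machine the I/O part finishes in roughly one second instead of ten, and the CPU part scales with the number of worker processes.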
But I thought you got around the GIL with multiprocessing in Python, as most libraries create an entirely new process with an associated interpreter? It might not be "true" multiprocessing, but I do believe you are able to utilize the functionality to the same degree.
@GOTHICforLIFE1 You can bypass the GIL if you spawn multiple other processes, but the overhead of doing so is quite large. C-extension libraries like numpy or numba can release the GIL if needed, but that logic is abstracted away, so you rarely have to do it manually. That said, releasing the GIL incurs less overhead than spawning processes, and thus is often more performant.
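A minimal sketch of that extension-module case, assuming numpy is installed: the heavy lifting in np.dot happens in compiled BLAS code that releases the GIL, so plain threads can already use several cores (the exact speedup depends on your BLAS build, which may itself be multithreaded).

```python
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np  # assumes numpy is installed

def blas_work(_: int) -> float:
    # np.dot on large arrays spends its time in compiled BLAS code,
    # which releases the GIL, so threads can actually run in parallel here.
    a = np.random.rand(1500, 1500)
    return float(np.dot(a, a).sum())

def timed(label: str, max_workers: int) -> None:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(blas_work, range(4)))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed("1 thread ", 1)   # roughly serial
    timed("4 threads", 4)   # usually faster on a multi-core machine, no processes needed
```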
To this day, Python threads still run concurrently on only one core at a time when executing Python code.
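A minimal sketch of that limitation, using a hypothetical pure-Python count loop: two threads take roughly as long as running the loop twice sequentially, because the GIL lets only one thread execute Python bytecode at any moment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def count(n: int = 5_000_000) -> int:
    total = 0
    for i in range(n):
        total += i
    return total

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    # Sequential: two calls back to back.
    timed("sequential", lambda: (count(), count()))
    # Two threads: roughly the same wall time, because the GIL serializes
    # the pure-Python bytecode in both workers.
    with ThreadPoolExecutor(max_workers=2) as pool:
        timed("2 threads ", lambda: list(pool.map(count, [5_000_000, 5_000_000])))
```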
Highly appreciate your content. Thanks!
Those Shutterstock snippets are more distracting than entertaining. I would leave them out - simplifying your edit makes it more digestible for the viewer.
You should do a video on asyncio, subprocess, threading, and subinterpreters.
I don't understand asyncio, since it needs functions to be defined with async def. Since I have a large synchronous code base, it becomes difficult to use asyncio.
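One common pattern for that situation is to keep the sync code as-is and bridge it at the edges. A minimal sketch, where legacy_fetch is a hypothetical stand-in for an existing blocking function and asyncio.to_thread requires Python 3.9+:

```python
import asyncio
import time

# A typical existing synchronous function from a legacy code base (hypothetical).
def legacy_fetch(name: str) -> str:
    time.sleep(1)  # blocking I/O stand-in
    return f"data for {name}"

async def main() -> None:
    # asyncio.to_thread runs blocking sync code in a worker thread,
    # so several calls can overlap without rewriting legacy_fetch as async.
    results = await asyncio.gather(
        asyncio.to_thread(legacy_fetch, "users"),
        asyncio.to_thread(legacy_fetch, "orders"),
        asyncio.to_thread(legacy_fetch, "invoices"),
    )
    print(results)

if __name__ == "__main__":
    # The sync part of the program only needs one entry point into asyncio.
    asyncio.run(main())
```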
I thought concurrency was a memory thing, not necessarily a parallelism concept with multiple threads.
Concurrency, on a high level, is concerned with performing multiple tasks at once. So multiprocessing and multithreading fall into this category because they create worker processes/threads to handle concurrent tasks. In asyncio, theoretically, we don't execute multiple tasks concurrently but WAIT for multiple IO/network requests simultaneously. And no, concurrency, at least in Python, has little to do with memory.
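A minimal sketch of that "waiting simultaneously" idea, with fake_request as a hypothetical stand-in for a real network call: everything runs in one thread, and three one-second waits overlap into roughly one second total.

```python
import asyncio
import threading
import time

async def fake_request(name: str, delay: float) -> str:
    # Every coroutine runs in the same thread; we only *wait* concurrently.
    print(f"{name} waiting in thread {threading.get_ident()}")
    await asyncio.sleep(delay)  # stand-in for a network round trip
    return name

async def main() -> None:
    start = time.perf_counter()
    done = await asyncio.gather(
        fake_request("a", 1.0),
        fake_request("b", 1.0),
        fake_request("c", 1.0),
    )
    # ~1 s total, not ~3 s: one thread, three overlapping waits.
    print(done, f"{time.perf_counter() - start:.1f}s")

asyncio.run(main())
```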
But switching between multiple processes isn't as efficient as switching between threads.
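A rough, non-rigorous way to see that overhead: submit many tiny tasks to a thread pool and a process pool and compare wall times. The process pool has to pickle every argument and result and cross process boundaries, so it tends to lose badly on this kind of workload (exact numbers vary by machine and start method).

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def tiny(x: int) -> int:
    return x + 1  # almost no work per task

def timed(label: str, executor_cls) -> None:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(tiny, range(10_000)))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    # Threads share memory directly; processes pay for pickling and IPC on every task.
    timed("threads  ", ThreadPoolExecutor)
    timed("processes", ProcessPoolExecutor)
```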
All this is coming at the cost of efficiency 🤔
All I hear is "ShshshshshhSsssss".