I could feel my brain trying to stop me writing what I knew was an infinite loop but I did it anyway. I trusted you Jeff!
@@ko-Daegu who are you talking to, schizo?
me too!
@@ko-Daegu can I have some of what you're having
@@ko-Daegu you really thought you are being smart with that remark, didnt you? Only problem is that you are wrong
Fun nerd trivia:
- A single CPU core runs multiple instructions concurrently; the core just guarantees that it will appear AS IF the instructions were run serially within the context of a single thread. This is achieved primarily via instruction pipelining.
- A single CPU core often executes instructions totally out of order; this is unimaginatively named "Out Of Order (OOO) execution".
- A single core also executes instructions simultaneously from two DIFFERENT threads, only guaranteeing that each thread will appear AS IF it ran serially, all on the same shared hardware, all in the same core. This is called Hyperthreading.
And we haven't even gotten to multi-core yet lol. I love your content Jeff, the ending was gold!
in the spectre and meltdown era, we like to say “guarantees”
But in a hyperthreaded system, tasks don't just appear to be executed serially, they actually are executed serially ... the only difference is that the system coordinates the execution of other tasks/threads while waiting for the previous one, which is probably blocked waiting for an I/O response ...
If you have a 16-core processor with 32 logical processors, it doesn't mean it can execute 32 threads simultaneously ...
@@RicardoSilvaTripcall hyperthreads are in many cases parallel by most meaningful definitions, due to interleaved pipelined operations on the CPU and the observability problem of variable-length operations. For an arbitrary pair of operations on two hyperthreads, without specifying what the operations are and the exact CPU and microcode patch level, you cannot say which operation completes first, even if you know the order in which they started.
@@ragggs Lol! Maybe guarantee* (unless you're Intel)
@@RicardoSilvaTripcall Uhhhh. No. Sorry.
Love to see jeff going in depth on this channel, would love more videos like this one.
That's why I made this channel. I've got a long list of ideas.
@@beyondfireship wonderful. Keep it up
@@beyondfireship then do them! PLEASEEEE
That chef analogy about concurrency and parallelism was genius. Makes it SO much easier to understand the differences.
Moments like 0:52, the short memorable description of callback functions, is what makes you a great teacher. Thanks man!
Keep in mind the JS world also calls any higher order function "callback" (like the function you'd pass to Array.map), whereas elsewhere afaik it only refers to the function you pass to something non-blocking.
@@kisaragi-hiu a fact that caused me much grief coming into JS from systems level.
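A minimal sketch of the two usages this thread is contrasting (names are illustrative):

```javascript
// In JS, both of these arguments get called a "callback":

// 1. A function passed to a synchronous higher-order function --
//    elsewhere this would just be a "function argument":
const doubled = [1, 2, 3].map((n) => n * 2);
console.log(doubled); // [2, 4, 6]

// 2. A function passed to something non-blocking, invoked later --
//    the stricter sense of "callback" used outside the JS world:
setTimeout(() => console.log('runs later, after the sync code'), 0);
```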
Also, when we say "one core," that means "one core at a *time*" -- computer kernels are concurrent by default, and a program's code will actually be constantly shifting between CPUs as the kernel manages a queue of things for the processor to do. Not too unlike the asynchronous system JavaScript has: the kernel breaks each program you're running into executable chunks and manages which programs and code get more priority.
wouldnt that be kind of ineffective though, it wouldnt be able to take full advantage of the CPU cache, so i hope it does it as rarely as possible
@@orbyfied uhh, different CPU cores use the same L2-L3 cache. L1 cache is per core, but they're small and meant for minor optimisations.
L1 is the fastest, so having data available there is pretty significant. It's also grown much in size, to the point that it can basically cache all the memory a longer-running task will need now. If L1 were so insignificant it wouldn't cause those data desync issues across threads.
Then…why do I only see one core active when running simple Python code…?
@@orbyfied it could be more inefficient if only one process took a whole CPU core for itself during its entire lifetime. The process probably isn't switched between cores, but it is being swapped in and out with others on the same core for the sake of concurrency. Also take into account the hit rate that a cache may have.
It's a pretty good overview of how much more of a clusterfuck the code becomes once you add workers to it. And it didn't even get to the juice of doing fs/database/stream calls within workers, and error handling for all of that.
"Clusterfuck", I had the same word in mind 😭😂
0:31 concurrency incorporates parallelism
what you should say is asynchronism
just use Promises, it'll process all your asynchronous functions concurrently (very similar to parallel)
@@angryman9333 A Promise runs the user-written executor function on the main thread, in a blocking manner. An async function is just syntactic sugar for easier creation of promises. Without browser asynchronous APIs or web workers, it doesn't run code in parallel.
@@angryman9333 a what?
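The point above about executors running synchronously can be sketched like this (plain Node, no browser APIs assumed):

```javascript
// The executor you hand to `new Promise` runs immediately, on the main
// thread; only the .then/.catch callbacks are deferred to the microtask queue.
const order = [];
order.push('before');
new Promise((resolve) => {
  order.push('executor'); // blocks the main thread like any other code
  resolve();
}).then(() => order.push('then'));
order.push('after');
console.log(order); // ['before', 'executor', 'after'] -- 'then' hasn't run yet
```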
IM STILL STUCK OVER HERE, HELP!?!?!?!?
MY PC WONT SHUTDOWN, ITS BEEN 5 MONTH'S...
keep up the great work, love your vid's!
6:13 To see how all of your cores are being utilized, you can change the graph from 'Overall utilization' to 'Logical processors' just by right-clicking on the graph -> Change graph to -> Logical processors.
Thanks for shouting out code with ryan! That channel is criminally underrated
That ending was possibly one of your best pranks ever, a new high watermark. Congratulations 😂
Lots of comments about memorable descriptions, shoutout to the thread summary at 3:30. Your conciseness is excellent.
Task Manager --> Performance tab --> CPU --> Right click on graph --> Change graph to --> Logical Processors
THIS!⬆⬆⬆⬆. Legit was grinding my gears watching it like that
I'd like to see a video on JavaScript generators and maybe even coroutines.
For sure, this is a really cool thing and I'm not sure how to actually use it.
generics maybe ?
garbage collector in more details ?
benchmarking against pythonic code, just to get people triggered ?
i watched a similar video early this year, but your way to deliver content is amazing, keep going
Really cool, I actually saw the other video about nodejs taking it up a notch when it came out.
the `threads` package makes working with threads much more convenient. it also works well w/ typescript.
This man just explained a lot within 8mins! Getting your pro soon.
Little known fact: you can also do DOM-related operations on another thread. You have to serve it from a separate origin, use the Origin-Agent-Cluster header, and load the script in an iframe. But you can still communicate with it using postMessage, and avoid thread blocking with large binary transfers by using chunking. This is great for stuff that involves video elements and cameras.
I use it to move canvas animations (that include video textures) off the UI thread, and to calculate motion vectors from webcams.
that looks handy! thanks for sharing
that might just help with a few of my projects
do you have any examples on github?
@@matheusvictor9629 yes
Sounds very interesting! I have a project where I think this would be useful.
aside from the outstanding quality, this ending was quite funny and hilarious! keep it up, your content is TOP 🙇🚀
It would be nice if you right click on the cpu graph and *Change graph to > Logical Processors*, so we can see each thread separately.
Thanks!
less useful than you might think. the operating system's scheduler may bounce a thread around on any number of cores. doesn't make it faster but spreads the utilization around.
@@crackwitz Do you mean that we will not see each core graph plotting one thread?
Thanks, now I know what script I should include in my svgs
Although it's called concurrent, schedulers still can only work on one task at a time. It will delegate a certain amount of time to each task and switch between them (context switching). The switch is just fast enough to make it seem truly "concurrent". If a task takes longer than the delegated time, the scheduler will still switch away and come back to it later to finish.
Spawning workers in Node is not new, but support for web workers in browsers is comparatively new. Good shit man.
best comic relief at the end ever, love you Jeff
with the amount of time I've spent on this video because of the while loop, even the algorithm knows who my favourite youtuber is
Good vídeo!
Next time change the CPU graph with right click to see each thread's graph.
Hope it helps!
Wow, didn't know that. Thanks!
Yes Yes Yes, and exactly extra Yes! Thank you Bro for this contribution! You are speaking out of my brain! Best Regards!
seeing my cpu throttle and core usage rise in realtime was impresive :)
the cook analogy was great and i now understand
I have had hours long lectures in college level programming classes on the differences between concurrency and parallelism and the first 3 minutes of this video did a better job of explaining it. Shout outs to my bois running the us education system for wasting my money and my time 💀
It's probably not their fault you failed to understand something so simple. Literally 1 minute on google would have cleared up any misunderstanding you had
@@maskettaman1488 if you have to pay to study and then you have to sell yourself to a tech corp to learn something, it's not that great of a system and it should not exist IMHO
@maskettaman1488 lmao im not saying i misunderstood it, im saying fireship is much more concise and still gets all the relevant information across compared to college, despite the fact that i dont have to pay fireship anything
Wow! Just yesterday I was watching some videos about worker threads, because I will use them to speed up the UI in my current project 😄
- what could be better than an infinite loop?
- infinite loop on 16 threads
It's like you read my client's requirement and came into support
just a heads up for your CPU: the 12900K doesn't have 8 physical cores, it has 16: 8 performance and 8 efficiency cores. The performance cores have hyperthreading enabled but the efficiency cores don't, so you have 24 threads in total
😮
Oh yeah, right. So that's why the CPU didn't go to 100% after using 8 cores.
You forgot the: 🤓
But at 6:57 his CPU did go to 100% with 16
because hyperthreading is shit
And remember, don't make promises you can't keep
Great video! But please do change the view to logical processors in your task manager!
My brain: dont run it
8 years of programming: dont run it
the worker thread registering my inputs to the console as I type it: dont run it
Jeff: run it.
**RUNS IT**
In Python, handling race conditions is easy:
use Queue and Lock 😊
I achieved something similar in TS, but rather than locking the queue, I ensured that the jobs that could cause a race condition had a predictable unique ID. By predictable, I mean a transaction reference/nonce...
Well, multiprocessing is much more mature than worker threads, since multiprocessing has been the primary method for concurrency in Python, but for JS it's always been async.
I'm still amazed at how you find such accurate images as the one at 0:32 🤔
each value should be random, and you should sum them at the end to ensure the compiler / interpreter does not optimize all the work away because it detected that you never used the values
Pretty sure the compiler won't be able to optimize away side effects like this, since the worker and the main thread only interact indirectly through events on the message channel.
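A hypothetical benchmark body along the lines suggested above (the function name and iteration count are made up): random inputs plus a final checksum give every iteration's result a live use, so the JIT can't prove the work is dead and delete it.

```javascript
// Random inputs defeat constant folding; summing into a checksum creates a
// data dependency on each iteration's result, keeping the work observable.
function busyWork(iterations) {
  let checksum = 0;
  for (let i = 0; i < iterations; i++) {
    checksum += Math.sqrt(Math.random());
  }
  return checksum; // returning (and using) the value keeps it "live"
}

const result = busyWork(1_000_000);
console.log(result); // printing the sum prevents dead-code elimination
```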
A single x86 core can actually run more than one command at a time. And the n64 can run 1.5 commands at a time when it uses a branch delay slot.
man i been wanting something about workers for so long
Elixir is faster than I thought and getting faster with the new JIT compiler improvements.
I thought worker threads were virtual threads. you learn something new everyday!
Aren't they? My understanding is that they are threads managed by the runtime, which in turn is responsible for allocating the appropriate amount of real threads on the O.S.
Adding more cores might still provide gains in a VM scenario, depending on the hypervisor. As long as your VM isn't provisioned with all physical cores, the hypervisor is at liberty to utilize more cores, even up to all physical cores, for a short amount of time, resulting in increased performance for bursting tasks.
I remember when I first learned workers, I didn't realize I could use a separate js file, so I wrote all of my code in a string; it was just a giant string that I coded with no IDE help. That was fun.
3:17 Dude the 12900k has 16 physical cores (8p+8e) and a total of 24 threads since only the p cores have hyper-threading ❗
love how you tell us to leave a comment if it's locked like we can even do that
I am stuck step programmer 😂😂
rule #34 is calling
break;
I would watch out or you'll get multi-threaded
I once made a volume rendering thingie with Three.JS and it really, REALLY benefited from Web Workers, especially interpolation between Z slices.
Hang on… wouldn’t a volume-renderer in three.js be doing things like interpolation between z-slices in the fragment shader? Could certainly see workers being useful for some data processing (although texture data still needs to be pushed to the gpu in the main thread). Care to elucidate? Was it maybe interpolating XYZ over time, like with fMRI data or something? That would certainly benefit…
Don't need proof I believe you bro!
I did use this back in 2018. I don't know how much it has improved, but error handling was painful. Also, when you call postMessage(), V8 will serialize your message, meaning big payloads will kill any advantage you hoped for. And also, remember that functions are not serializable. On the UI side, I completely killed my ThreeJS app in production when I tried to offload some of its work to other threads :D
Apart from that, you should NEVER share data between threads, that's an anti-pattern.
I executed the while-loop on the orange youtube and I couldn't change the volume.... Thanks.
Niiiice we have the exact same machine! (And thanks for the video!)
pro tip: create a loop like this.
for (let i = 0; i < 2; i++) {
i--;
}
This will make you pass the interview no-more questions asked.
😂
Hyperthreading generally gives a 30% bump in performance; your test demonstrated that handily.
6:42 bro really doubled it and gave it to the next thread
0:15 I already know this and am already using it
I use a Blob to create a new Worker, and I use at most 4 to 8 of them, one for each core
Love it, we are already doing that with our Lambdas - cause why not use the vCores when you got them 😍
You are a youtube genius man
JavaScript is referred to as a high-level, single-threaded, garbage-collected, interpreted || JIT-compiled, prototype-based, multi-paradigm, dynamic language with a non-blocking event loop
And you can still program with multiple threads... 😂
Wow, Love this tick - tock snippet
You can also pass initial data without needing to message the thread to start working; however, I feel like that one is better used for initialization, like connecting to a database.
great topic, thanks 👍
I tried the while loop thing and somehow my computer became sentient. Y'all should try that out.
i wish you showed the CPU usage on each logical processor on task manager instead of the overview
3:11 Notice how in that graph, the only languages faster than Java are all systems languages, with no VM based languages capable of beating it
people watching on phone:
“that level of genjutsu doesn’t work on me”
Ending is the moment you are glad you watched it on a mobile device
UNLIMITED VIEW TIMES!! AWESOME!! What a great video!
7:40 I saw the joke coming from a mile away
nice one 😂😂😂😂
Just wanted to point out that the memory usage for worker threads is crazy high.
I think it's because multiple Node.js runtimes are required now, maybe? I don't know...
ok, just:
- JS is single-threaded, but Node is multithreaded
- for those who don't know, Node starts with at least 4 threads by default
- Node.js is often associated with the term "single thread" because its I/O model is based on a single execution thread.
- This statement is often misunderstood, and the confusion arises from the fact that Node.js is more accurately described as "single-threaded, event-driven."
Exactly. I don't know why I keep hearing otherwise
Woulda been cool if you set it to show core usage on taskmgr
I just recently experimented with the Offscreen Canvas handling rendering on a separate worker thread. Pretty cool.
Fireship, the "S" sounds in your video sound really harsh. Consider using a de-esser plugin or a regular compressor plugin and your stuff will sound fantastic. Cheers.
I saw this video of Code with Ryan.
a small detail at 3:17: your i9 has 16 physical cores, not 8. Only half of them have hyperthreading (because there are 2 types of physical cores in that CPU). That's why it has 24 threads instead of 32
I think he just said that so people would comment, increasing the algorithm rizz
@@somedooby I wouldn't be surprised TBH, you certainly can't get that audience so quickly without knowing all the tricks
Also right click the cpu graph and choose logical processors to show the threads in individual graphs. Makes it easier to visualize IMHO.
The one time I look up something, fireship uploads a video about it lol
@1:57, uptime 4:20:00
Node.js will greatly enhance C languages; their performance increases in proportion to the number of cores, and memory consumption does not increase
Awesome video ending
Soooo much effort for something the BEAM gives you for free. No amount of bloat is going to make JS not suck.
CPUs had multithreading ability years before multicore; it's called instruction pipelining & superscalar execution. Without it, even a 4-core CPU would struggle to do the simplest tasks.
To do this magically in C/C++ use OpenMP; in Rust use rayon.
So weird, this was an interview question yesterday.
Our teacher asked us to do a question using worker threads, so I searched on YT and found Fireship. Great 😄, I thought. But the example used here is the same as my question 😆
mindblowing intro
BRO LITERALLY KIDNAPPED ME TO WATCH HIS VIDEO
This is the worker pool pattern; in Go, the implementation looks smoother. 4 workers at 100% is not a good time. We should compare with Java's virtual threads.
No clue if this could be an interesting video, but teach us how to deploy to different environments (ex: testing, production); as a junior I never quite know what this implies. Also show us tools to handle it. Thanks :)
Amazing🔥
That trick to force us to hear the sponsor block could only come from you 🤣🤣🤣
Hi from Vietnam, where the kitchen image was taken.
Just in time for my new browser game🎉
Day 5, I'm still stuck with the window open. I tried exiting the house and getting back in. Rick is still singing.
Just brilliant