PROOF JavaScript is a Multi-Threaded language
- Published: Aug 3, 2023
- Learn the basics of parallelism and concurrency in JavaScript by experimenting with Node.js Worker Threads and browser Web Workers.
#javascript #programming #computerscience
Upgrade to Fireship PRO fireship.io/pro
Node.js Worker Threads nodejs.org/api/worker_threads...
Check out @codewithryan • Node.js is a serious t...
I could feel my brain trying to stop me writing what I knew was an infinite loop but I did it anyway. I trusted you Jeff!
0:31 concurrency incorporates parallelism
what you should say is asynchronism
@@ko-Daegu who are you talking to, schizo?
me too!
@@ko-Daegu can I have some of what you're having
@@ko-Daegu you really thought you were being smart with that remark, didn't you? Only problem is that you're wrong
Fun nerd trivia:
- A single CPU core runs multiple instructions concurrently, the CPU core just guarantees that it will appear AS IF the instructions were run serially within the context of a single thread. This is achieved primarily via instruction pipelining.
- A single CPU core often executes instructions totally out of order, this is unimaginatively named "Out Of Order (OOO) execution".
- A single core also executes instructions simultaneously from two DIFFERENT threads, only guaranteeing that each thread will appear AS IF it ran serially, all on the same shared hardware, all in the same core. This is called Hyperthreading.
And we haven't even gotten to multi-core yet lol. I love your content Jeff, the ending was gold!
in the spectre and meltdown era, we like to say “guarantees”
But in hyperthreaded systems, tasks don't just appear to be executed serially, they actually are executed serially. The only difference is that the system coordinates the execution of other tasks/threads while waiting for the previous one, which is probably blocked waiting on an I/O response.
If you have a 16-core processor with 32 logical processors, it doesn't mean it can execute 32 threads simultaneously.
@@RicardoSilvaTripcall hyperthreads are in many cases parallel by most meaningful definitions, due to interleaved pipelined operations on the CPU and the observability problem of variable-length operations. For an arbitrary pair of operations on two hyperthreads, without specifying what the operations are and the exact CPU and microcode patch level, you cannot say which operation completes first, even if you know the order in which they started.
@@ragggs Lol! Maybe guarantee* (unless you're Intel)
@@RicardoSilvaTripcall Uhhhh. No. Sorry.
That chef analogy about concurrency and parallelism was genius. Makes it SO much easier to understand the differences.
Love to see jeff going in depth on this channel, would love more videos like this one.
That's why I made this channel. I've got a long list of ideas.
@@beyondfireship wonderful. Keep it up
Also, when we say "one core," that means "one core at a *time*": computer kernels are concurrent by default, and the program's code will actually be constantly shifting to different CPUs as the kernel manages a queue of things for the processor to do. Not unlike the asynchronous system that JavaScript has, the kernel will break each program you're running into executable chunks, and it has a way to manage which programs and code get more priority.
wouldn't that be kind of inefficient though? it wouldn't be able to take full advantage of the CPU cache, so i hope it does it as rarely as possible
@@orbyfied uhh, different CPU cores use the same L2-L3 cache. L1 cache is per core, but they're small and meant for minor optimisations.
L1 is the fastest, so having data available there is pretty significant. It has also grown much in size, to the point that it can basically cache all the memory a longer-running task will need now. If L1 were so insignificant, it wouldn't cause the data desync issues across threads
Then…why do I only see one core active when running simple Python code…?
@@orbyfied it could be more inefficient if only one process took a whole CPU core for itself during its entire lifetime. The process probably isn't switched between cores, but it is being swapped in and out with others on the same core for the sake of concurrency. Also take into account the hit rate that a cache may have.
Moments like 0:52, the short memorable description of callback functions, is what makes you a great teacher. Thanks man!
Keep in mind the JS world also calls any higher order function "callback" (like the function you'd pass to Array.map), whereas elsewhere afaik it only refers to the function you pass to something non-blocking.
@@kisaragi-hiu a fact that caused me much grief coming into JS from systems level.
It's a pretty good overview on how much more of a clusterfuck the code becomes once you add workers to it. And it didn't even get to the juice of doing fs/database/stream calls within workers and error handling for all of that.
"Clusterfuck", I had the same word in mind 😭😂
just use Promises, it'll process all your asynchronous functions concurrently (very similar to parallel)
@@angryman9333 A Promise runs the user-written executor function on the main thread, in a blocking manner. An async function is just syntactic sugar for easier creation of promises. Without the browser's asynchronous APIs or web workers, it doesn't run code in parallel.
@@angryman9333 a what?
I am stuck step programmer 😂😂
rule #34 is calling
break;
I would watch out or you'll get multi-threaded
6:13 To see how all of your cores are being utilized, you can change the graph from 'Overall utilization' to 'Logical processors' just by right-clicking on the graph -> Change graph to -> Logical processors.
I'd like to see a video on JavaScript generators and maybe even coroutines.
For sure, this is a really cool thing and I'm not sure how to actually use it.
generics maybe?
garbage collector in more detail?
benchmarking against pythonic code, just to get people triggered?
Thanks for shouting out code with ryan! That channel is criminally underrated
talking about multi-threading
data oriented design always helps
i watched a similar video early this year, but your way to deliver content is amazing, keep going
That ending was possibly one of your best pranks ever, a new high watermark. Congratulations 😂
Really cool, I actually saw the other video about nodejs taking it up a notch when it came out.
Lots of comments about memorable descriptions, shoutout to the thread summary at 3:30. Your conciseness is excellent.
just a heads up for your CPU: the 12900K doesn't have 8 physical cores, it indeed has 16, 8 performance and 8 efficiency cores. The performance cores have hyperthreading enabled but the efficiency cores don't, so you have 24 threads in total
😮
Oh yeah, right. So that's why the CPU didn't go to 100% after using 8 cores.
You forgot the: 🤓
But at 6:57 his CPU did go to 100% with 16
because hyperthreading is shit
the `threads` package makes working with threads much more convenient. it also works well w/ typescript.
ayo, was learning the event loop and had a bit of confusion about performance b/w single- and multithreading, and jeff just posted the video at the right time.
My brain: dont run it
8 years of programming: dont run it
the worker thread registering my inputs to the console as I type it: dont run it
Jeff: run it.
**RUNS IT**
the cook analogy was great and i now understand
aside from the outstanding quality, this ending was quite funny and hilarious! keep it up, your content is TOP 🙇🚀
Although it's called concurrent, schedulers can still only work on one task at a time. The scheduler allots a certain amount of time to each task and switches between them (context switching). The switch is just fast enough to make it seem truly "concurrent". If a task takes longer than its allotted time, the scheduler will still switch away and come back later to finish it.
Wow! Just yesterday I was watching some videos about worker threads, because I will use them to speed up the UI in my current development 😄
Thanks, now I know what script I should include in my svgs
It's like you read my client's requirement and came into support
It would be nice if you right-clicked on the CPU graph and chose *Change graph to > Logical processors*, so we could see each thread separately.
Thanks!
less useful than you might think. The operating system's scheduler may bounce a thread around on any number of cores. It doesn't make it faster, but it spreads the utilization around.
@@crackwitz Do you mean that we will not see each core graph plotting one thread?
Yes Yes Yes, and exactly extra Yes! Thank you Bro for this contribution! You are speaking out of my brain! Best Regards!
Task Manager --> Performance tab --> CPU --> Right click on graph --> Change graph to --> Logical Processors
I have had hours long lectures in college level programming classes on the differences between concurrency and parallelism and the first 3 minutes of this video did a better job of explaining it. Shout outs to my bois running the us education system for wasting my money and my time 💀
It's probably not their fault you failed to understand something so simple. Literally 1 minute on google would have cleared up any misunderstanding you had
@@maskettaman1488 if you have to pay to study, and then you have to sell yourself to a tech corp to actually learn, then the system is not that great and it should not exist IMHO
@maskettaman1488 lmao I'm not saying I misunderstood it, I'm saying fireship is much more concise and still gets all the relevant information across compared to college, despite the fact that I don't have to pay fireship anything
great topic, thanks 👍
You can also pass initial data without needing to message the thread to start working; however, I feel like that one is better used for initialization, like connecting to a database.
Amazing🔥
man i been wanting something about workers for so long
Just brilliant
Niiiice we have the exact same machine! (And thanks for the video!)
with the amount of time I've spent on this video because of the while loop, even the algorithm knows who my favourite youtuber is
Woulda been cool if you set it to show core usage on taskmgr
Wow, Love this tick - tock snippet
Little known fact: you can also do DOM-related operations on another thread. You have to serve it from a separate origin, use the Origin-Agent-Cluster header, and load the script in an iframe. But you can still communicate with it using postMessage, and avoid thread blocking on large binary transfers by using chunking. This is great for stuff that involves video elements and cameras.
I use it to move canvas animations (that include video textures) off the UI thread, and calculating motion vectors of webcams.
that looks handy! thanks for sharing
that might just help with a few of my projects
do you have any examples on github?
@@matheusvictor9629 yes
Sounds very interesting! I have a project where I think this would be useful.
Awesome video ending
seeing my cpu throttle and core usage rise in realtime was impressive :)
Adding more cores might still provide gains in a VM scenario, depending on the hypervisor. As long as your VM isn't provisioned with all physical cores, the hypervisor is at liberty to utilize more cores, even up to all physical cores, for a short amount of time, resulting in increased performance for bursting tasks
And remember, don't make promises you can't keep
I tried the while loop thing and somehow my computer became sentient. Y'all should try that out.
Good video!
Next time, change the CPU graph with a right click to see each thread's graph.
Hope it helps!
Wow, didn't know that. Thanks!
Spawning workers in Node is not new, but support for web workers in browsers is comparatively new. Good shit man.
I remember when I first learned workers, I didn't realize I could use a separate js file, so I wrote all of my code in a string; it was just a giant string that I coded with no IDE help. That was fun.
love how you tell us to leave a comment if it's locked like we can even do that
I'm still amazed at how you find such accurate images as the one at 0:32 🤔
each value should be a random value, and you should sum them at the end, to ensure the compiler/interpreter does not optimize all the work away because it detected that you never used the values
Pretty sure the compiler won't be able to optimize away side effects like this, since the worker and the main thread only interact indirectly, through events on a message channel.
people watching on phone:
“that level of genjutsu doesn’t work on me”
mindblowing intro
Epic!
It's FLAT!
I used this back in 2018. I don't know how much it has improved, but error handling was painful. Also, when you call postMessage(), V8 will serialize your message, meaning big payloads will kill any advantage you wanted. And remember that functions are not serializable. On the UI, I completely killed my ThreeJS app in production when I tried to offload some of its work to other threads :D
Apart from that, you should NEVER share data between threads, that's an anti-pattern.
I just recently experimented with the Offscreen Canvas handling rendering on a separate worker thread. Pretty cool.
Love it, we are already doing that with our Lambdas - cause why not use the vCores when you got them 😍
i wish you showed the CPU usage on each logical processor on task manager instead of the overview
You are a youtube genius man
I executed the while-loop on the orange youtube and I couldn't change the volume.... Thanks.
That trick to force us to hear the sponsor block could only come from you 🤣🤣🤣
The one time I look up something, fireship uploads a video about it lol
6:42 bro really doubled it and gave it to the next thread
Ending is the moment you are glad you watched it on a mobile device
0:15 I already know this and am already using it
I use a Blob to create a new Worker, and I spawn at most 4 to 8, one for each core
Hyperthreading generally gives a 30% bump in performance, your test demonstrated that handily.
Exactly. I don't know why I keep hearing otherwise
A single x86 core can actually run more than one instruction at a time. And the N64 can run 1.5 instructions at a time when it uses a branch delay slot.
Just in time for my new browser game🎉
Day 5, I'm still stuck with the window open, I tried exit the house and get back in. Rick is still singing.
No clue if this could be an interesting video, but teach us how to deploy to different environments (e.g. testing, production); as a junior I never know what this implies. Also show us tools to handle it. Thanks :)
In Python, handling race conditions is easy:
use a Queue and a Lock 😊
I achieved something similar in TS, but rather than locking the queue, I ensured that the jobs that could cause a race condition had a predictable unique ID. By predictable, I mean a transaction reference/nonce...
Well multiprocessing is much more mature than workers thread since multiprocessing has been the primary methods for concurrency in python, but for js it’s always been async.
That's hilarious
What are some of the useful libraries which help or use workers? Like Partytown or Comlink
Elixir is faster than I thought and getting faster with the new JIT compiler improvements.
Fireship, the "S" sounds in your video sound really harsh. Consider using a de-esser plugin or a regular compressor plugin and your stuff will sound fantastic. Cheers.
So weird, this was an interview question yesterday.
Mr jeff will you do one on creating a websocket server in node js?
I thought worker threads were virtual threads. you learn something new everyday!
Aren't they? My understanding is that they are threads managed by the runtime, which in turn is responsible for allocating the appropriate amount of real threads on the O.S.
UNLIMITED VIEW TIMES!! AWESOME!! What a great video!
I once made a volume rendering thingie with Three.JS and it really, REALLY benefited from Web Workers, especially interpolation between Z slices.
Hang on… wouldn’t a volume-renderer in three.js be doing things like interpolation between z-slices in the fragment shader? Could certainly see workers being useful for some data processing (although texture data still needs to be pushed to the gpu in the main thread). Care to elucidate? Was it maybe interpolating XYZ over time, like with fMRI data or something? That would certainly benefit…
thanks for hanging
I've used the web worker API to filter multiple arrays at once, and it's okay, but it is very unintuitive to use and could definitely be improved upon. Ideally for multiple DOM manipulations at once too, not just data processing.
Now use the web worker where webpack is involved XD
@@Steel0079 vite is the future
7:40 I knew the joke was coming from a mile away
nice one 😂😂😂😂
I wonder if there are things like mutex locks to help with the synchronisations of shared resources?
it's been a whole day and i'm still stuck in this page lol.
He definitely used an AI voice tool for the intro of this video
I saw this video of Code with Ryan.
IM STILL STUCK OVER HERE, HELP!?!?!?!?
MY PC WON'T SHUT DOWN, IT'S BEEN 5 MONTHS...
keep up the great work, love your vids!
I recently did a little side project where I needed to use a worker in a web app. The gist of the project is given a winning lottery number, how many “quick picks” or random tickets would it take to finally hit.
Hi from Vietnam, where the kitchen image was taken.
Dude he is hilarious
Can you cover native threads vs green threads?
Every async thing you do goes to the microtask or task queue, and every single one of them is executed on a different thread; once it's done, it goes back as a message to the microtask or task queue. When the call stack is empty, the event loop first takes from the microtask queue onto the call stack, then from the task queue.
Web workers, once done, also go back to the main thread's call stack
JS is still one thread
Where did you find this info? As I know each thread has their own event loop, where micro and macro executes
@@alexanderpedenko6669 when you make some async operation like promises or timers (setTimeout/setInterval), the JS engine (V8, or whatever is in Node) notices that it's an async operation and delegates it to the proper web/Node API, which is written in C++. There the code executes, and once it's done, those APIs return the result of that operation to some queue; later the event loop moves it to the main thread
"and I'll try to troubleshoot from there"
I'm thinking out loud here, but have a genuine question - Could you use workers combined with something like husky to do all pre-commit/push/etc checks at once?
For example, I may have a large unit/integration test suite, followed by a large e2e test suite, along with code quality checks and so on... All of which are ran in sequence potentially taking upwards of a few minutes to complete.
Could workers be used to run these jobs together at once?
E2E will bottleneck regardless, because of the quadrillion OS APIs it has to interact with on start, the majority of which are synchronous.
JavaScript is referred to as a high-level, single-threaded, garbage-collected, interpreted or JIT-compiled, prototype-based, multi-paradigm, dynamic language with a non-blocking event loop
And you can still program with multiple threads... 😂
It’s been 4 hours and my computer has now caught fire and is playing the interstellar theme song, help!
Well last party was fun😂