I think there is some confusion as to the actual meaning of async. Async does not really have anything to do with threads; the two concepts are related but fundamentally separate. You can have async execution within a single thread, and sometimes that is even more performant than using a thread pool. Async execution just means that the event loop can hand control off to other tasks.
I totally agree 👍
💯
Your take on async and threads is completely wrong, just saying. JavaScript has an asynchronous, non-blocking I/O model on a single thread that does not block execution. It's easier to explain with a metaphor.
If Node were a server at a restaurant, it would take an order, give it to the chef, come back out, and take another order (aka an HTTP request). PHP, on the other hand, would grab the order and wait for the cook to complete it before ever taking another order from another customer.
Basic synchronous operations versus asynchronous operations. Research the JS event loop.
What do you think callback hell is? Why did ES6 give us the Promise API? Just a fun fact: synchronous languages don't have an event loop or a callback queue. ✌️
@@Vorenus875 both explanations are understandable and I think 100% correct 😅
@@thedelanyo Sorry, you're right, I read it again. I just got caught up on the "threads" part in the video. Maybe someone can explain to me how this is a "game changer" in 2023, given that the JS event loop has always worked this way.
Ah yes, multithreaded mutable data. What could go wrong?
data race. data race!!
Sure helps that you have to explicitly state what bit of data should cause you headaches, at least it helps me think more clearly about it.
sounds like a good way to go to hell to me
You talk like you can only code by example.
This isn't like the multithreaded nightmare that is Java. You explicitly need to define a chunk of memory or data to be shared and mutable; it's not "anything goes".
People often confuse concurrency with parallelism: concurrent tasks are all in progress over the same period, but only one executes at any given instant; parallel tasks are executed at the same time, which is what requires synchronization.
And this is concurrency or parallelism? sorry I'm confused.
@@danielvega646 here's an easy way to understand it.
concurrent: the cpu will spend x time on a task, but once the timer is up it moves to another;
parallelism: the cpu will process all the tasks at the same time.
in this case, worker_threads execute in parallel instead of concurrently.
this is a big simplification, so if you want to know more about how concurrency and parallelism work you should look it up, and also try to understand how the OS schedules these tasks across multiple cores at the same time.
People get confused by this because they use the phrase "at the same time." You used that phrase to describe both concepts. I feel like the industry chose poorly with the term concurrency, since the meaning of the English word is "at the same time," which more accurately describes parallelism. I'd have offered something like interspersed, interleaved, multitasking, or the like. If computing concurrency is occurring, then the different tasks are very much not being performed "at the same time." I had been using the computing terms essentially interchangeably because of this misunderstanding, luckily without ever being in a context where it mattered.
@@donkeyy8331 concurrent looks like ____-----____---____---___ parallel looks like =========== ? Is that right?
@@bryanlee5522yup! That's exactly it
I had to stop watching after around 1:17.
The JavaScript always runs on a single thread, but all of the I/O, file handling, etc. isn't happening in JavaScript.
All of those things are managed in a thread pool. By default, NodeJS maintains a thread pool of 4 threads. This can be overridden by setting the UV_THREADPOOL_SIZE env variable.
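A small sketch of how that pool size shows up in practice, assuming the usual crypto.pbkdf2 demo (exact timings depend on your machine):

// sketch: crypto.pbkdf2 runs on the libuv thread pool, not the JS thread
// assumption: compare "node pool.js" with "UV_THREADPOOL_SIZE=8 node pool.js"
const crypto = require('crypto');

const start = Date.now();
for (let i = 0; i < 8; i++) {
  crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', () => {
    console.log(`hash ${i} done after ${Date.now() - start}ms`);
  });
}
// with the default pool of 4, the last four callbacks land noticeably later;
// with a pool of 8 they all finish in roughly one batch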
Thanks for sharing this really useful way of using worker threads. I'll have to give this a try.
Node isn't always single-threaded, as its core is written in C/C++ and certain modules will run in separate threads. But Node's single-threaded nature means we avoid having to deal with potentially difficult to debug problems related to memory sharing / data synchronization between threads.
I/O, network, and CRUD operations against a database are usually the bottlenecks, not long-running code blocking the main thread.
Well, not quite. In Node, both IO operations and long-running code can be bottlenecks. Yes, if you don't have long-running code, then your application will run fine. But as soon as you have long-running code, that'll bottleneck you way faster than in other languages because it's that much harder to execute that code in another thread. In that sense, long-running code is the Node.js-specific bottleneck and is what should be avoided if you want to have low latency.
NodeJS is single-threaded (the CPU handles tasks sequentially), but the I/O is asynchronous. So the I/O devices - like networks, storage, and other connected devices - do not have to hold on to the CPU. Other languages are multi-threaded, but their I/O was mostly synchronous. This is why you get HTTP timeouts with Java and Python, but you never get them with Node.
@@josephmariealba8483 both Java and Python have asynchronous I/O. HTTP timeouts are caused by many things (and a Node application can return an HTTP timeout too); sometimes it's due to a bad code implementation, sometimes it's a security mechanism (to avoid certain kinds of attacks which try to hold a connection open as long as possible).
@@DavidsKanal
> that'll bottleneck you way faster than in other languagues because it's that much harder to execute that code in another thread
How easy is it in other langs?
@@hl7297 for c#, more or less:
new Thread(fib).Start()
new Thread(fib).Start(singleParamPassedAsObject)
new Thread(() => fib(param1, param2, param3)).Start()
to create a completely new thread
or
Task.Run(fib)
Task.Run(async () => { fib(param1, param2, param3); })
to run your code on one of the pooled threads (generally a better idea, unless the code is long-running or does synchronous IO)
you also have PLINQ where you can for example do ' var x = someArray.AsParallel().Select(i => i*2 + 2).ToArray();' which will distribute the calculations over all cores and merge the result automagically.
Note that while it's easy to start code on a new thread, it's not that easy to make sure the code runs correctly in parallel (think locks, deadlocks, race conditions, compiler and CPU optimizations causing writes to a var on thread 2 not being visible to another thread, multiple threads accessing the same var under a coarse-grained lock effectively making your multithreading slower than a single thread, and many many more)
The way you speak and explain things is perfect and pleasing to listen to. Cheers
Hey, I had some experience with this feature while building a PWA. And sadly I have to say that Workers aren't a "magic wand" that can fix any performance issue just by wrapping code in them. One big limitation is that data transferred between a worker and the main thread has to be serialized. Serialization can cost more than the performance you potentially gain by splitting an algorithm across threads.
Shouldn't buffer sharing like in the example solve this issue?
@@EuSouAnonimoCara Yes and no :D. You can only put serialized data into a shared buffer, so the serialization performance issue doesn't go away. It can be managed in the early stages of developing a module, but sadly you can't just wrap something in a web worker to make it faster. That was my point :D
You can use transfer lists to move memory to the other thread. The downside is you can't use it in the thread that sent it, but you can have the worker thread re-transfer the same buffer back after doing whatever it needed to do. The nice thing is you can still use postMessage but avoid the overhead of deep cloning.
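A rough sketch of what that looks like with worker_threads (the worker file name is just an example):

// main.js - sketch: transfer an ArrayBuffer instead of structured-cloning it
const { Worker } = require('worker_threads');

const worker = new Worker('./worker.js'); // hypothetical worker file
const buf = new ArrayBuffer(1024 * 1024);

// the second argument is the transfer list: buf is moved, not copied
worker.postMessage({ buf }, [buf]);
// buf.byteLength is 0 here now - this thread gave up ownership

worker.on('message', ({ buf: returned }) => {
  // the worker can transfer the same buffer back once it's done with it
  console.log('got the buffer back:', returned.byteLength);
});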
@@sawinjer where do you learn all this stuff? I'm taking a Node.js course and I'm not sure if I'd be able to keep up with the people in the comments when discussing this stuff. I think I've heard a few speakers talk about this, but that was a long time ago and I can't remember who or what they said. (Kyle Simpson maybe)? Did you study CS?
@@thelegendofzelda187 I have a CS major, but this knowledge I gained from MDN articles. Also, with this particular topic I had commercial experience.
Glad you discovered it!
~3 years ago I failed a junior interview because the interviewer asked me "why javascript was single threaded".
I explained to her that I felt the question was a "gotcha question" because the initial premise was false, and that depending on the environment you could make use of multiple processes or threads with workers or clusters to achieve multithreading.
She laughed at me and told me I was wrong because JavaScript used the event loop and couldn't run on multiple threads. I don't think she was an engineer, but to this day I'm not sure, and I think she was following a script - a bad one, as you can see...
Anyways, thanks for the video! Keep pushing!
just a reminder for everyone that js _is_ single threaded
@@josedallasta yes, the language itself is single-threaded,
but the runtime can run multiple instances of it and share some memory between them.
@@josedallasta JS used to be single-threaded. However, modern JS has features like SharedArrayBuffer and Atomics, which are definitely multithreading features.
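For anyone curious, a minimal sketch of those two working together in Node (the file layout and the counting workload are made up for illustration):

// shared-counter.js - sketch: a SharedArrayBuffer counter updated from a worker
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const sab = new SharedArrayBuffer(4);           // 4 bytes = one Int32 slot
  const counter = new Int32Array(sab);
  const worker = new Worker(__filename, { workerData: sab });
  worker.on('exit', () => {
    // Atomics.load gives a synchronized read of the shared memory
    console.log('final count:', Atomics.load(counter, 0));
  });
} else {
  const counter = new Int32Array(workerData);     // same memory, no copy
  for (let i = 0; i < 1_000_000; i++) {
    Atomics.add(counter, 0, 1);                   // atomic increment, no data race
  }
}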
"She laughed at me" - trust me, you don't want to work for those people.
@@blizzy78 Happily I learned the game pretty early on!
NodeJS works exceptionally well for I/O Bound Tasks.
Worker has been part of NodeJS since version 12.
And Web Workers initially started in 2009.
Nodejs is an outdated BE framework. Why would anybody except for some "fullstack" devs use it nowadays? The libraries are an absolute mess with tons of vulnerabilities and close to zero support, let alone that half of them die within 1-2 years. Why would you use it over Python, C#, Kotlin, Go, Java, etc.? Those languages have frameworks that are way more mature, with way more reliable libraries and far fewer security concerns. People claim that starting with nodejs is simple, which is not true. It is infinitely easier for a newcomer to build a dotnet7 API. All the tooling works out of the box: no mess, actual code standards, no clickbait content, no confusion over libraries (what ORM should I use? MikroORM, Objection, TypeORM, Sequelize, etc.). You just download the SDK and an IDE. No need to download nvm. No need to fight over npm, pnpm, yarn.
Node is just a mess. An absolute nightmare to work on with 3+ people on the same project. It is also an absolute nightmare if you need to maintain the project for 5+ years. If I learned anything in the past decade, it's that staying away from Node is the best thing...
I think the shared memory is new for node
Ryan, this is very helpful. You actually helped me understand Go and Node better, as a professional developer who mains TypeScript. Thank you!
Glad this video popped up on my feed, love the way your videos are made! Subscribed :)
Came here from Beyond Fireship. Nice video! Subscribed!
Very good video, straight to the point, no BS, thanks!
2:08 Small tip:
You may use "console.time" which will log time automatically in ms.
Like this:
console.time("Fib");
const result = fibonacci(iterations);
console.timeEnd("Fib");
Thx man I never knew about this
Thanks for this video!
I am running a nextjs app, where each request takes around 10-20 seconds to complete (some GPU related stuff).
The whole app used to freeze when more than 4 users connected at a time. It was hard to debug because I thought it should be async; it turns out, as you said, that if a blocking operation is performed in an async context, the whole event loop freezes and the users see a 502.
I created a redis queue and a separate service that does the blocking requests and everything is working fine now.
Would love to try out worker threads, looks promising.
I wish to make a nextjs app too, but after watching this video and the comments, I think I have not fully understood nodejs yet... I also want to use nodejs but of course, I don't want a frozen app with just a few users! Any tips on learning and understanding it in a real-world sense?
@@shantanukulkarni8883 start making this nextjs app and learn what you need on the go
Can you explain redis queue and separate service in detail?
@@ritik-patel05 The nextjs API backend puts JSON-stringified data on a Redis queue. A Node script continuously looks for new values in the queue. When it gets a value, the value is parsed and sent to the GPU server. After the GPU server is done fulfilling the request, the Node script gets the data back, and this data is again stringified and put in Redis. While this is happening, the client polls the nextjs app every few seconds to see the status of the job in the queue. After the job is processed, the nextjs app gets the output from Redis and sends it back to the client.
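A bare-bones sketch of that worker-script side, assuming the ioredis client and made-up key names (the real setup obviously needs error handling and job IDs):

// queue-worker.js - sketch: pull jobs off a Redis list and push results back
const Redis = require('ioredis');
const redis = new Redis(); // assumes a local Redis instance

async function sendToGpuServer(job) {
  // placeholder: in the real setup this would call the GPU service over HTTP
  return { id: job.id, output: 'done' };
}

async function run() {
  while (true) {
    // BRPOP blocks until a job arrives on the hypothetical "gpu:jobs" list
    const [, raw] = await redis.brpop('gpu:jobs', 0);
    const job = JSON.parse(raw);
    const result = await sendToGpuServer(job);
    await redis.set(`gpu:result:${job.id}`, JSON.stringify(result));
  }
}

run();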
just gonna ignore that each worker took more than 2x the time
that's true, but it looks like the total running time was 1/5th of the time
Context switching!
If you spawn more threads than the number of available cores, they're going to take much longer.
That's why you should create a thread pool first and assign tasks to it.
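Something like this minimal pool sketch, sized to the core count (the fib worker file is hypothetical, and a real pool needs error handling):

// pool.js - sketch: a tiny worker pool with a task queue
const { Worker } = require('worker_threads');
const os = require('os');

function createPool(workerFile, size = os.cpus().length) {
  const queue = [];
  const idle = Array.from({ length: size }, () => new Worker(workerFile));

  function runNext() {
    if (queue.length === 0 || idle.length === 0) return;
    const worker = idle.pop();
    const { task, resolve } = queue.shift();
    worker.once('message', (result) => {
      idle.push(worker);    // hand the worker back to the pool
      resolve(result);
      runNext();
    });
    worker.postMessage(task);
  }

  return {
    run(task) {
      return new Promise((resolve) => {
        queue.push({ task, resolve });
        runNext();
      });
    },
  };
}

// usage sketch: './fib-worker.js' is assumed to post one result per message
// const pool = createPool('./fib-worker.js');
// Promise.all([40, 41, 42].map((n) => pool.run(n))).then(console.log);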
Initialization of threads also adds up. This is a bad way to benchmark.
This whole technique seems like going back to regular multi-threaded programming languages
@@anothermouth7077 No one is going back to anything; worker threads are even present in Rust + Actix, for example.
You need to understand what you are trying to do; in this case, if you have something that blocks for a long time, it's worth investing in a worker thread.
It is so much easier to do in Go though. It is essentially the same amount of code as in the first (not working) example, but it works asynchronously as you would expect. You can create async functions just by adding the "go" keyword to a function declaration (or right beside the call, if you want). Use a WaitGroup instead of Promise.all and it just magically works as you would expect. Crazy, right?
@Name I myself haven't used go that much yet, but what's wrong with it?
@Name are you serious? Go a bad language? Compared to.... JavaScript? Bro, you are killing me stop.
@Name I think you don't know basic English.
The other guy was talking about Go being better than JavaScript, and you said that Go is a bad language, meaning in context that it is no better than JavaScript; V doesn't have anything to do with this.
Plus you look like a complete moron if you go around shoving your favourite language down everyone's throats when it's not asked for.
@Name try v?
@@twothreeoneoneseventwoonefour5 it's all good until that GC cycle kicks in and grabs your ball, and there's nothing you can do to even reduce it a bit. Go is good as long as things stay simple, but when you start adding more memory-heavy work to it, the GC will happily blow away all your work. i.e. personally I prefer Jai more, but since God knows when it will get released, Zig is fine
Heads up.
Code in the Promise constructor runs SYNC and is blocking!
There is no such thing as an async constructor, not even for a promise.
Regarding 3:18, at Line 7:
To make this happen in a JS async fashion you should use Promise.resolve().then(...your code...).
Calling Promise.all([doFib()]) executes the function doFib() right away - and the code runs sync because it's calculated in the constructor of the promise, which does the work and resolves immediately, rather than creating a promise that resolves once the task is done.
All the functions in Promise.all() are executed one after another before the constructed array is even handed to the Promise.all() function - so Promise.all() is receiving an array where all the promises are already resolved and there's nothing to wait for.
What you would want to see in the Promise.all() test is not everything being calculated in sync, but printed kind of simultaneously, as all of them are actually processed in "fake" async - time-slice parallelism.
This is probably affecting the test - I didn't try it myself though because I'm too lazy to type all that code.
And one more thing - if you're using workers, you can ease your life with the "comlink" npm package, written by former Google dev advocate Surma; it reaaaally takes the pain out of worker thread communication.
> the code runs sync because its calculated in the constructor of the promise
the constructor does not run the callback function immediately; it will be executed (synchronously) only when there is nothing more to compute in the main thread. Try something like this:
const a = 'first';
const p = new Promise(resolve => resolve(1)).then(res => console.log(res))
console.log(a)
// first
// 1
I do agree that a better way to explain this could have been using setTimeouts
That's not how it works. The computation doesn't run on a function while it's in the queue. You would waste more time pushing them into the queue, because you can only run the computations one at a time, as each one is resolved immediately.
@@jc-depre to replicate the OP's point, change your example to:
const p = new Promise(r => {
console.log(1);
r();
});
You will see the result as:
// 1
// First
Also, glad the OP mentioned comlink. I love it!
The continuations of all those promises will run as microtasks on that same thread, though, so it will still run each in order and block the thread; it just won't do so until the next tick of the event loop. To be properly parallel they need to happen on different threads, which requires using worker_threads.
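A quick sketch of that behaviour (the fib workload is just for illustration):

// sketch: deferring CPU work with Promise.resolve().then() still blocks -
// the two fib calls run one after another on the same thread, as microtasks
function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

const start = Date.now();
Promise.all([
  Promise.resolve().then(() => fib(40)),
  Promise.resolve().then(() => fib(40)),
]).then(() => {
  // roughly 2x the single-call time, because nothing ran in parallel
  console.log(`done after ${Date.now() - start}ms`);
});
console.log('this prints first - the work was only deferred, not parallelized');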
I don't know what is true anymore. Everyone's just speaking their own theory 😢
It is also possible to use the built-in cluster module. The cluster module is more suitable for scaling network applications, while the worker_threads module is more suitable for parallelizing CPU-bound tasks within a single process.
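For reference, the classic cluster pattern looks roughly like this (port and respawn policy are illustrative):

// cluster.js - sketch: one HTTP server process per CPU core
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on older Node versions
  // fork one worker process per core; connections are balanced between them
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork()); // respawn a worker if one dies
} else {
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);
}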
i hit the subscribe button HAAARD
im going to explore the rest of your videos and hope they are just as well done!
thank you for the information and high quality production!
What is spinning up more processes doing for you? Server workers/children wait for jobs forever, kept hot - with Apache/Nginx/Lighttpd, for example, the workers are already in RAM, ready and waiting. If you did the server config well, of course. Your first example won't use a web server, or what?
Hey Ryan, while I understand this video is to demonstrate multicore execution I think it's important to point out that you can achieve asynchronous execution inside the event loop without workers. It would have been good to see you include the following async version of the code in your demonstration so that users understand there is a middle ground that allows asynchronous execution without requiring worker management.
async function fibonacci(n) {
  return n < 1 ? 0
    : n <= 2 ? 1
    : await fibonacci(n - 1) + await fibonacci(n - 2);
}
Async and promises do the same thing; they are just syntactically different. The video is highlighting that you need multiple cores in order to execute things in parallel. You are not able to execute things in parallel in JS because it only has access to one CPU, which can only handle one process at a time. JS async is not handling tasks at the same time; it is scheduling tasks to be handled at a later time. This is concurrency. This video is about parallelism, not concurrency.
If you've ever used assembly language before, it's like putting a jump statement at the end of a block of code to go to a previous (or future) spot in the code - the asynchronous portion of the code - and then having another jump statement to jump to the next part of the code the dev wishes to execute when the asynchronous code finishes.
@@terrencemoore8739 If you read my comment you should see that I said that async/await is not multi-core execution, and my main point was that his example used promises in a way that didn't really make sense; he could have just called the functions directly, as he wasn't really making use of promises in a meaningful way. If the code is going to use promises, it should demonstrate how they can execute code asynchronously.
I've never tried writing asynchronous Assembly language code though, so this part may be a bit inaccurate
@@John.Whitson The video is not about asynchronous code, it's about parallelism. He acknowledges that JS is async, but if you want to do two things at the same time, it's impossible. CPUs handle one process at a time. If you want two things to run at once, you need two CPUs.
Man, congrats! Awesome video quality and content.
Wow. Just impressive how direct and well executed this video is. And it takes time and work, and you make it look simple. Just a top-of-the-line job. Congratulations. The sad part is that I can only give a like once.
you also do commercials or radio announcements, by chance?
(if not, you should) 😁
The cluster module is better suited for distributing load horizontally on a Node server, while the worker_threads module is better suited for performing CPU-bound tasks in parallel.
You started this video with the problem of load distribution (whose solution is the cluster module) and then suggested worker threads with an example of a CPU-intensive job.
NodeJS (or JS) doesn't have a lock for volatile variables, so how can we ensure that two (or more) processes are not changing the same buffer in the memory at the same time?
I like the format of this video. Subscribed.
Got insights about load balancing , orchestration and cloud computing . Thanx great for a start !
Nicely crafted video ryan, really good
Thanks for introducing me to worker threads in Node! :D
I know you're trying to convince me to come back to Nodejs, but don't try; I decided to stick with Go xD
go is ❤
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
yeah, let alone js on the browser
isn't worker_threads already a thing since NodeJS 10?
Still, in most cases, a compiled language will outperform an interpreted language.
Your point is valid that it'll outperform it, but JavaScript is not entirely interpreted; it's JIT-compiled, which does a lot of under-the-hood optimisations that bring some performance gains in some use cases.
@@fullstack_journey A compiler is nothing more than an interpreter that translates the code into machine code before execution.
Being JIT-compiled really only saves time when you call the same bit of code multiple times.
@@fullstack_journey it is not entirely JIT compiled either, as the code needs to be run and the runtime needs to gather usage patterns such as which types are used...
@@FalB27 in a server, most of the code will be called multiple times.
@@FalB27 It's more complicated than that. The JIT compiler does more than just compile; it can also analyse and optimize code paths better than a program that is compiled "normally" ahead of time. Besides, a VM can compact heap memory, making it more accessible to the CPU and its cache.
Very glad I saw this video! Makes me excited to do some more NodeJS development
This example seems to be about parallelization rather than concurrency. The first time, the calculations ran on a single core and it wouldn't matter whether they ran asynchronously or blocking, because the rest of the cores were idle. The second time, the job finished faster because of multithreading, but the result would be the same even if it ran in a blocking fashion.
Agreed
Really enjoyed it. Dunno if it's the voice, but I was glued till the end... Very informative video also.
I think you have concurrency and parallelism mixed up, as well as the definition of an asynchronous function. When you ran the worker threads they were running in parallel - that is, multiple threads running the same program on the same tick. When you ran things asynchronously, those were concurrent.
I think you're right:
"Concurrency is the task of running and managing the multiple computations at the same time. While parallelism is the task of running multiple computations simultaneously."
First time on this channel. Dude! You have an excellent voice.
Great content. Sub & like.
Nice video. I finally understand the shared buffer data type.
Congratulations, explained with mastery; it turned out to be a historically meaningful video for understanding JS.
Thanks Ryan!
Saw this video and subscribed.
Liking the vibe.
Well done. Clean. Something I will use.
Thanks.
@3:02 That is why you don't do CPU-intensive execution in Node.JS. But for input-output operations it is very well optimized. It is still faster than other non-async backend language runtimes.
Friend, this is the first time I've visited your channel. I'm happy that YouTube brought me here; your voice is a gift from God. Keep up the excellent work.
This is the best video ever. First of all it finally explained to me what nodejs is, and secondly such a clear explanation of worker threads and multi threading. 🔥🔥🔥
One of the issues with using worker threads in Node.js is the memory requirement per thread: every thread has its own Node runtime. So the amount of RAM used is way more compared to threads in languages such as C# or C++.
That was pretty cool. Thanks for the video!
Worker threads are not a new thing in nodejs. They even work in browsers lol. The JS ecosystem is still slow and no one is writing fast apps in it. Nodejs is designed for IO, not CPU-bound tasks.
YOOOOO, your intro was amazing, I thought the whole video was like that for a sec hahaha. Tho if it was, ngl, it would be hella fun and exciting to watch. It sure would take a ton of time tho ;''
Ryan, you have a next level narrative voice. Get a talent agent and audition for the documentary producers at BBC and PBS and Netflix. Dude!
The fastest way for two threads to communicate is via a fixed size shared mutable buffer that is allocated once.
Thanks for this appreciate the podclass.
use pm2 instead of workers. pm2 counts your cores and will add threads on the fly.
pm2 creates multiple processes instead of threads.
This
Good bass voice! I don't understand what you're talking about but I will watch this video when I go to bed.
that is a golden voice for tutorials ❤ i am big fan right now
I don't know the performance difference between fork and worker threads, but I wanted to point out that a worker thread creates a new V8 instance, which has the overhead of allocating memory and all that stuff, almost as if it were creating a new process.
Actually, everything that's called with async is in fact on a separate thread, but that thread is doing OS stuff in the background. What you also showed us is concurrency, not parallel execution, so in fact it's not faster; it's just switching between executions frequently, not increasing speed. Concurrency only increases speed when it comes to IO-bound operations, and this is true for other techs like Python or Ruby too; Java, though, can truly run in parallel and utilise all cores. So there are many misconceptions on the net about this.
So in short, if it's utilising all cores it's parallelism; when it's just concurrency, it's just one core which quickly swaps execution contexts.
Great Demo! Thanks.
I'm a Korean junior developer. Your pronunciation is so good that I didn't have a problem understanding your explanation.
I would love to see more posts / videos like this
Very informative. Thank you!
Same. Just found out the power of Worker Thread in Nodejs recently.
Ryan, you popped up a few times now -> i subbed
Worker threads are using new processes to accomplish their tasks on separate threads (one thread per process, in a single-threaded fashion); you can get the PID of each worker_thread. I guess the only multi-thread capability is with IO via libuv, using callbacks, promises and async/await on top of it...
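If you want to check how the runtime reports it, here's a tiny sketch that prints the pid and threadId from both sides (worker_threads share the parent's pid, unlike child_process.fork):

// sketch: compare process.pid and threadId in the main thread vs a worker
const { Worker, isMainThread, threadId } = require('worker_threads');

if (isMainThread) {
  console.log(`main: pid=${process.pid} threadId=${threadId}`);   // threadId is 0 here
  new Worker(__filename);
} else {
  // same pid as the parent, different threadId
  console.log(`worker: pid=${process.pid} threadId=${threadId}`);
}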
As I understand it, Node hands I/O work that can be issued as system calls off to the OS, and work that can't be handed to the OS - crypto and the like - uses Node's thread pool.
So in the end, even if you don't use worker threads, Node is still making use of multithreading.
That's my understanding too, but I think the point of contention here is that this use of multithreading is limited.
After all, the numbers came out in Ryan's Fibonacci test...
If, in Ryan's code, the Fibonacci computation had been handled by an external C script and Node only fetched the result,
the outcome might have matched the expectations of people who trust Node's multithreading.
At the very least, there needs to be a judgment call between work that should be handled in C/Python and work Node should compute directly,
and if you spread the work across threads with workers, you should also be able to manage the risks that come with it - that's how I'd sum it up.
If you can read between the lines, then this is definitely a fantastic video. Thanks
Very well done Ryan.
great video man! internet needs more guys like you. thank you
I learned a lot from this video , thank you so much
Thank you very much. I lost an amazing job opportunity because I could not explain exactly this. You explained it so clearly that next time I will not fail 😅
Expressively understandable!
Dude you have an amazing voice. thanks for the video.
Could you please do a follow up video on how to incorporate shared buffer into this implementation?
Huh, node js can run on multiple threads using clusters as well. On server apps it will actually share threads and it can allow for failover.
the intro is awesome!
The guy just discovered a technology from 2015! Now I understand why people dislike working with JS codebases.
Async runtimes/green threads have been around since about 2010, and finally things like Go, Rust's Tokio, and Kotlin coroutines made them popular.
Wait until you discover Python.
That worker thread API is incredible. I had SO MUCH trouble making an efficient web crawler in NodeJS that I switched to Java and then Python. No joke, I ended up as a Python dev because I didn't understand NodeJS haha.
If you do understand NodeJS, you will eventually end up with Python anyway, and that's a good choice.
The NodeJS worker thread story is not that great. Performance is bad and it takes a lot of time to transfer messages between the main thread and the workers and vice versa.
In this video example, you can see that each doFib() in the main thread is done in 480ms but when using worker thread, it needs around 970ms to finish.
Let's choose a real multi-thread language when we need it for CPU intensive tasks. NodeJS event-loop is good for IO tasks only.
I already have a difficult enough time keeping node workers alive for more than a few months without PM2. Can't imagine how bad the reliability would be with that responsibility also being pushed directly into 1 node process, lol.
Hey Ryan, that's a cool video; can you please make a video about JS buffers and how to use them?
Is this the right approach? Correct me if I'm wrong:
1. Assign cpu intensive sync tasks to worker threads
2. Let node event loop/thread pool handle async ops
3. Spawn multiple processes with 1&2 to balance the traffic load
I always thought this is the way, as I said open to corrections
I swear the CPU is barely a bottleneck for most applications. Performance is overrated and the language is rarely the issue. If you want extra performance, write a module in Rust or something.
and why would I use this cryptic mess when I can write go func() and be done in Go, or new Task in C#?
because nowadays the world runs on JS 🤣🤣🤣🤣🤣😁😁😁😁
Because people are too easily swept away by hype trains😂
Absolutely amazing
Can you make video on Worker threads crash course demonstrating with express as well please
That is a very useful video. Thanks a lot!
Does it share the loaded libs at runtime? What if a package has a singleton and it is imported by both the workers and the master? Do they share a ref? What's the overhead of using it? Besides the shared buffer interface, it looks like a neat improvement over past IPC systems. Looks good overall.
Really interesting, great video.
Workers are really cool if you understand them. I use them for some front-end applications; happy to see they are in Node now. They are expensive to create though. They're not equivalent to something like a goroutine.
I'm curious, what would you use them for on the front end? Is it some kind of cpu intensive task that runs on the client?
@@matth23e2 Yeah pretty much. I run a reasonably cpu intensive search algorithm in a worker, and it just returns the array of references. Keeps the main thread unblocked. If you give workers some thought you can use them for a lot of things though.
thanks bro , that was helpful !!
Thanks for sharing such valuable information! Could you help me with something unrelated: I have a SafePal wallet with USDT, and I have the seed phrase. (alarm fetch churn bridge exercise tape speak race clerk couch crater letter). What's the best way to send them to Binance?
Does using SharedArrayBuffer require some particular HTTP header?
Yep, this is the last straw, I am learning Go, good video.
Nice and sharp explanation. Thank you. For a turn-based strategy game, what should one use?
0:20 was that flashbang really necessary?
Now you got me thinking about a framework for semantic thread management 🤔 Would love to see more about this
Elixir?
Excellent vid as always... (and you should probably do voice-over work too, lol) - Have you had any experience with Caddyserver which is written in Go? - I am thinking of using it primarily for its reverse proxy features and it would be interesting to get your take on it...
Caddy is nice! But nginx and apache are pretty good too though. If you're not familiar with any of them and you don't care about battle testedness then caddy for sure
I tried to use workers, but I did not see any improvement. After that I tried clusters, and there was an improvement. I tested with the Apache ab tool, and there was an improvement in performance.
Great job with the explanation Ryan 👏 If anyone's looking for more Node.js videos, we've released agenda scheduling and app logging to help the community too 💪
I was asked in a technical interview if Nodejs can do multithreading or multiprocessing, and the interviewer said it can’t do either. Unfortunately at the time I didn’t know any better, but that’s wrong.