Node.js is a serious thing now… (2023)
- Published: 28 May 2023
- Javascript is single-threaded, so we normally have to do weird tricks to have Node.js fully utilize multicore CPUs. With worker threads, things have changed…
Follow me on Twitter: ryancodez
I think there is some confusion as to the actual meaning of async. Async does not really have anything to do with threads; they are closely related but fundamentally separate. You can have async execution within a single thread, and sometimes this is even more performant than using a thread pool. Async execution just means that control can be handed back to the event loop for other tasks.
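To illustrate that point, here is a minimal sketch (task names and delays are made up) of async execution interleaving on a single thread, with no worker threads involved:

```javascript
// Two async tasks interleave on one thread: both "start" lines run
// first, then both "end" lines, because awaiting a timer hands
// control back to the event loop.
const order = [];

async function task(name) {
  order.push(`${name} start`);
  await new Promise(resolve => setTimeout(resolve, 10)); // yield to the loop
  order.push(`${name} end`);
}

const done = Promise.all([task('a'), task('b')]).then(() => order);
// order ends up as: ['a start', 'b start', 'a end', 'b end']
```

Both tasks make progress "at once" from the caller's perspective, yet everything above runs on the single main thread.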
I totally agree 👍
💯
Your take on async and threads is completely wrong, just saying. JavaScript has an asynchronous, non-blocking I/O model on a single thread that does not block execution. It's easier to explain with a metaphor.
If Node were a server at a restaurant, it would take an order, give it to the chef, come back out, and take another order, aka an HTTP request. PHP, on the other hand, would grab the order and wait for the cook to complete it before ever taking another order from another customer.
Basic synchronous operations versus asynchronous operations. Research the JS event loop.
What do you think callback hell is? Why did ES6 give us the Promise API? Just a fun fact: purely synchronous languages don't need a callback queue. ✌️
@@Vorenus875 both explanations are understandable and I think 100% correct 😅
@@thedelanyo Sorry, you're right, I read it again. I just got caught up more with "threads" in the video. Maybe someone can explain to me how this is a "game changer" in 2023, given that the JS event loop has always been this way.
Ah yes, multithreaded mutable data. What could go wrong?
data race. data race!!
Sure helps that you have to explicitly state what bit of data should cause you headaches, at least it helps me think more clearly about it.
sounds like a good way to go to hell to me
You talk like you can only code by example.
This isn't like the multithreaded nightmare that is Java. You explicitly need to define a chunk of memory or data to be shared and be mutable, it's not "anything goes".
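A rough sketch of that opt-in model: nothing is shared unless you allocate a SharedArrayBuffer yourself, and Atomics gives you race-free operations on it:

```javascript
// Shared memory is explicit: allocate a SharedArrayBuffer and view it
// through a typed array. Atomics operations on the view stay race-free
// even if the buffer is handed to several workers.
const shared = new SharedArrayBuffer(4);   // 4 bytes, zero-filled
const counter = new Int32Array(shared);    // typed view over the buffer

Atomics.add(counter, 0, 1);                // atomic increment
Atomics.add(counter, 0, 1);
console.log(Atomics.load(counter, 0));     // 2
```

Regular objects passed to a worker are cloned, not shared, so "anything goes" mutation across threads simply can't happen by accident.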
Glad this video popped up on my feed, love the way your videos are made! Subscribed :)
Ryan, this is very helpful, actually you helped me understand Go, and node more as a professional developer who mains Typescript, Thank you!
Thanks for sharing this really useful way of using worker threads. I'll have to give this a try.
i hit the subscribe button HAAARD
im going to explore the rest of your videos and hope they are just as well done!
thank you for the information and high quality production!
A few of your videos have been dropped into my homepage; you look like you have some big balls, man. The videos you make are mostly things I already know but haven't used yet, but it's like you're introducing them nicely.
Very good video, straight to the point, no BS, thanks!
People often confuse concurrency with parallelism: concurrent tasks run at the same time, but only one is executing at any given instant; parallel tasks are executed at the same time, thus requiring synchronization.
And this is concurrency or parallelism? sorry I'm confused.
@@danielvega646 here's an easy way to understand it:
concurrent: the CPU will spend x time on a task, but once the timer is up it moves to another;
parallelism: the CPU will process all the tasks at the same time.
In this case, worker_threads execute in parallel instead of concurrently.
This is a big simplification, so if you want to know more about how concurrency and parallelism work you should look it up, and also try to understand how the OS schedules these tasks across multiple cores at the same time.
People get confused by this because they use the phrase "at the same time." You used that phrase to describe both concepts. I feel like the industry chose poorly with the term concurrency, since the meaning of the English word is "at the same time," and more accurately describes parallelism. I'd have offered something like interspersed, interleaved, multitasking, or something like that. If computing concurrency is occurring, then the different tasks are very much not being performed "at the same time." I had been using the computing terms essentially interchangeably because of this misunderstanding, luckily without ever being in a context where it mattered.
@@donkeyy8331 concurrent looks like ____-----____---____---___ and parallel looks like =========== ? Is that right?
@@bryanlee5522yup! That's exactly it
Glad you discovered it!
~3 years ago I failed a junior interview because the interviewer asked me "why javascript was single threaded".
I explained to her that I felt the question was a "gotcha question" because the initial premise was false: depending on the environment, you can make use of multiple processes or threads, with workers or clusters, to achieve multithreading.
She laughed at me and told me I was wrong because javascript used the event loop and couldn't run on multiple threads. I don't think she was an engineer but to this day I'm not sure and I think she was following a script, a bad one as you can see...
Anyways, thanks for the video! Keep pushing!
just a reminder for everyone that js _is_ single threaded
@@josedallasta yes, the language itself is single-threaded,
but the runtime can run multiple instances of it and share some memory between them.
@@josedallasta JS used to be single-threaded. However, modern JS has features like SharedArrayBuffer and Atomics, which are definitely multithreading features.
"She laughed at me" - trust me, you don't want to work for those people.
@@blizzy78 Happily I learned the game pretty early on!
Man, congrats! Awesome video quality and content.
The way you speak and explain things is perfect and pleasing to listen to. Cheers
Now you got me thinking about a framework for semantic thread-management 🤔Would love to see more about this
You can also use the built-in cluster module: cluster is better suited for scaling network applications, while the worker_threads module is better suited for parallelizing CPU-bound tasks within a single process.
Very informative. Thank you!
Very glad I saw this video! Makes me excited to do some more NodeJS development
Node isn't always single-threaded, as its core is written in C/C++ and certain modules will run in separate threads. But Node's single-threaded nature means we avoid having to deal with potentially difficult to debug problems related to memory sharing / data synchronization between threads.
I/O, network, and CRUD operations against a database are usually the bottlenecks, not long-running code blocking the main thread.
Well, not quite. In Node, both I/O operations and long-running code can be bottlenecks. Yes, if you don't have long-running code, then your application will run fine. But as soon as you have long-running code, that'll bottleneck you way faster than in other languages because it's that much harder to execute that code in another thread. In that sense, long-running code is the Node.js-specific bottleneck and is what should be avoided if you want low latency.
NodeJS is single-threaded (the CPU handles tasks sequentially), but the I/O is asynchronous. So the I/O devices, like networks, storage, and other connected devices, do not have to hold on to the CPU. Other languages are multi-threaded, but their I/O was mostly synchronous. This is why you get HTTP timeouts with Java and Python, but you never get them with Node.
@@josephmariealba8483 both Java and Python have asynchronous I/O. HTTP timeouts are caused by many things (and a Node application can return HTTP timeouts too); sometimes it's due to a bad code implementation, sometimes it's a security mechanism (to avoid certain kinds of attacks that try to hold a connection open as long as possible).
@@DavidsKanal
> that'll bottleneck you way faster than in other languagues because it's that much harder to execute that code in another thread
How easy it is in other langs?
@@hl7297 for c#, more or less:
new Thread(fib).Start()
new Thread(fib).Start(singleParamPassedAsObject)
new Thread(() => fib(param1, param2, param3)).Start()
to create a completely new thread
or
Task.Run(fib)
Task.Run(async () => { fib(param1, param2, param3); })
to run your code on one of pooled threads (generally better idea, unless the code is long running or does synchronous io)
you also have PLINQ where you can for example do ' var x = someArray.AsParallel().Select(i => i*2 + 2).ToArray();' which will distribute the calculations over all cores and merge the result automagically.
Note that while it's easy to start code on a new thread, it's not that easy to make sure the code runs correctly in parallel (think locks, deadlocks, race conditions, compiler and CPU optimizations causing writes to a var on thread 2 not being visible to thread 1, multiple threads accessing the same var under a coarse-grained lock effectively making your multithreading slower than a single thread, and many many more).
Thanks for this appreciate the podclass.
That was pretty cool. Thanks for the video!
wow. just impressive how direct and well executed this video is. And it takes time and work. and you make it looks like simple. Just top of the line Job. Congratulations. The sad part is that I can only give a like once.
Great Demo! Thanks.
Thanks for introducing me to worker threads in Node! :D
Came here from Beyond Fireship. Nice video! Subscribed!
Saw this video and subscribed.
Liking the vibe.
YOOOOO, your intro was amazing, I thought the whole video was like that for a sec hahaha. Ngl, if it was, it would be hella fun and exciting to watch, though it would sure take a ton of time to make.
Nicely crafted video ryan, really good
nice and sharp explanation. Thank you. For a turn-based strategy what should one use?
Hey, I had some experience with this feature while building a PWA. And sadly I should say that workers aren't a magic wand that can fix any performance issue just by wrapping code in them. One big limitation is that data transferred between a worker and the main thread has to be serialized. Serialization can cost more than the performance you potentially gain by splitting an algorithm across threads.
Shouldn't buffer sharing, like in the example, solve this issue?
@@EuSouAnonimoCara Yes and no :D. You still have to serialize data into the shared buffer, so the serialization cost doesn't go away. It can be managed in the early stages of developing a module, but sadly you can't just wrap something in a web worker to make it faster. That was my point :D
You can use transfer lists to move memory to the other thread. The downside is you can't use it in the thread that sent it, but you can have the worker thread re-transfer the same buffer back after doing whatever it needed to do. The nice thing is you can still use postMessage but avoid the overhead of deep cloning.
@@sawinjer where do you learn all this stuff? I'm taking a node js course and I'm not sure if I'd be able to keep up with the people in the comments when discussing this stuff. I think I've heard a few speakers talk about this, but that was a long time ago and I can't remember who or what they said. (Kyle Simpson maybe?) Did you study CS?
@@thelegendofzelda187 I have a CS major, but this knowledge I gained from MDN articles. Also, on this particular topic I have commercial experience.
That is a very useful video. Thanks a lot!
Thanks Ryan!
Very well done Ryan.
I learned a lot from this video , thank you so much
Nice video. I finally understand the shared buffer data type.
I would love to see more posts / videos like this
Thanks for this video!
I am running a nextjs app, where each request takes around 10-20 seconds to complete (some GPU related stuff).
The whole app used to freeze when more than 4 users used to connect at a time. It was hard to debug because I thought it should be async, turns out as you said if a blocking operation is performed in an async context, the whole event loop freezes and the users see a 502.
I created a redis queue and a separate service that does the blocking requests and everything is working fine now.
Would love to try out worker threads, looks promising.
I wish to make a nextjs app too, but after watching this video and the comments, I think I have not fully understood nodejs yet... I also want to use nodejs, but of course I don't want a frozen app with just a few users! Any tips on learning and understanding it in a real-world sense?
@@shantanukulkarni8883 start making this nextjs app and learn what you need on the go
Can you explain redis queue and separate service in detail?
@@ritik-patel05 The nextjs API backend puts JSON-stringified data on a redis queue. A node script continuously looks for new values in the queue. When it gets a value, the value is parsed and sent to the GPU server. After the GPU server is done fulfilling the request, the node script gets the data back, and this data is again stringified and put in Redis. While this is happening, the client polls the nextjs app every few seconds to see the status of the job in the queue. After the job is processed, the nextjs app gets the output from Redis and sends it back to the client.
NodeJS works exceptionally well for I/O Bound Tasks.
Worker has been part of NodeJS since version 12.
And Web Workers initially started in 2009.
Nodejs is an outdated BE framework. Why would anybody except some "fullstack" devs use it nowadays? Libraries are an absolute mess with tons of vulnerabilities and close to zero support, and half of them die within 1-2 years. Why would you use it over python, c#, kotlin, go, java, etc.? Those languages have frameworks that are way more mature, with far more reliable libraries and far fewer security concerns. People claim that starting with nodejs is simple, which is not true. It is infinitely easier for a newcomer to build a dotnet7 API. All the tooling works out of the box, no mess, actual code standards, no clickbait content, no confusion over libraries (what ORM should I use? MikroORM, Objection, TypeORM, sequelize, etc.). You just download the SDK and IDE. No need to fight over npm, pnpm, yarn. No need for nvm either.
Node is just a mess. An absolute nightmare to work with 3+ people on the same project. It is also an absolute nightmare if you need to maintain a project for 5+ years. If I learned anything in the past decade, staying away from node is the best thing...
I think the shared memory is new for node
just gonna ignore that each worker took more than 2x the time
that's true, but it looks like the total running time was 1/5th of what it was
Context switching !
If you spawn more threads than the available number, they're going to take much longer.
That's why you should create a thread pool first and assign tasks to it.
Initialization of threads also adds up. This is a bad way to benchmark.
This whole technique seems like going back to regular multi-threaded programming languages
@@anothermouth7077 No one is going back to anything; worker threads are even present in Rust + actix, for example.
You need to understand what you are trying to do; in this case, if you have a long-running block it's worth investing in a worker thread.
great video man! internet needs more guys like you. thank you
Really enjoyed it, dunno if it's the voice, but I was glued till the end... Very informative video also.
Got insights about load balancing , orchestration and cloud computing . Thanx great for a start !
Thanks for the awesome tutorial,
I think you have concurrency and parallelism mixed up, as well as the definition of an asynchronous function. When you ran the worker threads they were running in parallel, that is, multiple threads running the same program on the same tick. When you ran them asynchronously, those were concurrent.
I think you're right:
"Concurrency is the task of running and managing the multiple computations at the same time. While parallelism is the task of running multiple computations simultaneously."
I had to stop watching after around 1:17.
The JavaScript always runs on a single thread, but all of the IO, file handling, etc. isn't happening in JavaScript.
All of those things are managed in a thread pool. By default, NodeJS maintains a thread pool of 4. This can be overridden by setting the UV_THREADPOOL_SIZE env variable.
@3:02 That is why you don't do CPU-intensive execution in Node.JS. But for input-output operations it is very well suited. It is still faster than other non-async backend runtimes.
that is a golden voice for tutorials ❤ i am big fan right now
I already have a difficult enough time keeping node workers alive for more than a few months without PM2. Can't imagine how bad the reliability would be with that responsibility also being pushed directly into 1 node process, lol.
Could you please do a follow up video on how to incorporate shared buffer into this implementation?
Nice and accurate video my friend! I wonder if backend frameworks like ExpressJS or Nest take advantage of this technique. Doing some processing on an image or something is a perfect use case to use worker threads.
Dude you have an amazing voice. thanks for the video.
If you can read between the lines, then this is definitely a fantastic video. Thanks
Is there any chance of using a database connection pools, dependency injection, and so forth with workers?
It is so much easier to do in Go, though. It is essentially the same amount of code as in the first (not working) example, but it works asynchronously as you would expect. You can run functions concurrently just by adding the "go" keyword to a function call. Use a waitgroup instead of Promise.all and it just magically works as you would expect. Crazy, right?
@Name I myself haven't used go that much yet, but what's wrong with it?
@Name are you serious? Go a bad language? Compared to.... JavaScript? Bro, you are killing me stop.
@Name I think you don't know basic English.
The other guy was talking about Go being better than JavaScript, and you said that Go is a bad language, meaning in context that it's no better than JavaScript. V doesn't have absolutely anything to do with this.
Plus, you look like a complete moron if you go around shoving your favourite language down everyone's throats when it's not asked for.
@Name try v?
@@twothreeoneoneseventwoonefour5 it's all good until that GC cycle kicks in and grabs your ball, and there's nothing you can do to even reduce it a bit. Go is good as long as things stay simple, but when you start adding more memory pressure, the GC will happily blow away all your work. Personally I prefer Jai more, but since god knows when it'll get released, Zig is fine.
Excellent vid as always... (and you should probably do voice-over work too, lol) - Have you had any experience with Caddyserver which is written in Go? - I am thinking of using it primarily for its reverse proxy features and it would be interesting to get your take on it...
Caddy is nice! But nginx and apache are pretty good too though. If you're not familiar with any of them and you don't care about battle testedness then caddy for sure
2:08 Small tip:
You may use "console.time" which will log time automatically in ms.
Like this:
console.time("Fib");
const result = fibonacci(iterations);
console.timeEnd("Fib");
Thx man I never knew about this
the intro is awesome!
Absolutely amazing
Can you make video on Worker threads crash course demonstrating with express as well please
I don't know the performance difference between fork and worker threads, but I wanted to point out that a worker thread creates a new V8 instance, which has the overhead of allocating memory and all that stuff, because it's fundamentally like creating a new process.
First time on this channel. Dude! You have excellent voice.
Great content. Sub & like.
NodeJS (or JS) doesn't have a lock for volatile variables, so how can we ensure that two (or more) processes are not changing the same buffer in the memory at the same time?
Hey Ryan, while I understand this video is to demonstrate multicore execution I think it's important to point out that you can achieve asynchronous execution inside the event loop without workers. It would have been good to see you include the following async version of the code in your demonstration so that users understand there is a middle ground that allows asynchronous execution without requiring worker management.
async function fibonacci(n) {
  return n < 1 ? 0
    : n <= 2 ? 1
    : await fibonacci(n - 1) + await fibonacci(n - 2);
}
Async and promises do the same thing. They are just syntactically different. The video is highlighting that you need multiple cores in order to execute things in parallel. You are not able to execute things in parallel in JS because it only has access to one CPU which can only handle one process at a time. JS Async is not handling tasks at the same time, it is scheduling tasks to be handled at a later time. This is concurrency. This video is about parallelism not concurrency.
If you've ever used assembly language before, it's like putting a jump statement at the end of a block of code to go to a previous (or future) spot in the code (the asynchronous portion), then having another jump statement to jump to the next part of the code the dev wishes to execute when the asynchronous code finishes.
@@terrencemoore8739 If you read my comment you should see that I said that async/await is not multi core execution and my main point was that his example used promises in a way that didn't really make sense, he could have just put the calls to the functions as direct calls as he wasn't really making use of promises in a meaningful way. If the code is going to use promises it should demonstrate how they can execute code asynchronously.
I've never tried writing asynchronous Assembly language code though, so this part may be a bit inaccurate
@@John.Whitson The video is not about asynchronous code, it's about parallelism. He acknowledges that JS is async, but if you want to do two things at the same time, it's impossible. A CPU core handles one stream of work at a time. If you want two things to run at once, you need two cores.
This example seems to be about parallelization rather than concurrency. The first time, the calculations ran on a single core and it wouldn't matter if they ran asynchronously or blocking, because the rest of the cores were idle. The second time, the job finished faster because of multithreading, but the result would be the same even if it ran in a blocking fashion.
Agreed
Hi Ryan, how then do we decide how many workers to spawn? My basic understanding is a cpu can have multiple cores, and a core can handle multiple threads.
Amazing content ❤
Is this the right approach? Correct me if not:
1. Assign CPU-intensive sync tasks to worker threads
2. Let the node event loop/thread pool handle async ops
3. Spawn multiple processes doing 1 & 2 to balance the traffic load
I always thought this was the way; as I said, open to corrections.
Is it possible to pass an object, or a method, with a SharedBuffer?
I'm trying to split express endpoints across different workers.
But endpoints should be defined in the child process, not in the main worker.
Really interesting, great video.
Ryan, you popped up a few times now -> i subbed
Can I make a Chrome extension using Java?
Java on the backend, and JS on the frontend.
Please let me know 🎉
Very informative video
I tried to make the same code but with image resizing instead of Fibonacci (using sharp). And I obtained a slightly different result: only 4 workers work in parallel; the others wait for them. Why does it work differently?
Worker threads spin up new V8 instances to accomplish their tasks on separate threads; you can get the ID of each worker_thread. I guess the other multi-threading capability is IO via libuv, using callbacks, promises and async/await on top of it...
This is the best video ever. First of all it finally explained to me what nodejs is, and secondly such a clear explanation of worker threads and multi threading. 🔥🔥🔥
Great video!
Great video 🎉 thank you
Workers are really cool if you understand them. I use them for some front-end applications, happy to see they are in node now. They are expensive to create though. They're not equivalent to doing like a goroutine or something.
I'm curious, what would you use them for on the front end? Is it some kind of cpu intensive task that runs on the client?
@@matth23e2 Yeah pretty much. I run a reasonably cpu intensive search algorithm in a worker, and it just returns the array of references. Keeps the main thread unblocked. If you give workers some thought you can use them for a lot of things though.
very well explained!
Great explanation :)
I find it very invocative how it evokes function.
idk how could you contain all this in your mind
Friend, this is the first time I join your channel, I'm happy that RUclips brought me here, your voice is a gift from God. Keep up the excellent work.
Ryan, that's a new thing for me too. Theoretically, I thought it was enough to wrap a sync func in the Promise API; however, we were both wrong in that understanding. 😮
Also, just wondering about multi-threaded (child) processes to employ Golang goroutines. Communication-wise between Node and Go, use standard streams? 🤔✌️
You can split a sync func into asynchronously executed pieces, but it's not always possible, and it requires a completely different approach.
Meaning that functions like fibonacci cannot be executed asynchronously or in parallel, as every next step requires data from the previous one (adding the previous numbers). However, you can Promise.resolve() the returns of fibonacci; the fibonacci function itself won't execute faster, but now the event loop is able to execute some other tasks in between recursive fibonacci calls.
In this case you'll have to await every return from the fib fn, but now it won't be as blocking. In reality it won't make much difference, though, and would most certainly still block the event loop on high numbers.
Essentially you can't make anything faster with async in JavaScript, with the exception of I/O tasks, which are actually handled by separate threads, so that the event loop, while waiting for I/O to respond, is able to perform something in between.
At 3:05 I guess you meant 'sequentially' instead of 'synchronously'. In your fibo code case Concurrent = Parallel = Synchronous (concurrent not necessarily sync. Async can be concurrent too). Good tips btw.
Same. Just found out the power of Worker Thread in Nodejs recently.
does a nodejs worker consume less ram compared to starting multiple nodejs servers? Also, are nodejs workers faster to spin up?
heads up.
Code in the Promise constructor runs SYNC and is blocking!
There is no such thing as an async constructor. not even for a promise.
Regarding 3:18 at line 7.
To make this happen in a truly async JS manner you should use Promise.resolve().then(...your code...).
Calling Promise.all([doFib()]) executes the function doFib(); however, the code runs sync because it's calculated in the constructor of the promise, which immediately solves and resolves the task, doing the work right away, rather than creating a promise that returns once the task is solved.
All the functions in Promise.all() are executed one after another before the constructed array is handed to the Promise.all() function, so Promise.all() receives an array where all the promises are already resolved and there's nothing to wait for.
What you would want to see in the Promise.all() test is not everything being calculated in sync, but printed kind of simultaneously, as all of them are actually processed in "fake" async, i.e. time-slice parallelism.
This is probably affecting the test; I didn't try it myself though because I'm too lazy to type all that code.
And one more thing: if you're using workers, you can ease your life with the "comlink" npm package, written by former Google dev advocate Surma. It reaaaally takes the pain out of worker thread communication.
> the code runs sync because its calculated in the constructor of the promise
the constructor does not immediately run the callback function; it will be executed (synchronously) only when there is no more to compute in the main thread. Try something like this:
const a = 'first';
const p = new Promise(resolve => resolve(1)).then(res => console.log(res))
console.log(a)
// first
// 1
I do agree that a better way to explain this could have been using setTimeouts
That's not how it works. The computation doesn't run on a function while it's in the queue. You would waste more time pushing them into the queue, because you can only run the computations one at a time, as each one is resolved immediately.
@@jc-depre to replicate the OP's point, change your example to:
const p = new Promise(r => {
console.log(1);
r();
});
You will see the result as:
// 1
// First
Also, glad the OP mentioned comlink. I love it!
The continuations of all those promises will run as microtasks on that same thread though so it will still run each in order and block the thread, it will just not do it until the next tick of the event loop. To be properly parallel they need to happen on different threads which requires using worker_threads.
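A tiny sketch that reconciles the views in this thread: the executor callback runs synchronously inside `new Promise`, while `.then` callbacks are deferred as microtasks.

```javascript
// The executor runs right now, on the current tick; only .then
// callbacks wait for the microtask queue.
const order = [];

const p = new Promise(resolve => {
  order.push('executor');          // synchronous: runs during construction
  resolve('value');
});
order.push('after new Promise');   // executor has already run by here

p.then(() => order.push('then callback'));  // queued as a microtask
order.push('after .then');
```

Immediately after this code, `order` is `['executor', 'after new Promise', 'after .then']`; the `'then callback'` entry only appears once the microtask queue drains, which is why heavy work in an executor still blocks the thread.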
I don't know what is true anymore. Everyone's just speaking their own theory 😢
If I have an 8-core CPU, that means it has 16 threads, so we can run 16 things in parallel, i.e. execute doFib() 16 times at once.
Can you tell me how much time it takes if we run doFib() 17 times?
Given that running it once takes ~900 ms.
you also do commercials or radio announcements, by chance?
(if not, you should) 😁
Does it share the loaded libs at runtime? What if a package has a singleton and it is imported by both the workers and the master? Do they share a ref? What's the overhead of using it? Besides the SharedBuffer interface, it looks like a neat improvement over past IPC systems. Looks good overall.
I tried to use workers - but I have not seen any improvements. After this I tried clusters - there was improvement. I tested with the Apache ab tool - and there was improvement in performance.
Technically, fork is only slightly slower these days. One could manhandle fork to essentially be threads. In fact, under Linux, threads are not actually "threads" but "lightweight processes", and that's because forking is actually what happens when we create threads; only that when we create threads, we don't clone things like the file descriptor table and some other things, so thread creation becomes somewhat faster than a normal fork.
That being said, I loved this video. New sub.
clone is used for threads, not fork
@@boohba yes, clone is the underlying syscall that it's all built upon. I figured keeping it in the fork/join terminology would be simpler in this case, as fork/join parallelism technically is the ancestor of multi threading. So forking, creating threads, its all built using the clone system call with different parameters.
I'm a Korean junior developer. Your pronunciation is so good that I didn't have a problem understanding your explanation.
Yep, this is the last straw, I am learning Go, good video.
The video was very informative and things were demonstrated well, but where can I get that code? Please help.
Wow amazing 👏👏👏
Hey Ryan
awesome voice
awesome teaching
i know you're trying to convince me to come back to Nodejs, but don't try, I've decided to stick with GO xD
go is ❤
rust us nice for me too
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
if err != nil
yeah, let alone js on the browser
Thanks!
What does spinning up more processes get you? Server workers/children waiting for jobs forever, kept hot: Apache/Nginx/Lighttpd, for example, keep workers already in RAM, ready and waiting. If you did the server config well, of course. Your first example won't use a web server, or what?
Great animation, explanation and voice. But for the love of god, please use less compression on the voice :D
@Code With Ryan, do you have a github repo with this code, or a StackBlitz? Thanks!