I made it bois, prime reacted.
Subbed! ❤
You just need to dis JS
Bro, I've subbed like 2 years ago now, and you've been putting out banger after banger for years, I always leave your videos with an insight
You just need to grow a moustache now 👌
Nice
Cody is great. Super honest and not trying to put on a facade. His videos feel like chatting with a co-worker on a Thursday afternoon when no one wants to work on bugs anymore and everyone just wants to chat about fun tech.
I like Cody, he doesn't pretend to know more than he does and is super responsive to comments/feedback :)
He definitely seems like an honest guy ... but running a static site on Vercel with Next.js and complaining about memory usage? Running Bun and finding memory leaks? Being impressed by Go channels but no word on web workers?
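For anyone wondering what the web-worker side of that comparison looks like, here is a minimal sketch; message passing is only a loose analogue of channels, and the "./worker.ts" file and module-worker setup are assumptions about your bundler/runtime:

```ts
// main.ts -- spawn a worker and exchange messages (no shared memory,
// just values passed back and forth). "./worker.ts" is a hypothetical
// sibling file; module workers need native ESM support or a bundler
// that understands this pattern.
const worker = new Worker(new URL("./worker.ts", import.meta.url), { type: "module" });

worker.postMessage({ nums: [1, 2, 3, 4] });
worker.onmessage = (event) => console.log("sum from worker:", event.data);

// worker.ts (contents of the hypothetical worker file):
// self.onmessage = (event) => {
//   const sum = (event.data.nums as number[]).reduce((a, b) => a + b, 0);
//   self.postMessage(sum);
// };
```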
@@dandogamer I just ask ChatGPT questions before I make videos and edit it to sound intelligent 😎
@@WebDevCody based
@@dandogamer like other self proclaimed gurus at 25
Prime has turned the 15 minute video into an hour-long video once again. Hooray 🎉.
He does it every time. Our boy knows his audience well.
He is the asmongold of tech youtube
@@Voidstroyer you can't compare the two, prime turns a 15 minute video into an hour of good discussion points and tangents, asmon just repeats a lot of the video and says "that's crazy"
@@victor1882 You are correct with those extra details. I was merely stating the comparison of turning a short video into a long one.
It wouldn't be transformative...
At my work they require us to use a bunch of in-house developed dependencies. Thanks to these, some apps have post-startup pre-workload idle memory usage of >800MB..
Why even use the in-house crap if it sucks?
@@ds_7 My guess is that OP works for a company that make apps for other business and therefore prioritize having a standard architecture. This can make it easier to support multiple ongoing projects, streamline the development and switch team members around.
@@QvsTheWorld in-house takes crème de la crème talent to be a success
@@darshandev1754 It takes a developer centric culture, "normal" corporate culture cannot foster excellent in-house libraries/tools.
@@tensor5113 it's definitely a dev company, I can guess because I've read several blogs and done research on the tech scene where I live.
Make monoliths great again.
Modular monoliths (modulith) - best of both worlds!
The monolith tooling and hardware have both gotten a lot better. It's in a much nicer position than 5 years ago
Nobody needs fucking Necrons in their codebase.
@@basione clearly we need to define and implement Imperator Titan architecture.
yeah, why are people scaling horizontally when you can put 256GB of RAM in a single process very easily, why are you going for $10/mo AWS bullshit microservice when you can just have a single $35K server running your entire thing for 3 years when you have less than 20,000 consumers. Heck, if you optimize for C1M, you can still have the same $35K server and literally save millions of dollars in cloud computing, just because you used monolith applications and scaled vertically.
I bet people do that because they're afraid of servers. They don't bite, trust me; I've been doing my own computing on my own hardware since 2011. Hardware keeps getting faster, and I'm still not going to need Netflix (TM) scale until I have a million customers; by then I can afford more servers and go for microservices.
Why would I rent servers from Bezos if I can buy them? I prefer to have CapEx instead of OpEx, especially if CapEx is ten times less than OpEx.
JavaScript developer shocked: learns that bloat is actually bad
😂
bloat is not bad!
You can ride on water in it!
*acknowledges - gets back to bloating stuff.
@@jsonkody That's a boat, man
How are they not thinking of bloat every second? I know why but still, it's troubling how many pay it no mind.
Imagine the thousands and thousands of drives in a data center like Google's or Amazon's. All that heat generated and electricity being consumed to operate them. Now imagine them running in Javascript.
They are
@@thewhitefalcon8539 say sike right now
A literal crime against humanity
Developers do not care about our environment. The amount of wasted energy due to lazy, mediocre developers is just terrible.
@@gracjanchudziak4755 I have bad news for you: it doesn't stop there
This is on par with the best dev convos I've seen hahaha! Love your work mate!
He's not wrong, debounce and throttle are easy functions you can implement by yourself. Lodash just makes it quick and easy.
But what's the point of doing something yourself if you're not actually doing a better job? Sure I roll my own debounce, but I am not gaining anything by doing that. People shit on js dependencies, but those are not what makes js perform badly. JS is just fundamentally not designed for performance.
Lodash is awesome, many languages have similar functionality in the standard library, and lodash is stable enough and good enough to provide that stdlib-like functionality easily and cheaply.
@@JanVerny liability
@@JanVerny javascript is not fundamentally designed for performance, so by including unnecessary things you are incurring a larger cost than in some sort of systems programming language. If you want the debounce function, just copy what you need directly into your project.
@@gwentarinokripperinolkjdsf683 But I need a debounce function anyways, it's not unnecessary. What larger cost is there?
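For reference, the hand-rolled debounce this thread is arguing about really is only a few lines. A minimal sketch, not lodash's implementation (which also supports leading/trailing edges, cancel(), flush(), and maxWait):

```ts
// Delay `fn` until `waitMs` ms have passed without another call.
function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: only fire the search request after the user stops typing for 300ms.
const onInput = debounce((query: string) => console.log("search:", query), 300);
```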
2:53 I inherited a code base once from a guy who was obsessed with Ramda. After realizing I didn't understand what was going on I started rewriting more and more of the code to actually get rid of it. In the end everyone finally understood what was actually going on and we could quickly identify the bugs that had been in the app from the start.
The 1990s web design was so different -- page rendering within 5 or 10 seconds over modem, maximum image sizes of 10K, making sure the page worked on IE/Netscape 2, 3, 4 and 5. Soooo much fun y'all missed out on!
Dude, you just triggered my PTSD...
As you get older and wiser you will all reach the same point and ask yourself: "Seriously, WTF is going on inside my software?". I write the simplest code possible. People don't tend to like my code because "it looks old". That's because I keep it simple and don't over abstract. I often don't abstract at all. But it's okay because I tend to be a lone coder so I can get away with being a miserable, old, stuck-in-my-ways luddite 🤣
Exactly why PHP is generally the best language to use for everything. PHP is better as a command line scripting and system language than as a web language. You get the full power of the OS in ~13MB RAM plus all of the security hardening of PHP being the primary web-facing server side language for the last 25 years constantly under attack. Go, Rust, Python, Javascript, C/C++, C#, Perl, Java, etc. don't have that combo. There are definitely serious security issues buried deep in those languages that PHP solved ages ago. Also, there's no separate compile step in PHP unlike most other languages and yet PHP can hold its own on the performance front fairly well.
@@privacyvalued4134 and PHP is a deeply intuitive way to work, I’ve heard so much really irrational bile against PHP over the years and thought, why are these people so angry?
Still waiting for Prime's C# arc
@nickchapsas
Needs to happen, wonderful ecosystem to develop in
Once type unions (discriminated unions) are there, it will start
"Why does a single static page use so much memory". Maybe don't use Next.JS for a single static page
That’s the whole point of the video!
i like cody but it's just sunk cost Stockholm syndrome at this point, and it's just painful to see lol
While I agree (there's zero reason to use Next.js for a landing page like that), he also could've simply set his 'output' value in his next.config.js file to 'export', which would give him an actual static page that could be deployed to any HTTP server/CDN lol
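For context, the static-export setting being described is a one-line change in next.config.js. A sketch of the documented 'output' option; it only works if the app has no server-only features like dynamic SSR or API routes:

```js
// next.config.js -- `next build` then emits a plain static site into `out/`
// that can be hosted on any HTTP server or CDN.
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
};

module.exports = nextConfig;
```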
@@DanWalshTV it's not about hosting the static content, it's about why it consumes that much memory for such a simple app. Say your app needs a simple database; then you're back to the main question: why does doing a simple backend task need that much memory?
@@funkdefied1 yeah I commented while watching, then heard my comment back in the rest of the video 😅. I just had to…
@13:18 there is a great video by the V8 devs that talks about this; basically the V8 optimizer can see your structure and minimize the memory usage because it knows the shape - the caveat is that if you start doing stupid stuff like appending keys into that object, it falls over and can't optimize it out.
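Roughly what that "shape" point means in code. A sketch only; the exact heuristics are V8 internals, but keeping object layouts stable is the commonly documented guidance:

```ts
// Objects built with the same property layout share a hidden class, so the
// engine can use fixed offsets instead of per-object dictionaries.
function makePoint(x: number, y: number) {
  return { x, y }; // every point gets the same shape
}

const points = Array.from({ length: 1000 }, (_, i) => makePoint(i, i * 2));

// Appending keys after construction forces shape transitions; do this in
// enough different ways and the engine falls back to slower dictionary-mode
// objects -- the "stupid stuff" the comment is referring to.
(points[0] as any).label = "origin";
```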
I will continue using the right tool for the right job and scale
Btw C# is getting native compilation nowadays so it can be like Go
With .NET Core you already have AOT compilation like Go. Plus, C# is the much better language: a proper type system including null safety. It's just a bit harder to learn than Go; that's why Go is so popular with people watching videos rather than coding.
@@jancartman321 The only thing I don't like about C# is that errors are not part of the type system, so I rarely know whether a method throws or not.
I would like it to be explicit. Something like in Zig.
JVM languages got that too, with native images via GraalVM.
Also buildpacks are a thing for highly optimized deployables.
C# and Java have had native compilation for ages
but C# Blazor, the C# WebAssembly frontend, is still slower than JavaScript
Over the last 5 years, development looked like:
- There is a problem to solve, should we solve it?
- No, let's spend money and time to use and learn the tool that solves our problem and 40 other problems that we don't have and also introduces 3 new problems.
And everyone was like: "Sounds good to me".
Like seriously, so many developers so obsessed with performance, and then their ToDo app traverses 6 different services around the world with every request. Like what?!
lovely take
Sorry, but I can't tell whether you're saying the problem is people using JS frameworks or people using Go for web development.
The Odin Project should make this video mandatory for their Full Stack JS path.
I'm about 90% complete with OP, and so happy I found this
"... never wanted figuratively **kermit sewer slide** more ... " 48:45
My god... I burst out laughing, this is pure poetry hahahaha
"When I was a junior dev in the 90s" (or something), we did microservices! We had dozens of .exe-binaries, most of them taking a file and spitting a file. We had "CICD" with build in scripts that build the exe and renamed the current version and copied new one in it's place. Every piece updated independently and if it broke, delete new one, rename previous version back and try again. For some reason every generation comes up with this idea that files and folders are old fashioned and need to be replaced with something new and shiny.
Were you having a stroke when you wrote this? What you describe here, it doesn't need to be replaced, it needs to be collectively deleted from everyone's memory and you need to take off the rose tinted glasses.
@@JanVerny yes I am thoroughly confused
15 minute video into one hour and 7 minutes, what a reaction
"People that use Ramda, are people who want to make a codebase only usable by them" 2:54
THIS IS SO TRUE! Ramda is HORRIBLE, it's like you're reading backwards; every time you need to put the effect of some function on your 'head scope' it gets worse.
Why would a static page need any memory at all?!
Maybe because they need the VDOM so React.js can "optimize" that website? hahaha
The server still would need some memory to be able to handle the requests themselves, like setting up the connection and processing what the request is for, stuff like that.
@@orderandchaos_at_work because JS Andys don't know the difference between a web framework and a web server.
The 0 RAM server challenge, let's go!
@@baka_baca Just host the static html file on a CDN
How can people still say "if you need to build fast, use "? There are incredible web frameworks in Go, Rust and Elixir. I have to believe that the people who push for Node.js are the ones selling hosting and prefer devs to use the least cost-efficient solutions smh
Worked in 2011 for a bank. It was an internal system managing billions in revenue. Just two bare metal server instances with two 3GB JVMs. 30k daily users without issues. In the 90s I worked for an industrial company which was able to provide services for a couple thousand employees with an AS/400 that used an amount of memory comparable to an AWS Lambda :).
It's like the Jevons paradox. When we're given more resources we use even more of them.
Probably why I've always had just enough.
Yup. The Church of Simplicity. I think many devs are seeing the light. Shoot, if *Einstein* glorified simplicity in his craft, certainly we lowly web devs/engs can strive for the same ethos.
You should have Chris Ferdinandi on your stream. He advocates for a simpler web.
Meanwhile that Fat Snake Python constricts the CPU and eats memory by the Gigabyte :D
Have you tried compiling with Cython?
Pretty sure cpython is very lightweight and doesn't consume much memory, CPU usage is another story
Those metrics are crazy, 400MB for a single static web page?! I'm a PHP dev and most of our application routes take around 8-12MB using a full framework like Symfony.
This is a sign for me to keep learning symfony
@theprimetime Please bring back "Welcome to Costco, I love you"! My daughters love it and watch your channel with me just to hear it.
I've had to adopt the extreme resource-savings mentality in Node APIs before (I didn't get to choose the stack). It's a wild ride having to think about literally every single object/function/string being created and every single process being executed (no, serialization and deserialization of JSON isn't negligible; it can cost a ton if you do it enough). When you need to serve millions of requests in short periods of time as fast as possible, it's amazing what you can learn.
Hmmm, I can't disagree. You shouldn't choose Node (or Deno or Bun or even JVM) if you need efficiency. Typescript is so tempting (and actually delivers) in dev speed for full stack apps.
Well, you didn't get to choose a stack. That's your problem. I understand completely how serving millions of requests with decent perf would be a difficult task, but JS/node/bun are not at fault here. They can do the task, but nobody claims that they are the best for the task.
What was at the core of your learnings? I tried to optimise an HTTP server using uWebSockets and a binary protocol like protobuf.
I am an efficiency-oriented programmer but have never needed to go that deep. Any chance you documented what you learned somewhere? I'm genuinely curious about what happens at the extreme end of optimizations in JS...
I'm of the opinion that the only reason to use node/deno/bun is to run JavaScript tooling so you can deploy JavaScript frontends.
We should've made all the JavaScript tooling in a compiled language from the start and let the language stay in the browser.
Super high level languages are fine for a web page I guess, or for some automation or quick-iteration thingy,
but long-running programs, programs that actually have to do heavy work, should be written in a compiled language (GCed or not depending on requirements).
In Rust, Cargo workspaces are a godsend. Not only can you start a project as a monorepo, but you can eventually just take a part of it out into its own repo when you think it should and the process is simple. You can do monorepos the same in JS, but it requires soooooooo much extra stuff to get it working correctly that it defeats the purpose. That's why monorepos in JS are usually seen as awful ideas. It's simply that the JS ecosystem has no easy way of doing monorepos so things get weird when you try to separate a dependency into its own repo later on. Cargo by default just makes it trivial enough that you can actually feel confident in attempting monorepos. By default, I even start single Rust projects as workspaces since it shouldn't affect much of how the project works and is developed
Server side rendering in JS has always been the craziest thing. Either use CSR or something other than JS if you must render HTML on the server.
I’ve been working on cross-platform applications and using Nx for monorepos and Microfrontends has been pretty smooth.
Outside of this use case I wouldn’t recommend it for most projects.
Great mentality. Even when using cloud services. Most things could cost less and be effective with just a VPS.
3:10 I haven't seen one of these cuts in prime's videos in a long time, this made me so happy
I love watching videos from these creators and knowing that prime is gonna make a reaction to it.
Package Bloat is real in every ecosystem. I almost always cite this as a reason packages fail.
Edit: spelling
walking away from javascript to LITERALLY ANYTHING ELSE! amen!
did a year ago
How about we go farther and have software that works without being constantly connected to the internet like the good ol days.
@@darshandev1754 To what? How's it going?
I use typescript on the front and try to use either C# or Go, still learning both, on the back. For work though, I have to use SharePoint on the backend -__-.
My use case for a monorepo is that we have individual microservices in it, plus a shared library which they are all using (for metrics, connecting to an external platform they all talk to, and other common things). Without a monorepo we would have to maintain this library in a separate repo and just hope that it's compatible with all our services. With a monorepo, we trigger actions to validate that a change in the library is compatible with all services that use it - which we can validate before merging and letting that code into main. Also, we don't even have to push this library to an artifactory; we just build it along with the services that are affected by a PR, which simplifies things. There are probably other ways to handle this case, but it works well for us.
There is some talk (I think CppCon) where a Chrome developer shows the ASM V8 generates (without and with JIT) and compares it to C++... It was like 30x for JIT! He also explains how this huge difference happens and why JS cannot get much faster.
That comment on Monorepos made me laugh my ass off. I feel it, I feel it!
50:59 the HTMX + Go + Templ pipeline could really use a quick start kind of guide. Especially adding in Websockets to the mix was actually a pretty steep learning curve.
XY Problem sounds like a great quote a former manager told me about how Ford said his customers kept asking for faster horses…
I haven't touched GO in a bit but I tried it out making a RESTful api.
It felt like I had to switch mental gears despite syntax being much simpler than Java and co.
yeah it feels a little different, especially the error handling
Deno has a wicked standard library based on Golang.
Exactly. Deno has all you need to build web services and CLIs in its core APIs and std lib. Memory usage is also *much* lower than Hono with Bun. And Deno is more stable and probably not memory leaking like Bun as seen in his graphs. Lastly, we don't know how he deployed. Did he use docker on the same VPC in all cases?
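To illustrate the "core APIs" point: a dependency-free HTTP server in Deno is just Deno.serve. A sketch; the memory comparisons above will obviously vary by version and workload:

```ts
// server.ts -- run with: deno run --allow-net server.ts
Deno.serve({ port: 8000 }, (req: Request) => {
  const url = new URL(req.url);
  if (url.pathname === "/health") {
    return new Response("ok");
  }
  return new Response("Hello from Deno", {
    headers: { "content-type": "text/plain" },
  });
});
```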
The real question is why does a supposedly SSR frontend framework need to do anything other than serve raw HTML and JS from behind a CDN?
If it's serving raw HTML/JS from CDN then that wouldn't be SSR 🤔
Are you thinking of SSG (static site generation)?
@@spartanA01 Sorry my question wasn't very clear. If the App is trivial, even if it is SSR why is not just cached by the CDN? Requests to the origin should be rare, and when they do happen if there is no stateful behaviour why is the response not just served from a cache in the origin?
@@Alex-qq1gm yeah good question
@@Alex-qq1gm Modern frameworks like Angular, Next.js, and Nuxt generate static content (SSG) separately from pages dynamically rendered for each request (SSR). SSG content can be uploaded to a CDN; Next.js hosting, for example, does it automatically, while dynamic information is served by server rendering.
I'm a platform engineer and everywhere is still using fat snake Python, with no incentive to move, but wherever possible I'm using Go as my go-to. It's better: its static typing by default prompts engineers to explicitly write the inputs and outputs, and that's gold when the code is the documentation, which it is in a large number of cases. When everything is statically typed, it's easier to make changes than in something like Python, where engineers take the easiest route, which ends up as:
def func(input):
    ...
No idea of its output, what it returns, or what makes up those objects.
Go's uptake is so slow because the people who make these decisions don't see the business benefit, yet it's actually more important than they think.
Good point. But Typescript has an even better type system than Go, e.g. unions (including discriminated unions and exhaustive checks) or null safety. Javascript shouldn't be used at all any more.
This really overlooks that the overhead of JS is not linear to Go. JS is concurrently running an entire compiler with a ton of extra baseline data to do that and to parse and understand the code graph. Measure again under significant load and you’ll find the difference to be a lot less. Depending on the complexity you definitely need to expect 100-250mb baseline memory for the compiler to operate efficiently with the rest of the memory. Depending on what you’re doing, V8 can build more optimal code than Go can because of the runtime context.
I mean for a one pager.. why are you deploying it to node at all? use next sure and just build statically and chuck it on S3?
Word, compile it into static assets and put cloud front in front of it. Bam!
Gonna have to look into this strat.
I recommended something similar when Cody made this video. His response was that he had some forms that couldn't be rendered statically, but if you want to stay in JS land you could just use something like Astro for small stuff like that.
14 min --> 1 hr (very decoded) -- Prime, how do you make these things feel like a movie, y'all?
Frontend engineers rediscovering backend languages is wild to watch.
Yeah, Hono is 14KB, no deps, and works in any JS env.
1 min of video = 4 minutes when prime reacting
3:25 i love these little intermissions
ok this was hilarious. I'm subscribing.
I might have flunked out of uni, but the time I spent there (CS) was very, very educational. We were taught stuff like you know, operating with given amount of bytes of memory. Nowadays the famous "2 variable value flip" is a curio, but when you have only like I dunno, 6 registers to put values in.... :D
Sure I code dumb cruds most of the time, but I promise you that all those fundamentals aren't wasted...
We use lodash at work to map, filter, and iterate through arrays! It's wonderful!
javascript has those features in the spec now, though
i think they're Array.prototype.map(), Array.prototype.filter(), and Array.prototype.forEach()
for that last one you don't even need that method, you can use a for...of loop
afaik they're not even that new, maybe for...of is but the array methods have been a thing for years now
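For example, the built-in equivalents side by side (a quick sketch):

```ts
const nums = [1, 2, 3, 4, 5];

// Native counterparts of _.map, _.filter and _.forEach:
const doubled = nums.map((n) => n * 2);        // [2, 4, 6, 8, 10]
const evens = nums.filter((n) => n % 2 === 0); // [2, 4]
nums.forEach((n) => console.log(n));

// Or skip forEach entirely and iterate the values with for...of:
for (const n of nums) {
  console.log(n);
}
```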
Thanks so much for this video
Man, I am just so happy to get to write in Elixir. ;)
I was watching demoscene coders doing some nice graphics in 6502 asm the other day. a few lines of code to make a cool image "rotation and scaling".. on a 64kB ram computer. Then I watch a modern coding tutorial and just see bloat and bloat. Sure its not the same thing, but still. :)
Uhh I mean it's like way easier to do that with bloat than in ASM 💀💀, that is not a good comparison.
Go is something that is kind of in the middle and has both benefits to some extent! That would be a better example. Also, ASM for the web 💀
I thought the same seeing the solutions people came up with. I remember someone very proud of their 900 tps, and I'm just standing there.. surely you meant 90,000 tps, right?.. nope.. off by 100x from the target. After that I just stopped caring.. tried a few times.. but never pushed on anything. "Sure, it's fine, nice 900 tps you got."
We that started with ASM are just different. Gotta accept that.
I still write code for such hardware, in assembler mostly but not exclusively. It requires a way of thinking quite different from what most modern devs do, tho it does have obvious similarities to developing for tiny embedded systems.
4:21 slow clap -- the why? this is what starts the rabbit hole to the matrix
Yes, but you can install lodash functions independently to reduce a bunch of the overhead from pulling in the entire library. Whether you still need them or not is debatable, as most of the functionality can be done in-house.
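For example, pulling in a single lodash function rather than the whole library. A sketch; how much the bundle actually shrinks depends on your bundler and whether it can tree-shake:

```ts
// Per-method import: only the debounce module lands in the bundle instead of
// the full library. There is also a standalone `lodash.debounce` npm package.
import debounce from "lodash/debounce";

const onResize = debounce(() => console.log("resized"), 200);
window.addEventListener("resize", onResize);
```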
I’m a complete coding noob. I’m also a DevOps guy at a 100% Microsoft shop. Right off the bat, I’ve got a lot going against me. lol I have what I think is a useful app idea and I’ve been wondering what I should use to build it. Even though I have had a lot of exposure to C# and Angular, I could have tried to start with that since it’s somewhat familiar. After watching this, I am convinced now more than ever to try something new… like maybe Go.
Prime:
Static linking = Yes,
monolithic app = Yes,
mono repo = No ?
What does he even mean with "just use a repo, not a mono repo?" I thought mono repo means someone isn't using multiple repos
it's true that go and js have different mx+b, but everyone knows that you remove the constants
Can someone explain monorepo vs single repo? He was railing against the former in favour of the latter, but they sound like the same thing
yeah I have no fukin idea what he is talking about, it's like hearing a crazy person argue against themselves
Huge difference when it comes to tooling and organisation. Google has all of its code in a monorepo, the biggest in the world.
"Even a kilobyte being saved" I still remember 1kb of chip ram was huge a few years ago
I want to see the numbers with other JavaScript frameworks. React has a tendency to be a bloated bundle with the virtual DOM. Svelte is a compiler, so I want to see its numbers in the same situation.
Only Prime would be able to turn a 15 minutes youtube video into a full 1hr length reaction. OMEGALUL
JVM begs to differ that 380MB is a lot of memory.
When Primeagen talked about incidental complexity, my mind went directly to testing the UI in React Testing Library.
It is nearly impossible to write even the most basic UI tests without having to do a series of hacks.
Then when you finally get the UI test, one test for clicking a damn button takes half a second to do.
It really is brain dead and untenable.
I spent more time wrestling with the broken RTL library than with any other function I made myself.
It is just ridiculous!
Turning a 14-minute video into 1 hour, he is really good at it.
Working in games and also stock exchange services makes you really chase the waste.
It's amazing the amount of work you can do in one millisecond.
Chronic self-hoster here, your puny memory numbers mean nothing to me. The extra cost of using another 60GB of memory on top of my 100 watt server (mostly from hard drives) is nothing.
This is a joke, obviously it can matter, I just don’t mind my 10 websites doing nothing on my server.
Thank you, internet
What's the point in reducing your own bundle size/memory footprint/whatever if you are then made to include a dozen third party tracker scripts that each load a megabyte of JavaScript from dubious URLs before first paint?
That provably don't even do anything, and the ad industry knows it
23:30 I worked on a project where we had so many eslint rules that we couldn't integrate with prettier. There was a separate repo just for eslint rules and one person's job was just to manage it, even though there were only 20 devs
Seems like an organisational issue. Ever seen Google's C++ style guide?
If you want to re-invent the wheel, either work for yourself or work somewhere (like F1) where the business values inventing its own wheels
More recent bun versions:
- with Hono you get 50 MB
- with Elysia it seems closer to 40 MB
Edit: I made a server with:
- RPi 3
- DietPi OS
- Bun 1.1.21
Using the built-in bun server, the whole system runs on 46MB RAM.
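For reference, the built-in Bun server mentioned above is Bun.serve; a minimal sketch (the 46MB figure is the commenter's measurement and will vary by version and hardware):

```ts
// server.ts -- run with: bun run server.ts
const server = Bun.serve({
  port: 3000,
  fetch(req: Request) {
    const url = new URL(req.url);
    if (url.pathname === "/") {
      return new Response("Hello from Bun");
    }
    return new Response("Not found", { status: 404 });
  },
});

console.log(`Listening on http://localhost:${server.port}`);
```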
I'm curious about Deno. Bun is fascinating, but not really production ready, while Deno is.
Honah* as they say
Use the libraries, have a working product, then skim it until you are left with the cheese, or the skeleton of a fully functioning site with only the functions used and no dependencies that aren't used.
You can build Rust bins using musl to be self-contained! It's like a compiler target option thingy. And you can build containers "FROM scratch" containing only the rust-musl bin. Dunno about Go, but I'd think it can pretty much do the same thing by default, following Prime's explanations. So I think that's a non-issue for both languages.
Like building a 50k-line Rust backend that musl compiles down to a self-contained 25MB binary to run as a container. Cannot complain at that point!
C# here go!!
As someone who started on PHP v3, it's hilarious to watch declarative JS morphing into basically PHP.
The monorepo at my work is spread so thin that a single requirement change removed an entire service one time
But sharp is just a JavaScript native binding. The real code in the backend is compiled C++ anyway, right?
You can do your own middleware stuff in the new version of Go. It's honestly really slick. I used to clown on the Go community because they were all "don't need packages, just use the standard library!" until I needed to do the exact thing it claims to be exceptional at, simple web servers, and then I got recommended a web framework. But that isn't the case anymore.
I learned web components before dabbling in these front-end frameworks and I find web components to be easier. With the frameworks, you have to learn a whole ecosystem and keep up with the changes, whereas with web components you need to know vanilla es6 js and learn how components work.
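A minimal custom element for comparison, using only standard DOM APIs; the element and attribute names here are made up for illustration:

```ts
// No framework: just the Custom Elements and Shadow DOM standards.
class GreetingCard extends HTMLElement {
  private shadow = this.attachShadow({ mode: "open" });

  connectedCallback() {
    const name = this.getAttribute("name") ?? "world";
    this.shadow.innerHTML = `
      <style>p { font-family: sans-serif; }</style>
      <p>Hello, ${name}!</p>
    `;
  }
}

customElements.define("greeting-card", GreetingCard);
// Usage in HTML: <greeting-card name="Cody"></greeting-card>
```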
Go Modules works great. You just have to set your GOPRIVATE environment variable to exclude the public server checksum verification for your internal Git hosts. That's it.
As an added note. I don't believe I've ever watched a video where I agree on so much.
Luau's parallelism is the simplest thing ever, it just works automagically!
"Wild!?! I was absolutely Livid!"
Just 5 minutes in, but my take is that most people are used to having a ton of resources in their PCs. That much CPU and memory are like training wheels on a bike. Most newer programmers keep those on for life.
Back in college, I was doing a project on a PIC16F690. I chose that one for the reasonable price and the massive memory (256 bytes 😂).
WebDevCody always presents very normal ideas a lot of people have as if they were some of the most original ideas he's ever had.
This one could be titled: I figured out that bloat isn't good
Wow, we went from PHP, to "static" JS React, to the new "PHP" (Next.js). CDN? What is that? 😂😂😂
Because they want to use Vercel and serverless 😅😂😂
Because come on, it is as simple as moving builds behind a reverse proxy
Do what you need for your core business functions yourself? yes, absolutely.
25 years ago now I started with web development, but 40 years ago I started learning to code, on systems with 64k ram.
It took me a few years to get there, but by 2004 I was creating things, including ones which use JS, which still work perfectly fine today, because they do not depend on any external code or libraries or such. It does a bit of 'probing' to figure out what the DOM looks like, which makes it work with pretty much every browser which supports JS and was released in the last 20+ years. I 'modernized' the site a couple years ago by replacing the css, but beyond that, it hasn't needed any changes just to keep it working, only to expand the functionality. It uses a 'framework' I created myself, limited to the things I need, and completely self contained and hosted by my own servers.
This is more work initially, but, not having to touch things unless I wanted to functionally change them, never running into libraries breaking my code, not running into a framework no longer being maintained, etc, has saved so much more time over the last 2 decades, that initial investment was totally worth it.
Actually, Formula 1 has standard tires produced by Pirelli: soft, medium and hard slick compounds, plus intermediates and full wets. The teams cannot fux with reinventing wheels.
21:45 - I don't fully understand the point/sarcasm in this part. Why is having to have a package.json with exports listed bad?
The standard library for Go still isn't 100% there yet. Fiber's way of handling middleware is chef's kiss. Gin is second place.