0:29 Theo always says edge functions cost $0.65 per million. But to be clear, that's for middleware. Edge function pricing is $2 per 1 million execution units (EU). One EU is 50 ms of compute.
For billing, EUs are calculated per request like this:
EU = total CPU ms / 50 ms, rounded up per request
Example 1 - light CPU usage:
20 ms CPU / 50 ms = 0.4 EU, rounded up to 1 EU
= $2 per million requests.
Example 2 - heavy CPU usage:
310 ms CPU / 50 ms = 6.2 EU, rounded up to 7 EU
= $14 per million requests.
You are not billed for the CPU time spent waiting for network calls. Waiting for fetch() is "free".
Looking at my own billing last month, I had 70 ms average CPU time. I was charged almost exactly 2 EU per request, or $4 per million, so the above calculations check out.
Spent way too long on this comment but hopefully it's helpful for someone.
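For anyone who wants to reuse the math, here's a small TypeScript sketch of the billing formula described above. The 50 ms execution unit and $2-per-million-EU rate come from the comment; the function names are just illustrative.

```ts
// Sketch of the edge-function billing math described in the comment above.
// Rates and the 50 ms unit are taken from the comment; names are illustrative.
const MS_PER_EU = 50;
const USD_PER_MILLION_EU = 2;

// EUs are rounded up per request, so even a tiny amount of CPU bills as 1 EU.
function executionUnits(cpuMs: number): number {
  return Math.max(1, Math.ceil(cpuMs / MS_PER_EU));
}

function costPerMillionRequests(cpuMs: number): number {
  return executionUnits(cpuMs) * USD_PER_MILLION_EU;
}

console.log(costPerMillionRequests(20));  // 1 EU -> $2 per million requests
console.log(costPerMillionRequests(310)); // 7 EU -> $14 per million requests
console.log(costPerMillionRequests(70));  // 2 EU -> $4 per million, matching the billing above
```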
I always appreciate people actually running the numbers. Looks interesting
Thanks for the numbers, appreciate your effort🙌. This is the first and only YT comment I've screenshotted.
This is terrific, thanks
See you in 6 months again
🤣
😂😂😂
I love how the conclusion is basically what everyone was doing 10 years ago. Sometimes (or most of the time), new isn't always better.
Yeah, I mean you gotta try new things tho cause that is how you get improvement
But it's worth trying; otherwise, how else would we progress?
@@null_spacex But it should probably be tested on your own projects and not on company time and resources.
@@adampatterson what happened to R&D 😂😂😂
@@adampatterson if that’s how you feel about your company then sure, but at my company I do things differently
For a stateful application I would rather not go for edge, and I find this push absolutely ridiculous. Data replication has always been a considerable issue, one we as an industry/community/whatever have been trying to solve for quite a while. Just because you can ship some static JS across a CDN doesn't mean you can do the same with backend logic, and this paradigm can't be extrapolated to querying data. This is a fundamental misunderstanding!
EDIT:
P.S.: Distributed databases are the way to go if you're actually trying to reduce response times, and then there is a whole new problem: the CAP theorem. So edge hasn't magically solved anything.
We fly all our users to our AWS region when they want to use the app.
from the edge to the taint... web dev moves so fast
Lmaooo
Yeah, people shilling things they're paid to shill is as old as time. Did Vercel stop sponsoring videos or what? The edge runtime has been nothing more than milking the shit out of devs and startups. Love you Theo, but your opinions have always been sus to me. :D
😂 He was and will always be sus
He's part of Vercel, so business is business. @@himurakenshin9875
this needs more upvotes
JS Devs ☕
Need 2500 packages to do maths. Need 50 services for a hello world site. Can't set up shit unless it has a web UI.
They'll learn some day
@@spicynoodle7419 No, we web devs will never learn. I think you're a CS guy. I'm self-taught and don't know the basics of CS. Web dev is all about trends and emotions.
I cannot fathom how people could not foresee this happening, and that they had to throw tens of billions of dollars at the problem before realizing they were shooting themselves in the foot. And I still see companies spending millions migrating to serverless edge now, so they didn't get the memo. The same is happening with DBs at the edge: it's amateurish to think it will work so well in so many cases that it actually ends up being the new generic deployment pattern.
To this day, the simplest deployment scenario for performance, by a mile, is still the same as 15 years ago: render as close as possible to the data, cache as close as possible to the users. So basically compute + db close to each other + CDN at edge for dynamic content. This is still the generic way to do it despite the insane amount of frankly foolish effort chasing things that really didn't add up at initial scrutiny.
I'm pretty sure it's similar to one of those situations where, out of a team of 120 devs, 119 are forced to follow the whims of an incompetent lead
This only proves that even CTOs at big tech are kinda dumb. Or they simply justify the existence of their oversized teams by overengineering lmao
@@MeonisRP Aren't CTOs just engineers?
💯💯💯💯💯
@@MeonisRP If that weren't the case, how would we earn money 😂. This stupidity is good as long as we're earning money 😂. Sorry if anyone is offended, but I want money; if this earns me money, what's the issue?
I've said it forever: just use a VM, with an Elixir or Go backend and a self-hosted relational database on the same VM. If you're single-region, go with Postgres; if you're multi-region, use CockroachDB with geopartitioned tables.
Why Elixir or Go?
It's a beautiful day; I'm switching from Edge to Bono with or without you, but I still haven't found what I'm looking for.
Hmm, I think Google has a production-ready, battle-tested solution with Cloud Spanner: multi-region distributed data with strong ACID guarantees, and it has a PostgreSQL interface too...
This should also be possible with YugabyteDB
Also CockroachDB, Amazon Aurora, FoundationDB and others. The only issue is their pricing (either for managed, or for manage-it-yourself resource allocation).
CockroachDB, TiDB
Second this. Spanner uses Paxos for data replication among replicas to ensure data availability and fault tolerance, not for the transaction commit process itself. This means the latency of reaching consensus does not directly add to the latency of write operations. Additionally, the use of the TrueTime API for commits makes them strongly consistent.
Seeing all this complexity is starting to get to me. I love web development, but it's becoming too hectic again I think.
There is probably some Laravel (PHP) app out there running on a single VM with a CDN in front of it, generating millions of dollars a year, while here in Node land we are reinventing everything, it seems lol. I'm glad I'm good at resisting jumping on each hype train unnecessarily.
People whining about things getting "hectic". LOL dude, you have no clue what you're talking about. We have more access to tools and knowledge than ever, and because of that you think it's hectic? Just because you see a lot of things you don't understand doesn't make it hectic. You're just confused.
@@FainTMako No, I'm not confused and I do understand these technologies, but it sometimes feels like we are solving problems we created for ourselves.
You don't need a damn edge runtime to serve a blog or a basic website. Complexity like this is being shoved down our throats all the time.
You still have people successfully using WordPress and generating millions with it, avoiding all this complexity that seems to plague the Node.js ecosystem.
I wasn't talking about access to knowledge, I was talking about people using unnecessarily complex solutions to a problem that doesn't really exist in their website/app.
See you in 8 months for "We are hosting our own servers now"
Devs really love making life complicated for themselves; the classic monolith server is good enough for 95% of all web apps. First render means squat if all the following calls are slow.
The main constraint here is a global database which leads to the idea of the database being distributed. Creating an architecture around user-scoped databases is a fundamental shift in paradigm which unlocks edge databases.
It's funny that Theo doesn't see this. He's an interesting twist of new-age with a weird touch of boomer mentality. It's the exact shit I really dislike seeing in the industry.
Nothing is closer to the client, than the client itself.
Thanks Sherlock Holmes
@@joshuagornall260 Bro is telling you to make a mobile app with the data staying on the user's device, like the good old offline apps
Cloudflare doesn't just have D1, it also has Durable Objects. I don't have a perfect grasp of how they work myself, but they're worth looking into. I think it's like many little databases that live near your users (I think Turso can also do this). For example, if you were making a Jackbox-games-type thing, you could spin up a Durable Object for each room created and spin it down when the game is over. Then you could replicate the data to your analytics database on your own time; it wouldn't matter if that's slow, because users wouldn't be involved in it.
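To make the "one object per game room" idea concrete, here is a rough TypeScript sketch assuming Cloudflare's Durable Objects API and types from @cloudflare/workers-types. The class, route, and storage key names are illustrative, not anything prescribed by the comment.

```ts
// Rough sketch of a per-room Durable Object, assuming Cloudflare's Workers
// runtime and types from @cloudflare/workers-types. Names are illustrative.
export class GameRoom {
  constructor(private state: DurableObjectState) {}

  // Every room gets its own instance, so this storage acts like a tiny
  // per-room database living near the players who created it.
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/join") {
      const players = (await this.state.storage.get<string[]>("players")) ?? [];
      players.push(url.searchParams.get("name") ?? "anon");
      await this.state.storage.put("players", players);
      return new Response(JSON.stringify(players), {
        headers: { "content-type": "application/json" },
      });
    }

    // When the game ends, the room's data could be exported to a central
    // analytics database on a slow, non-user-facing path, as suggested above.
    return new Response("ok");
  }
}
```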
Holy shit. Literally the first comment in this section that I've read that makes sense and shows you know what you are talking about from a professional perspective..
Think of a Durable Object as a little memory-backed database. But it's literally just an object in memory that's synced and replicated.
A Durable Object is not replicated; there's only one instance of each object globally
@@magnusred2945 LOL
@@FainTMako ???
"Edge runtime" that offers a limited set of Node's APIs, like making you use fetch instead of fs? Bro, just give me a VPS+CDN.
I'm not quite convinced.
I want to postulate two theses.
1. Going to your closest datacenter is almost as fast as going to your nearest CDN.
2. Going to your closest datacenter, then on to the central datacenter and back, is almost as fast as going straight to the central datacenter.
Yes, database replication isn't perfect, but if you are smart about it, you can probably render a lot more of the page without waiting for the central server, than the empty shell a CDN would give you.
Great point.
Database replication is a big problem over long distances...
I recently learned about the existence of WinterCG, a body for standardization of JS runtimes. I hope Vercel, Cloudflare, Deno, Bun... and obviously Node, agree on a standard to improve DX across multiple runtimes.
A side-by-side video showing a basic setup with no frills & no edge vs each complex variant would be helpful. (probably exists somewhere)
Then we could see what is actually gained for the added complexity.
It would also be useful to determine which types of apps gain the most from these different configurations.
Just build a majestic monolith and put a CDN in front of it. If you do it well, it should be able to handle millions of concurrent users on a $20-a-month VPS, on which you can run any runtime you want. Then, if your database needs to scale past that, you can set up a dedicated internal network with as many nodes as you want, and even have it scale on demand depending on the host you pick. Then, if some parts of your app require breaking off, you do that. But that's when you get to the unfathomable numbers that Google and Facebook handle.
For read-only content, yes, you can take a very high load with just a good CDN setup and a VPS or bare-metal server behind it... even with Cloudflare's free tier...
But as soon as you need to check the database for every user, it starts to become more complicated...
Quite the opposite, this is ideal for such cases. Accessing the database from localhost to localhost is as fast as it gets. And if you aren't using Redis for session caching, you're missing out on free performance. @@LtSich
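A minimal sketch of that Redis-next-to-the-app session cache, assuming the ioredis client and a Redis instance on the same box; the key prefix, TTL, and the loadSessionFromDb helper are hypothetical stand-ins, not from the comment.

```ts
// Minimal session-cache sketch, assuming ioredis and Redis on localhost.
// Key names, TTL, and loadSessionFromDb are hypothetical placeholders.
import Redis from "ioredis";

const redis = new Redis(); // localhost:6379, same VM as the app and the DB

async function getSession(sessionId: string): Promise<Record<string, unknown> | null> {
  const cached = await redis.get(`session:${sessionId}`);
  if (cached) return JSON.parse(cached); // localhost round trip, effectively free

  const session = await loadSessionFromDb(sessionId); // hypothetical DB lookup
  if (session) {
    await redis.set(`session:${sessionId}`, JSON.stringify(session), "EX", 3600);
  }
  return session;
}

// Hypothetical stand-in for the real database query.
async function loadSessionFromDb(sessionId: string): Promise<Record<string, unknown> | null> {
  return { userId: "u_123", sessionId };
}
```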
And drop the database: "your data fits in RAM".
Unironically, you can do a lot just by sharing objects globally in memory, which you can't do if your site is distributed across hundreds of edge nodes. @@gang_albanii
I think the solution already exists and was implemented by VK and Facebook long ago.
They used zone routing, assigning specific users a host URL (the closest and least loaded regional server).
With that in place, 99.99% of users get rapid speeds at the edge with their own DB "copy".
They send requests to and get responses from their dedicated host namespace, edge server, and DB.
When a user moves to another state, their data is moved to another server as well.
what is "VK"?
@@basedest4451 It's a Russian Facebook clone
@@basedest4451 Most likely a social network popular in Russia.
@@basedest4451 Since they put it right next to Facebook, I assume it's VKontakte, a Russian social media platform.
9:13 Real bleeding edge stuff; I recommend you read Principles of Distributed Database Systems by M. Tamer Özsu and Patrick Valduriez, published in 1990, if you want to learn more.
I keep returning to this video again and again.. Brilliant!
I like my Stack with Cloudflare Workers and D1
Edge computing is yet another solution to yet another problem entirely imagined, or created by the baroque over-complication of web development sold to developers by opportunistic cloud providers and OSS pet projects that have outlived their usefulness.
Some spoilers for the next five years of people waking up:
- It's not actually that hard to manage a server in your own data center. It's definitely not harder than dealing with AWS, Kubernetes, etc., and it's a hell of a lot cheaper.
- The problems being solved by transpiled languages like SASS/TypeScript, etc. are generally non-existent or grossly overstated.
- Frameworks like React/Vue/etc. are both more complex and more difficult than just using the modern web platform.
- The web platform has a built-in modern feature-rich language with support for classes, async, modules, etc. It's called JavaScript.
- The web platform has built-in components. They're called web components.
- The web platform has a built-in router. It's called a web browser. The MPA "page reload flickering" concerns are really a thing of the past, particularly with the View Transitions API.
- The web platform has a built-in cross window/tab event bus. It's called the Broadcast Channel API (a minimal sketch follows this list).
- The web platform has a world of built-in components that came about while you were being distracted chasing frameworks. (expanders, dialogs, datepicker, colorpicker, datalist, autocomplete, etc.)
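As a concrete example of the event-bus point in the list above, here is a tiny TypeScript sketch using the standard BroadcastChannel API; the channel name and message shape are arbitrary examples, not anything the comment specifies.

```ts
// Tiny sketch of the built-in cross-tab event bus (the BroadcastChannel API).
// The channel name and message shape are arbitrary examples.
const channel = new BroadcastChannel("cart");

// In the tab where something changed: broadcast it.
channel.postMessage({ type: "item-added", sku: "abc-123" });

// In every other tab on the same origin: react to it.
channel.onmessage = (event: MessageEvent) => {
  console.log("cart updated in another tab:", event.data);
};
```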
The only things I really disagree with: SCSS does make CSS much easier with mixins, and frontend frameworks are good when you are building a lot of dynamic content. Writing that stuff in vanilla JS is a bigger pain (imperative vs. declarative programming), and keeping track of state in vanilla JS can get out of hand quickly.
As with anything in programming, "it depends". These points depend on your requirements. I do believe people reach for frameworks and some tools too soon, but they are not a gimmick a lot of the time. As for your "next five years", who cares about that if you need to build something now? Most stuff people build now won't even be alive in 5 years. In 5 years I'll agree with most of your points, but I need to get work done now. Regarding edge? Yeah, I'm not really buying it yet. Observing what's going on, but I'm cautious.
- Managing your own server is definitely more work. It's not just about creating a VM and running your app on it: updates, backups, hardware upgrades, etc. There is a lot more work involved. Definitely harder than clicking a few buttons inside some dashboard. Although it's a great skill to have in order to realize what "problems" those cloud providers solve for you and when to reach for their services
- I consider SASS still super useful at the moment because of nesting and file partials. In five years, I'd consider SASS definitely obsolete, but Typescript not at all
- Yeah if you need a website you don't need React/Vue etc. Anything more complex than that and vanilla JS is just a chore to work with. If you work alone do whatever you want, but if you work in a team don't drag others down with your bad decisions. I'd expect that in 5 years it will be mostly the same, but popularity of frameworks will just shift around and other frameworks will steal some more market share from React
- Web components are not as powerful as framework specific components. Web components are the future, but in the present we need something better a lot of the time. I'd say they still won't be great in 5 years because 5 years ago people were already saying web components will destroy everything else and nothing really happened
- View Transitions API doesn't have good enough browser support yet. In 5 years yeah that API will be amazing
- Broadcast Channel is a great API for its purpose
- Tell that to designers that are frustrated by not being able to style those UI elements to match everything else. And also some of these are just missing some enhanced UX capabilities or are just not as good as some custom made options. Although I prefer native HTML elements as much as possible, they don't satisfy every project's needs. Sure in 5 years these might satisfy most of our needs, but I'd say not all of them
Although I do really agree with your opinion that using native/vanilla/platform as much as possible is a great choice, but some things just don't cut it in many cases.
"more complex and more difficult than just using the modern web platform" 🤣🤣🤣🤣🤣 what are you yapping about
@@randomman172 waiting for an argument.
@@rand0mtv660
- Managing your own servers is pretty painless these days with so-called "private cloud" solutions and other strategies (VM hosts, docker, etc.)
- Agree that SASS used to be pretty handy for nesting. Now that CSS supports it in all major browsers, that point is kind of moot.
- Disagree that native tech is a "bad decision." Consider this - over the next five years, we can slowly update our project to newer technologies as they are available to all browsers. Anyone chasing frameworks is going to have to wait for wrappers and ultimately gear up for yet another complete rewrite once their framework stops being updated in favor of a framework touting a new leaky abstraction that's going to fix everything. And switching frameworks isn't easy. React components work in react, and nowhere else. Etc.
- Web components are absolutely as powerful as downstream framework components. React, etc. don't add new features to the platform. They just offer abstractions over what is already there. Agree that five years ago, native web components and associated APIs were not in great shape without third-party libraries/compilers. Do not agree now at all. "Nothing really happened" - A lot happened. You should go look. Google, Microsoft, Amazon, GitHub, SalesForce, Netflix etc. certainly have.
- View transitions are complete in Chrome/Edge and will be supported in FF and Safari in 2024. Fair that it's premature to say use this, but it's more of a five months thing than a five years.
- You're talking about ShadowDOM. You don't have to use ShadowDOM for web components. In fact, it's a bit easier not to. If you don't, your global styles apply all the way down as usual. If you do use shadow DOM, you use design tokens with CSS Variables to offer controlled styling (variables pierce the shadow.) Components don't fix everything, but if you want components, web components beat Vue components, React components, etc. all day. No build step, one JS file that can be dropped and used anywhere, complete compatibility with frameworks, and are entirely future-proof.
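To make that last point concrete, a minimal sketch of a light-DOM custom element: no Shadow DOM, no build step, styled through a CSS variable acting as a design token. The element and variable names are made up for illustration.

```ts
// Minimal sketch of a light-DOM custom element: no Shadow DOM, no build step,
// styled via a design-token CSS variable. Element and variable names are made up.
class UserCard extends HTMLElement {
  connectedCallback(): void {
    const name = this.getAttribute("name") ?? "Anonymous";
    // Light DOM: the page's global stylesheet applies here as usual, and
    // --card-accent can be themed from anywhere in regular CSS.
    this.innerHTML = `
      <article class="user-card" style="border: 2px solid var(--card-accent, steelblue)">
        <h2>${name}</h2>
      </article>`;
  }
}

customElements.define("user-card", UserCard);
// Usage anywhere, framework or not: <user-card name="Ada"></user-card>
```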
This was super helpful, it clarified a lot of the big picture for me. Thanks!
Always the same: we come full circle, then do it like 15 years ago but more complex. Same with frameworks and so on. All the hype, all the learning and migration, and once you're OK the web says "we figured out it's better to do XY!"
What type of hosting does Edge imply? A specific one? Vercel? Azure/GCP/AWS? Is it its own thing? Can I run the edge runtime in a container?
What are examples of the type of write-heavy apps with this problem? For read-heavy sites, weren't these all problems solved two decades ago?
If we want to keep the data in one region, using pinned edge is a good solution to have cheap and fast compute without cold start, and still close to the data.
Worrying about 150ms of DB latency while loading 150kb of Next.js at the same time.
What if you can map 95% of a user's db activity to a dedicated db instance that's close to them, and still maintain the other 5% of slower activity in a way that's easy to cover up? (optimistic updates, etc)
That's what I think of when looking at how I might use Turso to solve a problem. Also batch processing can be done on that distributed data without affecting the user.
The flip side of this to me is whether you have to replicate all transactions back out to all edge instances - that could be a lot. If you had a customer specific set of data cached at the edge and replicated though, that could be killer. No idea if that sort of sharding is feasible.
This is not a new problem/solution. This is something engineers have worked to solve since the start of the internet.
The new problem is amateur engineers who call themselves full stack, not actually understanding the full stack. Then jumping on hype trains and getting promoted in their companies.
The JS world is full of this; it changes so much, but for what actual value? Some of it is a move forward, but the majority of it really isn't. It's just complexity shrouded in abstractions.
It's almost like serverless was never designed for write heavy applications without caching.
I agree, but I also want the dynamic aspect of things that I've gotten used to with rendering on the edge. Having partial pre-rendering in a solid state would solve a lot, as I see it.
such a well explained video with awesome diagrams to boot
Hot take: maybe we should avoid database sessions that make all those roundtrips in the first place and use JWT server sessions instead.
If done right with Auth.js, a JWT can be stored just as securely in a cookie and can hold RBAC info in custom claims.
This not only saves the roundtrip and database computational costs, but also greatly simplifies those edge location problems.
TLDR; the edge is much simpler when you make only one roundtrip.
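A hedged sketch of that single-round-trip idea: verifying a signed session JWT at the edge with only a public key, no database call. This assumes the jose library, an RS256-signed token in a "session" cookie, and a "roles" custom claim; none of those specifics come from the comment.

```ts
// Sketch of verifying a session JWT at the edge with a public key only.
// Assumptions: jose library, RS256, a "session" cookie, a "roles" claim.
import { importSPKI, jwtVerify } from "jose";

const PUBLIC_KEY_PEM = `-----BEGIN PUBLIC KEY-----
...`; // placeholder; shipped with the deployment, never fetched per request

export async function getSession(request: Request) {
  const token = request.headers.get("cookie")?.match(/session=([^;]+)/)?.[1];
  if (!token) return null;

  try {
    const publicKey = await importSPKI(PUBLIC_KEY_PEM, "RS256");
    const { payload } = await jwtVerify(token, publicKey);
    // RBAC info travels in custom claims, so no extra DB round trip is needed.
    return { userId: payload.sub, roles: (payload.roles as string[]) ?? [] };
  } catch {
    return null; // expired or tampered token
  }
}
```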
But if you only make one round trip, you're not saving much time anyway? So, why add complexity?
But your JWT point makes sense, at least on the surface. There are probably cases where it would create problems, and you'd have to be extra careful with security since you put sensitive data in a client token, but it's worth investigating.
@@DmitryShpika Yes, you definitely can't put sensitive information in a custom claim. But roles aren't sensitive info.
The goal of edge computing is to bring everything closer to the user. Ideally, you not only want to bring the server as close as possible, but the database too (with replication). The problem is the distance between your server and db. When there are multiple round trips, even if you put your db closer to your server in an edge setup, you might still end up with longer distances to travel than with the regular serverless approach.
My take is always that there are a bunch of smart people out there who will figure out the ideal solution, and meanwhile we can just use what fits our current needs. Of course CTOs, startups, and some dedicated engineers are still going to need to figure out what's best, but I'd argue that for most people the debate is much simpler. They don't need to be blazingly fast, just average.
In SQLite land there is Marmot. From its repo: Marmot is a distributed SQLite replicator with leaderless, eventual consistency.
Just wait until there are servers on Mars.
Working in tech: Where every decision you make is wrong
Why do we need a global database for everything? Would it make more sense to have regional databases, storing only data specific to regional users? If there are cases of someone in e.g. EU wanting to fetch US data, obviously it would take longer but they'll be aware of that e.g. an EU customer accessing their Amazon US account.
You found the way at last. Just run a server. Own your shit. Don't overcomplicate shit.
The main thing I get out of this is: we could all benefit if servers were hosted closer to where each of us is. And the reason that's not possible today is...? Idk, it seems we're all hosting on US services. E.g. for LATAM we only have a cloud server in Brazil.
Maybe I'm confused, and obviously I'd need to know more about the architecture, but why is the auth broken up into different requests anyway? I'd imagine it makes more sense to do all of that querying in one request than to have the application figure it out in this piecemeal process.
Because people refuse to learn/use SQL is the easy answer. The standard these days is Prisma, which does a ton of single requests and re-parses all the data into what you asked for instead of letting the DB do its thing.
TLDR: What's your take on CRDB?
CockroachDB can enable performant and scalable multi-master ACID compliant SQL with local nodes that can receive reads or writes that sync masters across all regions. Maybe a client-side shard/cache (not a full master region of CRDB) could enable smooth replication from the client to nearest cluster node?
How do I configure a CDN to deliver a cached shell but then stream in the rest of the content from origin?
Render everything as a skeleton and fetch json to fill it in with data
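A small sketch of that "cached shell, then fetch the data" pattern the reply suggests: the CDN serves a static skeleton page, and a client script fills it in from the origin. The #dashboard element and /api/dashboard endpoint are hypothetical.

```ts
// Sketch of hydrating a CDN-cached skeleton with JSON from the origin.
// The #dashboard element and /api/dashboard endpoint are hypothetical.
async function hydrateSkeleton(): Promise<void> {
  const slot = document.querySelector("#dashboard");
  if (!slot) return;

  slot.innerHTML = "<p class='skeleton'>Loading…</p>"; // instant, part of the cached shell

  const response = await fetch("/api/dashboard"); // only this request hits the origin
  const data: { title: string; items: string[] } = await response.json();

  slot.innerHTML = `<h1>${data.title}</h1><ul>${data.items
    .map((item) => `<li>${item}</li>`)
    .join("")}</ul>`;
}

hydrateSkeleton();
```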
I see edge databases being good for read-only data, with a centralized database for writes. Kind of like the days of memory cards for games: all the static data on the CD and the dynamic data on the memory card. Static data in D1 with a Worker, critical write data in Firebase. The REST API could return a binary to indicate the state of the static content.
Freaking hell, this guy just discovered the CDN... Meanwhile he (and many others, cough cough Next.js and Vercel) has been hyping useless features. The worst thing is that developers who supposedly know what they're doing can't see the problems for themselves.
How are Linode and DigitalOcean with a CDN better or worse to use in this situation????
Every time I hear the term I just think of the wrestler 😎
Just use both: server/serverless for mutations and read-your-own-writes, and edge for everything else, if you are really conscious about latency.
All that complexity, compute close to the client, etc., and I'm just wondering: is this really necessary? If your app needs to work in multiple regions, it's very good to have database replicas in the same country as the user (or at least the same continent). How close is close enough, though? Is this really the way to get the best performance? For all but a very few select applications, wouldn't CDNs be more than enough? All that effort could have been spent on making the application more performant, smaller and faster in general, which would probably be a much bigger gain than trying to squeeze out those few ms of being a tiny bit closer to the client (which in the end still needs to get back to the main server...). I guess I was an edge sceptic from the start....
Wouldn't using Apache Cassandra solve the database issue?
My company uses RDS Aurora (Postgres flavor) with a few globally distributed readers. We tested PlanetScale as a possible alternative but ultimately decided to stick with Aurora for the time being. In our tests, PlanetScale was about 2x more expensive for the same read performance (tested globally). PS was marginally faster on write speed but not enough to justify a 2x cost increase.
What is cool about Planetscale, is you don't need pro database dev to configure it
So maybe your company have some dev and engineer to maintain it and have better performance and cost
@@Reptiluka_ very true - PlanetScale is definitely easier to setup and maintain.
Love the excalidraw doc name 1:49
If you use JWT authentication you don't need to make a request to the database for auth. Then, if you only need to make one or two real requests, I think edge is great. But you do say that. I think distributed databases are super cool though.
How are you sending the token without a DB lookup?
@nikilragav When people log in to the service you need to do a DB lookup to create the token, but after that you don't need a DB lookup anymore. The edge function just checks the token with the public key.
@@quintencaboso Every time you invalidate the token you'll need to do that DB lookup again, right? Also, how does the public key thing validate the token? I don't understand what algo is involved, I guess.
write-thru cache co-located with the displaced server?
...consistency can be someone else's problem 😅
What is this browser? (in the video obviously)
If you really cared about those seconds, you should have used something like SvelteKit or Astro.
One cannot build user dashboards with Astro. For the rest I can agree that being close to the metal is more important.
How does Astro fetch data faster?
@@himurakenshin9875 yes you can
Please, please talk about "local-first" solutions like Replicache. It solves this exact problem without any deploy shenanigans.
Local-first ftw!
So then are we supposed to build a monolith?
this is great for microservices but nah I'm done dealing with cold startups causing nasty delays.
The document being called "Edging is overrated" killed me lol
Honestly, for most use cases moving your app to the edge in the first place seems to me like an indicator of bad judgment
3:25 I don't quite understand: isn't a CDN just serving static assets? All the requests should still happen between the client and the server. How is the server content "technically" streaming back through the CDN?
If it's dynamic, just do it from a datacenter. Period....
It's funny to see how deep this rabbit hole gets. You guys keep overcomplicating things at the infra and DX level just to try and shave 10 ms off the load time.
FWIW, this is why we did nosql in the first place
How the hell does NoSQL help with this?
@@spicynoodle7419 It makes replication easier (and multi-master replication at least feasible), and with it, moving the DB closer to the user. If you actually need edge latency and global scale, you get Bigtable, Dynamo, and Cassandra, and you give up ad hoc queries.
The edge sells a seductive story, but the tradeoffs are not worth it for app developers.
They will be even less worth it in the future, for two reasons.
1. Data transfer between the user's device and the origin server is getting faster through better infra: HTTP/3, higher 5G coverage, compact serialization formats such as Protobuf, etc. In Germany, we don't have fiber optics as a standard yet. Networking is not that tight a bottleneck that we need to drop a perfectly reasonable setup (colocated server and database) and adopt a more complex and less well understood mental model.
2. The concept of a strongly consistent relational database on the edge doesn't work. These databases require distributed consensus to guarantee consistency, which is, to my knowledge, far more costly on the edge than on a small number of nodes at known locations. This limitation is intrinsic and not going to go away without a major breakthrough in distributed database technology. That breakthrough isn't here yet.
What drawing tool is this? Is it tldraw?
Have u tried Deno and Deno Deploy?
Have you tried Fauna yet?
Back to Aurora
Here we go again
Deno is the new node
Bun is the new Deno
all hail vercel
Theo, what browser are u using?
Arc
90% of web dev 'innovation' at this point is no longer solving REAL problems. its just bored nerds shifting around problems.
I thought of MS Edge at first when I saw the title, then edge CDNs.
Does anyone have a TLDW summary?
Great take!
Will Next.js use Bun instead of Node in the future?
Probably not, or only in the far future; they are betting on Turbo for now.
@@versaleyoutubevanced8647 well, Turbo could run on Bun
@@igrschmidt I think they have different ways of building the app; they're competitors
@@versaleyoutubevanced8647 but Turbo is a bundler and I was talking about the Bun runtime
🦔
Isn't edge dumb anyway, because everything is less secure this way and more hackable?
I think everyone hopes that cryptocurrency ledgers will solve the distributed database problem. Idk
can't we just make better loading screens? just a thought.
Why not install a native program so you don't need to download hundreds of megabytes of data over the network all the time
I don't understand why people aren't using regional edge runtimes. Is it only because of dependency hell? Because that's getting better and better every day. Edge makes much more sense for speed and price.
Ohhh that Edge
You could be biased because you are sponsored by PlanetScale.
You said in the past you aren't biased by that because they genuinely were a technology you recommended when you agreed to have them sponsor you, but what about when they become more obsolete? You will still have them as a sponsor, which will make your opinion more resistant to change, aka biased.
It’s like the politician who isn’t influenced by large “donors”
I never believed in the edge or its benefits.
In closing, this is an ad. Lol
Please do more animations with react (nextjs)
Nerd-Mode
Well well well
Azure Cosmos
Edge is a terrible browser
Agree. Can't believe Theo was running his server code on it
Take a look at GUN