That's It, I'm Done With Serverless*

  • Published: 29 Aug 2024
  • Gonna be a long year of moving everything to the edge...
    ALL MY VIDEOS ARE POSTED EARLY ON PATREON / t3dotgg
    Everything else (Twitch, Twitter, Discord & my blog): t3.gg/links
    S/O Ph4seOne for the awesome edit 🙏

Comments • 466

  • @t3dotgg
    @t3dotgg  1 year ago +138

    Oh no this is going viral which means the comments are going to be utter shit. FAQ HERE - IF YOU ASK SOMETHING I ANSWER HERE, YOU'RE PERMABANNED
    "What is the edge runtime" - it's an ALTERNATIVE TO NODE that is VERY MINIMAL and VERY FAST. That is why it has no cold starts. That's also why it has no native capabilities. (mentioned at 8:54)
    "Regional edge is contradictory; you made that up!" - As I said AT THE START, edge refers to TWO THINGS: runtime and location. If you limit your edge functions to run in a specific region, they're called "regional edge functions". Learn more here: vercel.com/blog/regional-execution-for-ultra-low-latency-rendering-at-the-edge
    Seriously, if y'all don't cut the shit I'm turning off comments.

    • @xN811x
      @xN811x 1 year ago +22

      lol

    • @xN811x
      @xN811x 1 year ago +7

      Something to add to the FAQ maybe, as I can't seem to find the original comment questioning it:
      - Why does the cold start take several seconds?
      - Why would we care about it? The number of needed cold starts approaches zero in the long run

    • @NotesNNotes
      @NotesNNotes 1 year ago +77

      I just got here stop yelling at me

    • @ruyvieira104
      @ruyvieira104 1 year ago

      @@xN811x typescript compiling all the node_modules

    • @BusinessWolf1
      @BusinessWolf1 1 year ago

      The internet is full of idiots. You should know this by now.

  • @Gabriel_Pureliani
    @Gabriel_Pureliani 1 year ago +758

    First serverless, then edge, next thing you know theo's in your kitchen installing a server for less latency.

    • @ea_naseer
      @ea_naseer 1 year ago +20

      they might as well start torrenting databases and some parts of the backend to reduce latency

    • @sohn7767
      @sohn7767 1 year ago +26

      Eventually we are back in the Stone Age of locally running services

    • @vivarantx
      @vivarantx 1 year ago +2

      ​@First Last is the AGI gay by any chance?

    • @RawBert
      @RawBert 1 year ago +1

      @@ea_naseer at this point blockchain makes more sense

    • @Fernando-ry5qt
      @Fernando-ry5qt 1 year ago +1

      @@ea_naseer Might not be that crazy, running microservers on edge and using a token algorithm to handle the data persistence.

  • @plopplippityplopyo
    @plopplippityplopyo 1 year ago +235

    It's funny watching devs reinvent everything they shunned 😂

    • @Winnetou17
      @Winnetou17 1 year ago +7

      This needs more upvotes

    • @muskrat7312
      @muskrat7312 1 year ago +12

      YES! So spot on. It's even better when you listen to a dev who has "discovered" security principles and watch them walk the team through all the things they should have been doing from the beginning.

    • @meorung05
      @meorung05 1 year ago +16

      Just wait until they fully move client side rendering back to the server

    • @BKearal
      @BKearal 1 year ago +6

      @@meorung05 You mean server prerendered HTML. The heresy! :D

    • @philipgumm9243
      @philipgumm9243 1 year ago

      Was thinking the exact same thing.

  • @wiztek1197
    @wiztek1197 1 year ago +53

    "Put everything on a VPS" and stop thinking
    Best decision I ever made

    • @t3dotgg
      @t3dotgg  1 year ago +10

      see: my video on getting DDOS'd last week lol

    • @timschupp2080
      @timschupp2080 5 months ago

      @@t3dotgg no one said you can't put it behind Cloudflare anyway

    • @Kimitri
      @Kimitri 4 months ago +14

      @@t3dotgg so put everything on a vps with cloudflare protection

    • @sphinxwar8529
      @sphinxwar8529 3 months ago

      ​@@Kimitri I agree, the level of fucking idiocy devs go to while there are simple point-and-click solutions is astounding. I feel like there are two different worlds - the actual dev world and YT tech space where everything is THE NEW THING. Like SHUT THE FUCK UP oh my god. Yes, it's fun to learn the new things, but wait for them to mature before switching. Fucking tech youtube is a bunch of hipsters ordering essential oils for a flu.

  • @heidji
    @heidji 1 year ago +136

    I feel that web developers went from optimizing LAMP stacks and dove directly into serverless, to quickly realize that it's expensive if you're not going to optimize, and then went back to optimizing. Next video will be about "how I ditched serverless to configure my own Varnish/Redis cache"

    • @o_glethorpe
      @o_glethorpe 1 year ago +36

      So true. You need to ask yourself how this man makes money and you will see that this is a propaganda campaign disguised as tech content, and don't you dare debunk it or you will get banned.

    • @heidji
      @heidji 1 year ago +33

      @@o_glethorpe I don't necessarily think that he's doing propaganda, but rather milking content for views. No one on earth changes their mind on tech stacks like this guy

    • @Firestar-rm8df
      @Firestar-rm8df 1 year ago

      I've heard memcached has better performance than Redis as it is written in a lower-level language and supports multithreading, but admittedly I haven't tested it yet.

    • @System0Error0Message
      @System0Error0Message 1 year ago

      Well, you can't optimize your PHP if AWS is the one running it on serverless.

    • @Firestar-rm8df
      @Firestar-rm8df 1 year ago +1

      @@System0Error0Message I hate PHP. I hope it dies and is replaced with C++ backends and WASM clients.

  • @allNicksAlreadyTaken
    @allNicksAlreadyTaken 1 year ago +528

    You are the meme JavaScript developer that has a different stack every week, but pretends they don't.

    • @DannyMcPfister
      @DannyMcPfister 1 year ago +8

      😂😂😂

    • @33ethan33lol1
      @33ethan33lol1 1 year ago +5

      Are you a developer who doesn't like to improve your stack? I mean, updating the stack and changing it are two very separate things, buddy. 🎉🎉❤ Theo

    • @curiouslycory
      @curiouslycory 1 year ago +26

      Theo does the experimentation and research and as a result I get to spend a lot less time doing it to find really great tools.
      I hope he keeps changing his stack every week!

    • @damionmurray8244
      @damionmurray8244 1 year ago +7

      @@curiouslycory I concur. Your comment epitomizes the phrase *_"'Tis wise to learn from one's mistakes, but wiser still to learn from the mistakes of others"_*

    • @Davidlavieri
      @Davidlavieri 1 year ago +22

      Chasing the framework of the week, the tech of the month, whatever it is, never satisfied. This is not targeting real developers with real companies, just the usual Twitter "TODO app developer" - basically just clickbaity sponsored content: "omg it only takes .00023 seconds to create this full-fledged app with this tech", the full-fledged app being a todo app with auth 😂

  • @mattiarasulo9934
    @mattiarasulo9934 1 year ago +107

    idk guys, I feel that back when I was building with Express and EJS and deploying on a VPS with a simple NGINX reverse proxy we reached the peak, and then we started over-engineering...

    • @SimonCoulton
      @SimonCoulton 1 year ago +25

      @@vinos1629 of course there is a reason, adding to your resume…

    • @svenhofstede
      @svenhofstede 1 year ago +9

      Agree 100%. Maybe this is useful for use cases that I’m not aware of but to me this seems like major extra complexity. Connecting to your database using REST?! Your existing code won’t always work because the runtime doesn’t support certain functions? And if they do it’s because the vendor is maintaining some custom converter? What problem are we solving here? Is it purely for cost reduction while achieving low global latency?

    • @ethannr1
      @ethannr1 1 year ago +5

      Build for now and not the future, the majority of projects will never need to use edge

    • @t3dotgg
      @t3dotgg  1 year ago +9

      I don't think you guys understand just how hard it is to do VPS management correctly at scale.

    • @t3dotgg
      @t3dotgg  1 year ago +17

      @@vinos1629 and I don’t have to hire engineers like y’all to manage it full time while maintaining cheaper, faster, more scalable solutions 🤷‍♂️

  • @_2_100
    @_2_100 1 year ago +10

    this is a marketing video

  • @LawJolla
    @LawJolla 1 year ago +25

    Soon we will rediscover servers where cold starts, hand shakes, and ORM spin ups aren’t a thing.

    • @captaindrake8040
      @captaindrake8040 10 months ago +3

      don't tell Theo he will rewrite everything again

  • @AmithKini
    @AmithKini 1 year ago +116

    It's surprising to see how "frontend development" went from "develop HTML, JS, CSS" --> "develop an SPA but the backend handles the database and hosting" --> "decide if a feature/page is served on serverless or edge and take care of it". Does this mean small app developers no longer need dedicated backend engineers?

    • @pts394
      @pts394 1 year ago +15

      yeah backend devs are often useless lol
      (for small apps)

    • @gamemoves2415
      @gamemoves2415 1 year ago +90

      @@pts394 terrible opinion. Who do you think wrote the code for edge to work? Frontend devs? Lmao

    • @daleryanaldover6545
      @daleryanaldover6545 1 year ago +12

      It's fullstack all along. I've been fullstack since I entered the industry for all I know.

    • @daleryanaldover6545
      @daleryanaldover6545 1 year ago +13

      @@pts394 words of a frontend dev who can't center a div and happens to be stuck in vim.

    • @pts394
      @pts394 1 year ago +6

      Hey guys, chill, I was talking about smaller apps. Of course backend devs are important, nobody is debating that. Maybe I didn't use enough words to say what was on my mind

  • @jacobwwarner
    @jacobwwarner 10 months ago +2

    I really like how you give an explanation with a visual diagram to support it. It's really helping me understand some of the more infrastructure-side tech.

  • @MarthinusBosman
    @MarthinusBosman 1 year ago +22

    I want the "Edge" to just become everyone's PCs/phones and everything just running like torrents and not off any owned server whatsoever.

    • @pts394
      @pts394 1 year ago

      sounds similar to a blockchain distribution

    • @pursuitofknowledge5566
      @pursuitofknowledge5566 1 year ago +9

      @@pts394 lol no it doesnt

    • @pts394
      @pts394 1 year ago

      @PursuitofKnowledge how come? Blockchain is a similar concept, isn't it? Everybody in the network has ownership of something, and at the same time everybody has the ability to share it with multiple users at a time. It's like a giant system built on top of some sort of community

    • @pursuitofknowledge5566
      @pursuitofknowledge5566 1 year ago +9

      @@pts394 I would suggest looking up what torrents are.

    • @pts394
      @pts394 1 year ago

      @@pursuitofknowledge5566 yeah sounds like a good idea after all lol

  • @aprilmintacpineda2713
    @aprilmintacpineda2713 1 year ago +25

    Sometimes Theo isn't a reliable source.

    • @lex-fridman
      @lex-fridman 1 year ago +9

      hmm.. like a lot of times in fact. I suspect most people watching Theo are not real senior engineers and those who are senior just use Theo's content as one uses newspapers, just to hear about topics in the market and selectively do their own deep research on technologies.

    • @manai2683
      @manai2683 1 year ago +3

      @@lex-fridman To be honest that's what anyone should be doing with any info on the internet in general. Sure there are more or less reliable sources, but in the end, you make the call.

    • @xenonchikmaxxx
      @xenonchikmaxxx 1 year ago

      @@lex-fridman System architect here. Some time ago I consulted for my friends on the tech stack for their startup idea. The issue is that in my country developer salaries are pretty high, so I decided to put everything that was possible into serverless. My main job is at a big corpo so I'm not familiar with this stuff, and Theo's videos helped me a lot.
      By my standards the resulting app was shit af and performance sucked, but it's more pragmatic than hiring an expensive team of Java+JS devs for a startup with an unknown future. If everything goes right and the guys get some investment we'll of course hire a normal team, but for now it's not an option.
      So, all this stuff is not suitable for big corpos where "Serious Engineers" work, but it's useful for testing business ideas.

    • @alokpuri9953
      @alokpuri9953 1 year ago +3

      I would like to correct you. It's not "sometimes" - you can never trust Theo, this is just a Vercel-sponsored channel

    • @zakariabougroug2687
      @zakariabougroug2687 3 days ago

      ​@@lex-fridman and you're assuming juniors are not capable of doing that?

  • @Joao-te9sl
    @Joao-te9sl 1 year ago +35

    What you are referring to as Edge Runtime is in fact specifically the Cloudflare Workers runtime (Vercel is powered by Cloudflare Workers), which runs on V8 isolates. There is nothing that prevents you from having another edge runtime with an approach that does not use isolates. AWS Lambda@Edge is also an "edge runtime", but they use their own solution to achieve this (on which you can only use Node and Python as far as I know); it's just much inferior to Cloudflare's.
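
For readers who haven't seen one: a Cloudflare Workers-style handler is just a function from Request to Response, built on the same Web-standard classes that Node 18+ also exposes globally. A minimal sketch (routes and messages invented for illustration), which can be exercised directly without any provider:

```javascript
// A minimal Workers-style fetch handler. Request/Response/URL here are the
// Web-standard globals shared by edge runtimes and modern Node.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === '/hello') {
      return Response.json({ message: 'hi from the edge' });
    }
    return new Response('not found', { status: 404 });
  },
};

// Since the handler is just Request -> Response, it can be invoked directly:
worker.fetch(new Request('https://example.com/hello'))
  .then((res) => res.json())
  .then((body) => console.log(body.message)); // hi from the edge
```

This shape is why isolate-based runtimes can be so portable: there is no server object, listener, or Node-specific API involved.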

    • @NotOats_
      @NotOats_ 1 year ago

      When I was watching this I kept thinking "this sounds exactly like CF workers", good to know it basically is

    • @DisFunctor
      @DisFunctor 1 year ago

      that's good to know! Sorry if this is a stupid question, but doesn't that mean it's possible to achieve the same without going through Vercel at all?
      I've never used either, but it looks like if one were to use CloudFlare workers directly, you'd have more flexibility (don't need to be constrained to using node, for instance), which I'm sure has its trade-offs since I'm guessing Vercel provides some conveniences in case you do want to use their service instead.

    • @NotOats_
      @NotOats_ 1 year ago

      @@DisFunctor I've never used Vercel but yeah you can achieve the same result using CF Workers directly. I am sure there's some sort of service/features that Vercel adds to the equation though

  • @keyone415
    @keyone415 1 year ago +24

    Seems like the Vercel Edge Runtime is only for JS apps, so not for me :) An inter-continental round trip takes 150ms, so in most situations we don't even need multi-regional deployments. I'm fine with doing a little bit of server management and never having a "cold start", and having super fast DB queries... On AWS, if we pick things like Spot instances we can save a lot on ECS costs; when done right you can save like 90% off the on-demand price. Serverless is fine for async use cases, but I don't think it's a good choice for web servers where latency is critical.

    • @paologaleotti8478
      @paologaleotti8478 1 year ago +1

      I agree, but sometimes it can be a really good choice if you don't mind cold starts. Very low prices and you get auto scaling out of the box

    • @t3dotgg
      @t3dotgg  1 year ago +7

      Edge runtimes can only run JS or WASM generally, and you'll only see response times this good in JS. Crazy enough, having a fully virtualized engine for your language has benefits sometimes

    • @SkywalkerWroc
      @SkywalkerWroc 1 year ago +2

      Yep. That fear of cold starts is grossly overblown.

  • @matthew_whitfield
    @matthew_whitfield 1 year ago +18

    I use lambda running node/express for SSR and find that less than 1% of user requests end up being cold starts now that the site has a decent level of traffic. With a cold start, the TTFB on a good connection rises from around 200ms to around 400-450ms, which isn't bad enough to make me want to switch. Plus the JS code is 20mb+ uncompressed on the server side so I would have to do something with that.

    • @nsuid7499
      @nsuid7499 1 year ago +1

      curious what your setup looks like? would you mind sharing a little more detail? I'm assuming you're using some template engine to serve HTML from lambdas and routing reqs through AWS APIGateway.

    • @matthew_whitfield
      @matthew_whitfield 1 year ago +6

      @@nsuid7499 Yeah, a page request first goes to CloudFront (CDN); if it hasn't been cached it goes to API Gateway (HTTP API), which then calls the Lambda function to generate the HTML for the page.
      The site uses the Angular framework ("boo!"), so it's their rendering engine, but I imagine it would work just as well with any other template engine.
      The Lambda functions need to get an item from DynamoDB to produce the page (house price information site).
      The average execution time for a warm Lambda function is 50-75ms, of which I think the database query is 10-15ms. For users, 200ms seems to be the time it takes to get anything back, so I guess the rest is all network stuff like redirects, DNS, connection etc.
      Cold start on average seems to add around 200-250ms, and they are a very small percentage of total requests.

    • @wissens4644
      @wissens4644 1 year ago

      Dev Test

  • @jricardoprog
    @jricardoprog 1 year ago +4

    And my shared server that responds in less than 50ms with PHP!

  • @lennart5738
    @lennart5738 1 year ago +15

    Hey Theo. I don't know if you actually measured the difference in request time to the DB, but from what I know there is significant overhead to an HTTP request unless you are using Connection: keep-alive.
    Does the edge runtime support keep-alive connections?
    (P.S. I checked the FAQ first)

  • @JB-fh1bb
    @JB-fh1bb 1 year ago +5

    Edge location is basically only useful for static content. Making a brochure website for a big client? Edge location is a solid choice. Want anything with live data? Rethink it.

  • @TomasJansson
    @TomasJansson 1 year ago +5

    I like your visualization around 5:00 on the impact of moving the service further away from the db.

  • @oscarhagman8247
    @oscarhagman8247 1 year ago +23

    It's only been a month since your T3 tutorial and I feel like it's already getting outdated. Will you make an update video where you build it with all the new things you have been talking about since then?

    • @t3dotgg
      @t3dotgg  1 year ago +23

      Of course. Unlike my tutorial, none of this tech is ready for general use and recommendation. Hopefully soon…ish…

    • @oscarhagman8247
      @oscarhagman8247 1 year ago

      @@t3dotgg thanks! and yeah, whenever it's ready ofc

  • @ccccjjjjeeee
    @ccccjjjjeeee 1 year ago +5

    World confirmed to be a flat circle 💯

  • @brainsniffer
    @brainsniffer 1 year ago +41

    The funny thing is, I thought when serverless was released it had the same sort of pricing, and people were massively touting it because you could get millions of runs for pennies - am I misremembering that? I'd imagine, once there was uptake and people began using serverless, it became worth it for providers to begin charging for potential availability, tracking the minutes and charging that way. I would expect a similar shift for edge.

    • @rzr1191
      @rzr1191 1 year ago +10

      Don't let the terminology mislead you. It's really just "Web standard APIs inside v8 isolates"
      It's on the "edge" because providers like cloudflare can achieve incredibly high multi-tenancy on far fewer servers compared to VMs
      A 1-5MB bundle is much more reasonable to replicate to 100+ regions compared to 300MB to 1GB+ docker images
      It's objectively cheaper than "serverless" VMs. And much better suited to applications like frontend servers which don't really need much more than a subset of Web APIs

  • @jonnyso1
    @jonnyso1 1 year ago +9

    Doesn't like slow apps... writes everything with typescript.

  • @pappdomi5
    @pappdomi5 1 year ago +7

    Wouldn't read-only DB regions be a solution to the global edge problem? (PlanetScale can manage that.) Or can't I guarantee that my global edge function calls my closest read-only database?

    • @invinciblemode
      @invinciblemode 1 year ago +1

      You can guarantee that your edge fn calls the closest db.

    • @t3dotgg
      @t3dotgg  1 year ago +4

      You’re thinking too far ahead I wasn’t gonna talk about this for at least a month or two…

  • @francescociulla
    @francescociulla 1 year ago +4

    I like the strong opinions. Watching it now

  • @zeon137
    @zeon137 1 year ago +6

    I hope we can keep edging together for a long while

  • @MattThomson
    @MattThomson 1 year ago +10

    I don't know what makes everything change so much. You learn one thing and next month it's not the right way.

    • @o_glethorpe
      @o_glethorpe 1 year ago +3

      Everyone is pushing some kind of agenda. You should learn as much as possible and form your own opinions; don't just go with the flow, and never adopt an opinion just because some dude with fancy hair said so.

    • @vivarantx
      @vivarantx 1 year ago +2

      he's just selling his service...serverless is fine for 99% of the apps

  • @pedroalonsoms
    @pedroalonsoms 1 year ago +18

    This serverless stuff is getting way too overengineered.
    At this point just throw the whole app into a ec2 box with ubuntu and call it a day.
    If you ever reach page-load performance issues, start by trying to optimize in other places first (maybe start by removing huge JS frameworks).
    In the worst case, if you ever need to get a super-optimized backend, just hire prime to rewrite the entire thing in rust.
    I mean at this point we’re rearchitecting the way the entire internet works from scratch: building new db drivers, new cache systems, new ways to split up databases across many servers, new rate-limiters, etc.
    Just kidding Theo. Love your vids.

  • @k-c
    @k-c 1 year ago +10

    I still like good old PostgreSQL

  • @jannisbaalmann6870
    @jannisbaalmann6870 1 year ago +20

    Theo the type of guy who is so convincing in every single video that he makes me doubt my stack forever 😂

  • @js_madness
    @js_madness 1 year ago +3

    What's the tool Theo is using for drawing all the stuff?

  • @OldKing11100
    @OldKing11100 1 year ago +2

    My brain fried trying to figure out the subtle differences between edge and regional. I'm just happy to be able to throw my SvelteKit frontend up to Cloudflare Pages while specifying which routes can be prerendered. Then for anything more complicated I connect Cloudflare Pages to my Go Fiber API and DB backend through Cloudflare Tunnel. Edge functions are still spooky to me, even though I do use fetch().

  • @brefaccion
    @brefaccion 1 year ago +5

    Edge runtime feels like an opportunity for a CQRS revival to me. Serverless was already a good fit for it, but with those limitations specifically it almost reads like the intent of `exec` in pure-FP variations of CQRS.
    Meaning, you aren't meant to do IO in it; you just take a command and a state (usually local to the system) and emit events out of it instead of running long-living requests (which can be triggered elsewhere, reacting to events, similar to redux-sagas), and you just need to keep a minimal internal state, only what is necessary for your `exec` to make decisions on what events to emit. Plus events are meant to be JSON.stringify-able
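
A tiny sketch of that shape (all names invented for illustration): `exec` stays pure — state and command in, JSON-serializable events out — and any IO happens elsewhere in reaction to the events:

```javascript
// exec: pure decision function -- no IO, no clock, no randomness.
function exec(state, command) {
  switch (command.type) {
    case 'Withdraw':
      return command.amount > state.balance
        ? [{ type: 'WithdrawalRejected', reason: 'insufficient funds' }]
        : [{ type: 'MoneyWithdrawn', amount: command.amount }];
    default:
      return [];
  }
}

// apply: pure fold that evolves state from the emitted events.
function apply(state, event) {
  return event.type === 'MoneyWithdrawn'
    ? { ...state, balance: state.balance - event.amount }
    : state;
}

const state = { balance: 100 };
const events = exec(state, { type: 'Withdraw', amount: 30 });
console.log(events[0].type, events.reduce(apply, state).balance); // MoneyWithdrawn 70
```

Because both functions are pure and the events are plain objects, this kind of handler fits a minimal runtime with no native IO almost by construction.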

  • @rickdg
    @rickdg 1 year ago +3

    Respect to the editor going back to screenshot that yes from Jacob 😂

  • @mochalatte3547
    @mochalatte3547 1 year ago +2

    What presentation tool do you use to explain all this serverless stuff? The quick drawing is what kept my attention when you were pointing out the subtle differences between serverless vs edge. Kudos.

  • @tcurdt
    @tcurdt 1 year ago +13

    I hear operations people quietly sigh on this video. Leaving some things aside: The main problem is distribution of data services - and also the privacy aspects of that.

  • @BosonCollider
    @BosonCollider 1 year ago +3

    Stupid question: WHY are edge functions free of cold start delays? I thought edge functions were just serverless functions running on the edge? I feel like a bunch of people came up with a new set of buzzwords without rigorously defining them.

    • @Evan-dh5oq
      @Evan-dh5oq 1 year ago +1

      The edge runtime is not the same as edge computing.

  • @Saurabhkumar-bn3dl
    @Saurabhkumar-bn3dl 1 year ago +10

    But what about the ridiculous bandwidth cost of Vercel? Is that manageable?

    • @t3dotgg
      @t3dotgg  1 year ago +3

      Most of the "bandwidth" will be cached after the first requests. Don't serve massive assets off an expensive CDN and you'll be fine.

    • @JoRyGu
      @JoRyGu 1 year ago +14

      This is the elephant in the room for all of these services he's using. They all have pretty generous free tiers but balloon in cost for any serious applications. Vercel's pricing is rough enough for my company to be exploring building out infrastructure on AWS just to get Next with all the features it offers without Vercel.

    • @Saurabhkumar-bn3dl
      @Saurabhkumar-bn3dl 1 year ago +1

      @@JoRyGu Tbh Vercel does make it easier to use cutting-edge things like he is talking about a lot. But this is something which is holding me back too. The ballooning cost is why I opted for self-hosted Supabase instead of PlanetScale. I haven't used Amplify yet, but do you know how much worse it is than Vercel? Or does it support all the latest features of Next 13?

    • @t3dotgg
      @t3dotgg  1 year ago +8

      @@JoRyGu you should watch my video about AWS cost if you sincerely believe this.
      The bandwidth costs are in line with any other CDN of similar quality. Be smart about where you put your big assets and you’ll be fine. Your Vercel services should return HTML and JSON, if you’re moving terabytes of either you’re probably making a mistake

    • @mmzzzmeemee
      @mmzzzmeemee 1 year ago +7

      Tbh, my biggest gripe with Vercel rn is that their free tier does not allow commercial usage.
      Almost any other provider does (e.g. Netlify, formerly Heroku, Cloudflare Pages, etc.).
      It's such a shame cause Vercel is still probably the most convenient, but their free tier is just a deal breaker. Idk why they did that but to me it's simply a bad business decision.

  • @derschutz4737
    @derschutz4737 1 year ago +7

    You can use WASM as your compile target to run Rust/C++ on edge

    • @rzr1191
      @rzr1191 1 year ago +1

      Yes, but you lose out on the hyper-fast cold starts promised by the edge runtime.
      Using Web APIs you use the included runtime, whereas using another language could mean you're shipping libs that end up resulting in a larger payload with slower start time.
      The tradeoff does make sense for heavier use cases like image processing

    • @derschutz4737
      @derschutz4737 1 year ago +1

      @@rzr1191 Do you have data to back that up? rust WASM has incredibly low cold start times/binary sizes, on lambda rust can be almost an order of magnitude faster to start and 3x less size than an equivalent node function. Why would edge suddenly mean that WASM has worse performance? Also, my main benefit is the fact that I don't have to use typescript (absolute garbage language)
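
Part of this is easy to sanity-check without any edge provider, since plain V8/Node instantiates the same bytes an isolate would. A hand-assembled 41-byte module exporting an i32 `add` shows the floor on WASM size is tiny (a real Rust build with its runtime and libs will of course be larger):

```javascript
// The smallest useful WebAssembly module: exports add(a, b) for two i32s.
// Hand-assembled bytes; an edge runtime instantiates these the same way.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

Whether a full Rust/C++ binary keeps the cold-start advantage is exactly the empirical question being debated above; this only shows the instantiation path itself is cheap and standard.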

  • @michidk
    @michidk 1 year ago +2

    When talking about edge locations, you assumed that there always has to be a cold start or some slow database connection build-up; there are solutions where this is not the case (warm standbys). Additionally, you can also send HTTP requests to a database API from edge locations; this is not limited to edge runtimes.
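
The "HTTP request to a database API" pattern mentioned here needs nothing beyond `fetch`-style primitives, which is why it works from any runtime. A sketch where the `/query` endpoint and JSON body shape are invented for illustration (real drivers such as PlanetScale's define their own protocol):

```javascript
// Build a driver-style query payload using only portable primitives.
// The endpoint path and body shape are hypothetical.
function buildQuery(sql, params = []) {
  return {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ sql, params }),
  };
}

const init = buildQuery('select * from users where id = ?', [42]);
console.log(init.method, JSON.parse(init.body).params[0]); // POST 42
// Sending it is just:
//   const rows = await (await fetch('https://db.example.com/query', init)).json();
```

The trade-off, as discussed elsewhere in the thread, is that each query pays HTTP overhead unless the connection is kept alive or the provider pools on its side.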

  • @user-ik7rp8qz5g
    @user-ik7rp8qz5g 11 months ago +1

    With all the troubles of "serverless", isn't it better to just use "servered"? Keep the process running on a server to avoid cold starts, have an infinitely faster connection to a database on the same server/cluster. And the request time from user to server is the same as regional edge

    • @Vim_Tim
      @Vim_Tim 10 months ago

      "Keep process running" tends to be expensive at any notable scale, especially in the cloud and especially if the app needs to be globally-distributed.

    • @istipb
      @istipb 9 months ago

      @@Vim_Tim serverless becomes more and more expensive as it scales

  • @carlosbueno6252
    @carlosbueno6252 1 year ago +1

    You should take a look at Fermyon.
    It's basically everything you want when you talk about edge runtime.
    It's basically WebAssembly on the server; it allows you to have no cold start and spin your infra up and down as needed.

  • @hebestreitfan6973
    @hebestreitfan6973 1 year ago +4

    At this point it feels like Theo keeps over-optimizing his app for content

    • @hebestreitfan6973
      @hebestreitfan6973 1 year ago +5

      He will eventually discover the benefit of just running a few containers on a VPS

    • @SkywalkerWroc
      @SkywalkerWroc 1 year ago

      Simplicity and maintainability is far more important than shaving off 100ms from the page load time.

  • @dontdoit6986
    @dontdoit6986 1 year ago +4

    Alright, how much is Vercel paying you?

  • @jajwjwiwo
    @jajwjwiwo 1 year ago +1

    Under the hood Vercel deploys to AWS Lambda. To deliver world class dynamic sites in production, Vercel has turned AWS Lambda into an edge-first compute layer

  • @nikolozichikhladze8638
    @nikolozichikhladze8638 1 year ago +1

    Can somebody link me a video where Theo talks about Firebase, or why he would never consider it, if such a video exists? Just curious about a Firebase Functions vs Vercel Edge comparison

  • @kemoboy
    @kemoboy 1 year ago +3

    And to think that for more than half of my and my clients' web apps/sites I just use a VPS and it works just fine, with loading times that are good. :( When did deploying stuff become so overcomplicated?

  • @studiowebselect
    @studiowebselect 1 year ago +1

    Vercel is using AWS, or am I wrong?
    Also, for the cold start time, it's because your code is too big. Sounds crazy, but we are in a time where you need to optimize the weight of the backend as we do with the frontend.
    For the DB connection, use DynamoDB as much as possible and don't use databases external to AWS. Serverless DBs from AWS are bad too.
    So: lightweight code, DynamoDB or a non-serverless AWS DB, and it will be fast without cold-start lag

  • @SeanCassiere
    @SeanCassiere 1 year ago +2

    Does anyone know of an edge compatible cuid generator?
    The paralleldrive one seems to use node:crypto which doesn't work in edge, and I don't believe Drizzle has a helper for it.

    • @t3dotgg
      @t3dotgg  1 year ago +3

      The paralleldrive one _should_ work but doesn't due to a weird bug in bundling for app router. Works in /pages for some reason 😅
      I've raised it with vercel and hope they have it handled soon. Suffering with uuid v4 for now 😅

  • @Stealthy5am
    @Stealthy5am 1 year ago +3

    So the main point is to reduce cold starts, and you're planning to achieve that by putting yourself in a very constrained and limited environment? I'd recommend spending some time optimizing your code to reduce the cold start duration instead of rewriting it all. You mentioned cold starts taking seconds; I'd say that's not really expected... You must have some low-hanging fruit. You can easily achieve under a second for sure. I'd say the low hundreds of milliseconds are not a big stretch either.
    Basically, the only thing edge has is the cold starts, but you don't really show anything about the "good" cold starts of edge. Instead you're talking about RTT of requests, but that's exactly the same for regional edge vs serverless.
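
One of the classic low-hanging fruits, sketched generically: hoist expensive initialization to module scope so only the cold start pays for it, and warm invocations reuse it. `createClient` here is a stand-in for whatever heavy setup (driver load, TLS handshake, SDK init) the function actually does:

```javascript
// Module scope survives across warm invocations of the same instance,
// so only the first (cold) invocation pays the initialization cost.
let client;

function createClient() {
  // Stand-in for the expensive part: loading a driver, opening a connection...
  return { initializedAt: Date.now(), query: (sql) => sql.toUpperCase() };
}

function handler(event) {
  client ??= createClient(); // initialize once, then reuse
  return client.query(event.sql);
}

console.log(handler({ sql: 'select 1' })); // SELECT 1
console.log(handler({ sql: 'select 2' })); // SELECT 2 (same client, no re-init)
```

Combined with trimming the bundle (tree-shaking, dropping heavy dependencies), this is usually how multi-second serverless cold starts get down into the low hundreds of milliseconds.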

    • @Vim_Tim
      @Vim_Tim 10 months ago

      You cannot completely "optimize away" cold starts. Even zero runtime languages like C++ and Rust experience cold starts in serverless. The only other option is keeping some serverless instances warm, but this is far more expensive and ruins the consumption-based pricing advantages of serverless.

  • @WhhhhhhjuuuuuH
    @WhhhhhhjuuuuuH Год назад +5

    Great video. I wish you had included a bit on how this helps your customers.

    • @rewrose2838
      @rewrose2838 Год назад

      Around 21:30

    • @pokefreak2112
      @pokefreak2112 Год назад +10

      @@rewrose2838 That's it? If cold starts and Prisma are the bottlenecks, it sounds to me like optimizing Prisma and hosting on a regular server are the most obvious solutions. People really just jumped on the serverless bandwagon for the *hypothetical* benefit of scalability, only to suffer the very real downside of cold starts, because they don't actually have users

  • @domw2391
    @domw2391 Год назад +6

    Prisma just released a blog post about how they sped up init on Lambda functions

    • @t3dotgg
      @t3dotgg  Год назад +7

      You should check out my last video lol

    • @domw2391
      @domw2391 Год назад

      @@t3dotgg will do!

  • @JJSmalls
    @JJSmalls Год назад +3

    "I hate serverless" - yes, someone I can agree with
    "I'm just going to move everything to the edge" - Oh great, a new tech stack with a gimmick marketing name.

  • @spotgaming4668
    @spotgaming4668 Год назад +2

    Yeah, but here's the thing: we're not always going to use PlanetScale or a database that provides an API to interact with it, and that's a big issue. In those cases, for every request we have to connect to the DB, fire the query, and close the connection.

    • @t3dotgg
      @t3dotgg  Год назад +1

      Sounds like lots of room to innovate in the industry 🥲

    • @leos5246
      @leos5246 Год назад +1

      Postgres with PgBouncer: you get a direct connection plus a pooling manager, and you don't need to think about opening/closing sessions (it claims millisecond overhead, but I too often see it add 100ms on top of a request).
      Keep any RDBMS on a single server for as long as it can handle all your requests, divide your relational and non-relational data between NoSQL databases and the RDBMS, and move analytical queries out of your RDBMS into a warehouse.
      It's all about architecture and your needs; there is no single solution for everyone
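For readers unfamiliar with the PgBouncer setup described above, here is a minimal sketch of transaction-level pooling. The keys are real pgbouncer.ini settings, but the host, database name, and sizes are illustrative guesses, not a recommended production config.

```ini
; Hypothetical pgbouncer.ini sketch: many short-lived clients share few real connections
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction   ; connection returns to the pool after each transaction
max_client_conn = 1000    ; lots of short-lived clients (e.g. serverless invocations)
default_pool_size = 20    ; actual Postgres connections kept open per database
```

Transaction pooling is what makes open/close-per-request patterns cheap: clients connect to port 6432 instead of Postgres directly.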

    • @gdsftvdr
      @gdsftvdr Год назад

      ​@Leo S yep, I agree. ECS/Fargate feels like a winner for scaling with RDS databases. DX feels a bit better too. Long running tasks etc.

  • @matthew1106
    @matthew1106 Год назад

    This is a fantastic video. Really wonderful job explaining the trade offs.

  • @pharmokan
    @pharmokan Год назад +5

    tfw you start out as a bleeding-edge JS YT channel but then evolve into a devops channel, because that's where all your bottlenecks occur

  • @oleksandrploskovytskyy1520
    @oleksandrploskovytskyy1520 Год назад +1

    Yeah it’s awesome!
    Btw, is it already possible to use just the "edge" keyword for deployments instead of "experimental-edge", or are they different runtimes, where the experimental one is for new features like regional edge?

  • @badazzmorris
    @badazzmorris Год назад +2

    Poor, sad Tux Penguin 😢🐧

  • @almeidaofthejoel
    @almeidaofthejoel Год назад

    Thank you for showing that you could opt into edge on pages as well as API routes, I was trying to find out if you could for like an hour yesterday and couldn't find that anywhere. Everything I see on Vercel is just showing API routes

  • @BlackThreadDev
    @BlackThreadDev 3 месяца назад

    Great vid. What diagram software is that?

  • @_____case
    @_____case Год назад

    "I want my ideal stack to be serverless." - Ryan Dahl

  • @vinu_kaliyar_
    @vinu_kaliyar_ Год назад +1

    Which software you used to draw the flow?

  • @theshy6717
    @theshy6717 Год назад +1

    Do you have a video of you building an app on the edge with planetscale? I find it hard to use planetscale without prisma and would love to see how you do it

  • @Jason-eo7xo
    @Jason-eo7xo Год назад

    Simple solution: just get rid of all databases. No databases = no database requests. BOOM!

    • @paulchatel2215
      @paulchatel2215 Год назад +1

      Yeah probably one of his next videos : "I'm done with dynamic pages"

  • @thisbridgehascables
    @thisbridgehascables 6 месяцев назад

    The hilarious part about ‘serverless’ is that there still is a server, or servers, behind it.
    ‘Serverless’ doesn’t mean no server... a computer can be a server. Truly going serverless would be like writing code on paper and never having it get compiled or presented to you in physical form..

  • @DontEatFibre
    @DontEatFibre Год назад

    What is that drafting/presentation software at 3:39?
    My projects are usually quite simple so I have been using Cloudflare workers and KV Storage to host static website exports and to build basic APIs.
    Now they have made their D1 SQL databases for workers open for testing.

  • @cherubin7th
    @cherubin7th Год назад +1

    Idk, I deployed my own cloud on-premise, and I find putting stuff directly on Linux less cumbersome and less complicated than all that serverless-or-edge-or-whatever stuff.

  • @DiegoBM
    @DiegoBM Год назад +1

    This was incredibly informative! Where can we read more about practical coding for the edge runtime (including handling non-edge-ready dependencies and dealing with data)?

  • @TheRealAfroRick
    @TheRealAfroRick Год назад +1

    What language are you running in where you are getting cold starts in SECONDS?!?!?

    • @Vim_Tim
      @Vim_Tim Год назад

      It's not about the language, it's the runtime architecture. Check out the Cloudflare Workers documentation about the V8 Isolates - Cloudflare keeps the V8 runtime alive while creating & destroying the individual processes.
      "V8 orchestrates isolates: lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. You could even consider an isolate a sandbox for your function to run in."
      "A single runtime can run hundreds or thousands of isolates, seamlessly switching between them. Each isolate’s memory is completely isolated, so each piece of code is protected from other untrusted or user-written code on the runtime. Isolates are also designed to start very quickly. Instead of creating a virtual machine for each function, an isolate is created within an existing environment. This model eliminates the cold starts of the virtual machine model."

  • @Ratstail91
    @Ratstail91 Год назад +2

    I have one server, in Sydney, Australia. It's good enough for now, and I've built my whole site's system to use it. Who knows what will happen when I officially release it.
    That price though...
    Edit: What the hell is Vercel? Am I so much of a dinosaur that I can't understand it?

    • @zuma206
      @zuma206 Год назад

      I believe it started out mostly as a frontend hosting solution, then slowly started to support more backend features. Now they offer simple ways to host fullstack Jamstack apps like Next, Remix, SvelteKit, etc.

    • @TigreXspalterLP
      @TigreXspalterLP Год назад +2

      vercel is the sponsor of the channel ;)

    • @abdirahmann
      @abdirahmann Год назад +5

      am dead at your edit 🤣🤣🤣🤣, a dinosour!!, bruh!!💀

  • @L33tRose
    @L33tRose Год назад +3

    Damn the edge looks really impressive. I might take a deep dive into v8 isolates to see how they work and what other tech is using them.
    Would be really cool to see a follow up video on moving ping to the edge and seeing a before and after in performance.

  • @NateLevin
    @NateLevin Год назад +1

    16:30 This is my only issue with this video - if the user has slow internet, then global edge will be much, much faster, no? Assuming that the request times are equal assumes that the user has the same internet speed as the server.
    Anyways, you are absolutely right when you consider lots of serial requests.

    • @t3dotgg
      @t3dotgg  Год назад +1

      Why would the speed of their internet affect the latency of a call? Bandwidth and latency are entirely different and unrelated lol
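The serial-request trade-off this thread is arguing about can be put into rough numbers. Every figure below is invented for illustration (not measured from the video): the point is only that when one request fans out into several serial DB round trips, function-to-DB distance dominates user-to-function distance.

```javascript
// Illustrative latency math: one user-facing request whose handler makes
// several serial DB queries. All numbers are made up.
const userToFn = { globalEdge: 10, regionalEdge: 80 }; // ms one-way, user -> function
const fnToDb   = { globalEdge: 80, regionalEdge: 1 };  // ms one-way, function -> DB
const dbQueries = 4; // serial DB round trips per request

function totalLatency(kind) {
  // one user<->function round trip + N serial function<->DB round trips
  return 2 * userToFn[kind] + dbQueries * 2 * fnToDb[kind];
}

console.log(totalLatency("globalEdge"));   // 660 -- DB distance dominates
console.log(totalLatency("regionalEdge")); // 168 -- sitting next to the DB wins
```

With zero or one DB query the global edge wins; the crossover comes quickly as serial queries stack up.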

  • @Alakeks
    @Alakeks 4 месяца назад

    What is this software for drawing?

  • @TheNivk1994
    @TheNivk1994 Год назад +1

    What is the tool he is using to paint and write?

  • @hackmedia7755
    @hackmedia7755 8 дней назад

    your server could have caches to speed up retrieval, especially if a lot of users are requesting the same thing.

  • @MrWandalen
    @MrWandalen 11 месяцев назад

    Hello! Thanks for video! Wonderful diagrams! Which software did you use to draw it?

  • @mountainash
    @mountainash Год назад

    Edge as a location IS IMPORTANT! Just like with a CDN, location edge out-does regional edge when the data is cacheable (e.g. the 10 latest blog posts, or the newest rentals in your city). Edge locations also matter when you can use a distributed DB like the SQLite offerings many edge providers now have.
    This video already seems outdated. (Old-school thinking)

  • @user-xl5kd6il6c
    @user-xl5kd6il6c Год назад +7

    >You will own no servers, and you will be happy
    Yeah, no thanks

  • @JacobKrajewski
    @JacobKrajewski Год назад

    Edge is also used to talk about edge devices: for instance, whether a computer vision model runs online somewhere or on the physical device itself (the "edge device", hence "edge processing").

  • @Leofmoura87
    @Leofmoura87 Год назад

    Can anyone help me understand why we don't have cold starts on the edge? I missed that.
    Thanks!

  • @sbmb9613
    @sbmb9613 10 месяцев назад

    So no ffmpeg @ edge... Shouldn't V8 isolates support Wasm, though? Technically they'd be able to execute Rust compiled to Wasm?

  • @gleicon
    @gleicon Год назад

    I'm kind of lost on the difference between Lambda cold starts and how edge functions get rid of them...

  • @jaroslavhuss7813
    @jaroslavhuss7813 Год назад

    Flame me all you want, but I'm running my projects on a Raspberry Pi cluster -> 4x Raspberry Pi 4B 8GB RAM hidden behind the firewall, 1Gb Ethernet, all of them on SSDs (not SD cards), and... it's been 1.5 years and they're doing just fine :D

  • @AlvarLagerlof
    @AlvarLagerlof Год назад +2

    Edge is serverless

  • @chair547
    @chair547 6 месяцев назад

    Thanks for this great edging tutorial theo

  • @theoDSP
    @theoDSP Год назад +2

    Edge runtime is so last year. Why not run the server on the actual user's computer instead? Faster than ever.

  • @atridgedcosta4374
    @atridgedcosta4374 Год назад +2

    Use something like Turso and just move your db to the edge as well.

  • @benkogan1579
    @benkogan1579 Год назад +7

    This concept of regional vs. global edge runtimes is interesting. It makes sense that you want your code running close to the database if you are making many requests. I feel like the major difference between regional and serverless is the cold starts. If serverless somehow solves this issue, that would seem like the best of all worlds, right? No cold start, close to DB, everything is compatible. I guess we gotta wait and see.

    • @t3dotgg
      @t3dotgg  Год назад +7

      If cold starts are solved, I'll probably delete this video

  • @GccYak-eg3wk
    @GccYak-eg3wk Год назад

    Vercel edge functions charge for execution units of 50ms. Isn't that the same as lambda charging for CPU time?

  • @Khari99
    @Khari99 Год назад +5

    Wow this was crazy informative. I didn’t think edge was much of a big deal and I didn’t understand how much it differed from serverless so I never bothered with it. But now that you broke it down, I’m definitely going to try it out! The amount of value I get out of your channel sometimes is absurd. Thanks Theo. Keep up the good work

  • @NeoChromer
    @NeoChromer Год назад +3

    I really don't see a benefit to this if you don't have 100k+ users in different regions... am I wrong about that?

  • @TheXiguazhi
    @TheXiguazhi Год назад +1

    Idk why I cringe every time I hear someone say they're running on the edge to avoid spinning up a Linux box. Someone somewhere spun up a Linux box for you to run your edge functions on... and you're paying for it. It's why Lambda costs a lot, and I'm sure once the initial customer base for edge applications is built up, they'll increase their prices and stop being loss leaders.

  • @alichamas63
    @alichamas63 Год назад +2

    It's like he's young and learning on the fly or something?

  • @GeekOverdose
    @GeekOverdose Год назад +4

    I'd like to emphasize: this is probably only a solution for low-complexity apps. If you have a massive system with multiple user types, serving multiple different UIs, it might not be the best approach

    • @leos5246
      @leos5246 Год назад +2

      I think it depends on what you consider a complex app; 90% of the apps on the internet and in corporations can be handled by one server with the database on it

  • @ReedoTV
    @ReedoTV 7 месяцев назад

    One day apps will become so close to us they run locally

  • @reinoob
    @reinoob Год назад +1

    what do you mean kubernetes? just use docker-compose and nginx to proxy things around XD

  • @pkoch
    @pkoch Год назад

    Theo has an Arch Linux mustache in the thumbnail. Can't unsee.

  • @RemotHuman
    @RemotHuman Год назад +1

    What about the traditional way of having a regional server always running (so no cold starts, and Prisma isn't spinning up for the first time, right?)

    • @knm080xg12r6j991jhgt
      @knm080xg12r6j991jhgt Год назад +1

      I think his point is: then you have to maintain that infrastructure and hire people to do it. Developer/devops time is expensive.

    • @RemotHuman
      @RemotHuman Год назад

      @@knm080xg12r6j991jhgt You could still deploy to a managed platform, but I guess you're right, since it wouldn't scale horizontally automatically the way spinning up more serverless functions or serverless edge runtime functions would. (Is that the right reason? It's the only one I can see right now.)
      I wonder if you could do a hybrid that still doesn't require devops: instead of lambdas, a server with no state whose duplicate instances can be spun up and down automatically like lambdas, but less often, and where multiple functions could share a database connection. And you could use existing tooling for languages other than JS. I don't have much back-end experience, so maybe I'm missing something about how servers work.

  • @nickscott9832
    @nickscott9832 Год назад +4

    Would it be a big win to have auth built into the database? E.g. request data from a table, and the DB checks permissions and returns the data in a single request/response if the user has permission (and says no otherwise), rather than sending multiple requests to check permissions before requesting the data.
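Postgres row-level security is one existing take on this idea: the permission check lives in the database, so a single query both authorizes and returns data. This is a hedged sketch with hypothetical table, column, and setting names (`documents`, `owner_id`, `app.current_user_id`), though the RLS statements themselves are standard Postgres syntax.

```sql
-- Sketch: push the permission check into the database with row-level security.
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY documents_owner_only ON documents
  USING (owner_id = current_setting('app.current_user_id')::int);

-- The app sets the user once per transaction, then queries normally:
-- SET LOCAL app.current_user_id = '42';
-- SELECT * FROM documents;  -- only rows this user may see come back
```

One round trip does both the check and the fetch, which is exactly the saving the comment is asking about.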