- 59 videos
- 38,371 views
AJ Stuyvenberg
United States
Joined Mar 2, 2014
Is Hono the holy grail of web frameworks?
Recorded live: www.twitch.tv/aj_stuyvenberg
The post: blog.cloudflare.com/the-story-of-web-framework-hono-from-the-creator-of-hono/
Hono: hono.dev
Get at me: astuyve
Views: 2,515
Videos
The AWS Lambda Doom Loop
456 views · 14 days ago
My post: aaronstuyvenberg.com/posts/lambda-timeout-doom-loop
How Lambda warms your functions: aaronstuyvenberg.com/posts/understanding-proactive-initialization
The docs: docs.aws.amazon.com/lambda/latest/dg/troubleshooting-invocation.html#troubleshooting-timeouts
Avoid these AWS billing mistakes
240 views · 21 days ago
We talked about it on stream but it didn't make the edit. If you _did_ accidentally leave something on or enable something you didn't intend to - always always always open a support ticket. AWS is good about forgiving accidental overages. Recorded live: www.twitch.tv/aj_stuyvenberg Follow for updates: astuyve Thanks to Corey Quinn's blog at the duckbill group: www.duckbillgroup.com/...
Busting the serverless myth
1.3K views · 28 days ago
Recorded live: www.twitch.tv/aj_stuyvenberg Follow for updates: astuyve Use this project, ship your code anywhere you want: github.com/awslabs/aws-lambda-web-adapter
How Netflix solved container CPU contention
2.8K views · a month ago
Netflix packs a TON of containers onto individual hosts, but had trouble deciphering if an app was slow or the underlying host was just swamped. Here's how they solve this problem with eBPF. Recorded live: www.twitch.tv/aj_stuyvenberg Post link: netflixtechblog.com/noisy-neighbor-detection-with-ebpf-64b1f4b3bbdd
How Lambda made container cold starts 15x faster
1.1K views · 9 months ago
Using SQS and Lambda the right way
1.6K views · 11 months ago
STOP using Lambda layers (use this instead)
1.7K views · 11 months ago
No Trespassing - Legal Antenna BASE jumping
568 views · 4 years ago
Losing and Finding my GoPro while BASE jumping
1.2K views · 8 years ago
Solo B.A.S.E Jump from the High Nose, Lauterbrunnen, Switzerland
626 views · 9 years ago
Great vid, subbed!
Hi, a newbie here. Why is the bundle size of a server-side framework important?
Hey good question! It's because the size of your bundle directly impacts your cold start latency for serverless functions! I gave a whole talk on this at AWS re:Invent: ruclips.net/video/2EDNcPvR45w/видео.html
Less code (assuming it's still doing similar work) will generally run faster than more code, as it can take better advantage of the CPU cache etc.; "locality of reference" is a term to search for further reading on the subject. JS is an interpreted language, so less code to interpret also means faster startup times, which is especially important on hosting platforms like Lambda that can scale to zero.
nice! I've been using hono for our public facing api for about a year now - it's fantastic
Yeah it's awesome!
I have been deciding between Encore and Hono, and it looks like I'm going with Hono, even though I find Encore's take on services impressive; deployment would be expensive. 😅 Thanks for sharing that article and talking through it.
Sure thing, glad you enjoyed it!
Thanks for diving in on this. I actually worked at a company that had thousands of Lambdas using the Serverless Framework... total mess, not to mention the Serverless Framework is slow in general. I've been thinking about that setup, and Hono definitely has the right ideas. The bundle size was massive, they all used... I also give massive props to the creator for using Zod, which has become my go-to in TS; it's simply fantastic.
Thanks! Yeah, Hono has really hit on something special here. I think it's doing a lot of the things we initially sought to do on the Serverless Framework, which fell off after focus shifted to revenue challenges.
Hey AJ, do you do career mentoring?
Hey! Not really, but I'm happy to answer any specific questions. You can DM me on Twitter.
Late but here!
Thanks Nacho!!
Great video. That adapter could be really useful.
Thanks Ryan!!
This is really helpful, thank you for the great explainer. This is sort of what happened with the CrowdStrike error, when it shut down everything? I'm doing a report right now for my cybersecurity class. Is there any way to run that code on a virtual server, let's say VMware on Kali Linux? Thank you in advance.
No problem! You can run it on a regular VM but you'll need to set up the AWS Lambda Runtime Emulator as well
Mistake 13: Not using the AWS Pricing Calculator. It's not easy, but it forces you to think about your architecture and data throughput. More of a personal preference: Start building with simple services. Sometimes all you need is S3, an EC2 t4g.nano and Let's Encrypt.
awesome
Great advice for people new to AWS! Hopefully some of them will not make the same mistake most of us did before.
delusional
Great tip! Thank you, AJ.
The more I see these open source projects, the more I realize people don't know sht about Lambdas or how the cloud works. If you add Express to a Lambda, the whooole event comes to the Lambda, and the Lambda has to spin up with all of Express's code and do the routing itself!!!!!! There is something called API Gateway, where the routing happens. So by using your solution you are paying more for compute power for nothing and making your Lambdas slower. Funnily enough, you have the same concept of API Gateway on every cloud provider. Plus, you are using an example with Fargate... the most expensive of all the on-demand services. I understand your background, but this tutorial makes people spend a lot of money plus a lot of resources. Not all open source projects are worth it. Easier than this is using Middy and having a wrapper to detect which provider you are in.
Your comment is entirely indecipherable so I won't bother, but if you think I'm confused about how lambda functions work, you may want to look me up.
Why would you use api gateway instead of raw lambda?
@@maratmkhitaryan9723 Oh tons of reasons, I wrote a whole post about the difference between mono-lambda APIs versus single function (routing in APIGW) APIs: aaronstuyvenberg.com/posts/monolambda-vs-individual-function-api
Great video AJ! There is so much misinformation out there.
Thanks so much!!
But what about my applications data AJ, WHAT ABOUT THE DATA!! 😂 In all seriousness, great vid. Very much needed.
haha yeah your database can absolutely lock you in, no question about it
The lock-in comes from the services used and not so much from the handler deployments. And this is also no protection against huge bills.
@@tcurdt that's my point, if the service is expensive you can deploy the same application somewhere else cheaper with no code changes.
@@astuyve My point is: dependencies on services is what create the real lock-in.
@@tcurdt I'm not sure you're making a cogent point. I've just demonstrated how you can have no dependency on any one service. You can build an application and deploy it to Lambda, Fargate, Coolify, k8s on oracle - literally anywhere. If the service charges too much, just leave. The real lock in is at the database (if you store everything in say - dynamoDB)
@@astuyve You can't run the Lambda without storing state somewhere. Or if you have something more sophisticated, you will be tempted to use some of the AWS managed products, especially ones with proprietary protocols. RDS and S3 are examples of protocols you can use in nearly any other cloud, but there are also many other AWS products with proprietary protocols, which will not allow your app to move to another cloud.
@@maratmkhitaryan9723 I think that's mostly true. There's a higher bar for interop with something like MongoDB or Spanner and DynamoDB (for example). But a ton of people are still simply using SQL, and now with egress fees waived from AWS if you're moving to another cloud, you've never been less locked in.
Thank you for calling out the "I host on one VPS" idiocy. It's not that hard to do high availability from the start, and it really shows that the people spouting that have never hosted a multi-million dollar app. There is simply no excuse for not running a setup like this and easily scaling to multiple hosts; ECS and Lambda make it too easy. Servers only ever crash at 3am in <your timezone> or at <peak traffic time>; there is no in-between for some reason.
@@bear_jaws exactly
Great video! Your comment at the end "read the docs" isn't fair though, because the docs recommend local emulation of lambda which is the total opposite of the compute agnostic direction you're recommending. In my company we don't abstract at the level of the container, we abstract at the level of the express server and just create different wrappers for lambda / local / docker. The downside of using custom containers in lambda is that you lose the highly tuned config of AWS's standard containers and it won't perform as well under load and may have worse cold starts.
Hey good questions but not quite correct. Firstly, the Lambda Web Adapter doesn't require you to use a container, I just did because that makes it easier to test it elsewhere. You can ship a zip function this way. However there's no downside to using a container in Lambda. You can use the exact same base image that AWS uses for Lambda, they publish it even before it makes it to the Lambda runtime itself. I'm not sure what you meant by performing well under load but based on the evidence I've collected that is not the case, the container is irrelevant because it's not actually running in Lambda (it's simply a packaging mechanism). Finally, containers have much faster cold starts now - in many cases even faster than zip functions. I wrote a long blog post about it, which you may have seen: aaronstuyvenberg.com/posts/containers-on-lambda
@@astuyve great blog post, and glad they've improved cold starts. The issue I had under load was with unstable connection pooling via aws-sdk following hibernation when trying to invoke downstream services in any significant quantity. I just had noticeably higher request latency (I was invoking another function). I don't know if this was due to the container I was using, or due to the bundled aws-sdk that's optimised for lambda, or both. Deployment speed was an issue too from memory, what was it like when you tested in Jan?
@@origanami Again the container part is simply a deploy time packaging mechanism. Your code is fully extracted into s3 after you deploy a container to Lambda (yeah, that part is slower than zip, but it's up to 10gb so the increased size is fair IMHO). We run a few thousand functions continuously to verify our implementations across runtimes and at a quick glance I haven't found any discrepancy between container-based functions and zip-based functions when it comes to performing a direct lambda-to-lambda invocation. I'm not sure what the cause of the issue you saw is, but I'm not able to replicate it myself.
What about the impact on cold start? I expect that would have a big impact on that front.
@@anthonytrad less than you'd think! Container images have come a long way in Lambda, I did another video about how they made them so fast ruclips.net/video/qAYY9df2hVQ/видео.htmlsi=oPfCPAQzmxceNJ3I The web adapter does introduce a bit of overhead but it's a pretty tiny binary overall
🙌🙌🙌🙌 awesome demo
Thanks!!!
great video, thank you!
Thanks!!
really great video, subscribed.
@@banafish thanks so much!
Great stuff. eBPF is really powerful!
Oh, and sched_clock() returns the number of nanoseconds since the clock started running (which happens when the system boots) as an unsigned 64-bit int. I can go DEEP into the oddity of how this works on multicore systems, and the fact that some CPU architectures support it cleanly and natively while on others the kernel has to do a load of work in software to provide this facility (and if it's being done in "software" it won't usually have anywhere close to nanosecond precision in reality).
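For a rough feel from userspace: Python's time.monotonic_ns() wraps a monotonic clock that, on Linux, is also a nanosecond counter that starts around boot and never goes backwards. It is not sched_clock() itself (that's kernel-internal and per-architecture), so treat this only as an analogy:

```python
# Userspace analogy for a boot-based monotonic nanosecond counter.
# CLOCK_MONOTONIC is not sched_clock(), but it behaves similarly from
# the caller's perspective: ever-increasing nanoseconds, unaffected by
# wall-clock changes.
import time

t0 = time.monotonic_ns()
time.sleep(0.01)            # do some "work"
t1 = time.monotonic_ns()

elapsed_ns = t1 - t0        # at least the ~10 ms we slept
print(elapsed_ns)
```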
The fact we are doing this insane stuff with something that descends (through of course a lot of generations) from putting firewall rules into BSD kernels is ... wild.
absolutely insane - AND it still makes for a pretty good firewall toolkit too
They went so far optimizing the engine for packet filtering that they discovered the virtual machine for the DSL could be used for any other high performance filtering in-kernel 😂
Watching this movie while working from home with my daughter sleeping on me (in a sling like yours) - but this was way too complex for me!
Nice review of that article. Congratulations on the baby! 🎉 You did all this while rocking your child in a sling - extra dope!
@additionaddict5524 please delete your comment so I can comment "first" on this video, thanks in advance
ok
you didn't put first :(
veery clever
Very useful vid, thanks. Would be wonderful if you shared the cost of this experiment as a bonus for watching till the end ;)
@@HeRvAsH93 zero dollars and zero cents, all of this is well within the AWS free tier limits
You're amazing, man. I'd like to see a tutorial on how to create Lambda functions from scratch.
Thanks for the kind words! I'm happy to do that, we create a ton of functions from scratch all the time
please kill the background music in the future. hard to hear you clearly. thanks!
Thanks! I'll try to fix the mix. I always ask chat when I start how the audio is, but will adjust further.
Interesting! AJ, what's the impact of this on costs?
404'th subscriber
Yoooo thank you!
Great stream. Thank you 😊
When the stream quit I thought it was the cameraman falling off the stand.
LMAO
Also, I'd love to see a graph or some type of visualization of what percentage, and specifically which parts, of my code are already cached. I guess AWS would never do this.
I agree, but mostly for the sake of curiosity. I'm not sure what actionable insight we'd get from that information, but it would be interesting!
love it! Thanks for the explainer. We were wringing our hands wondering if we should switch from zip to container based after getting close to the 256mb limit and fist fighting with layers. In the end we did and looks like it was a good choice.
So glad you enjoyed it! Definitely avoid using Lambda Layers (with a few exceptions). I did another video on this
why u in jail?
I swing by proactively every now and then, as a preventative measure.
Hey @astuyve, thanks for this guide, it's very helpful. I want to take this problem further: have you ever observed an SQS consumer Lambda scale beyond 1,250? We have set the Lambda's reserved concurrency to 2000 and we have millions of messages in the SQS queue. Our Lambda is not erroring or anything, and we observe that it scales to 1,250 and then stops. I think this is some sort of limit from AWS. Have you ever observed something like that?
Hey! Great question. 1250 is the new maximum number of concurrent invocations consuming from one queue, you can find that limit in the new announcement blog for faster scaling (aws.amazon.com/blogs/compute/introducing-faster-polling-scale-up-for-aws-lambda-functions-configured-with-amazon-sqs/) as well as the docs. I'd suggest increasing the batch size so each invocation receives more messages, and then increasing the RAM and optimizing the function code to process multiple messages at a time. Good luck!
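A back-of-the-envelope sketch of why batch size matters under that cap; the 1,250 figure is the per-queue concurrency ceiling mentioned above, and the batch sizes are illustrative assumptions:

```python
# Rough upper bound on SQS messages being processed simultaneously:
# each concurrent invocation handles one batch at a time, so the
# concurrency cap times the batch size bounds in-flight messages.

def max_messages_in_flight(concurrency_limit: int, batch_size: int) -> int:
    """Upper bound on messages being processed at once."""
    return concurrency_limit * batch_size

# At the 1,250-per-queue cap, batch size 1 caps you at 1,250 messages
# in flight, while batch size 10 lifts that bound to 12,500.
print(max_messages_in_flight(1250, 1))   # 1250
print(max_messages_in_flight(1250, 10))  # 12500
```

This is why the reply above suggests raising the batch size (and the RAM to match): the concurrency ceiling is fixed per queue, so throughput has to come from doing more per invocation.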
You rock man, keep doing what you are doing! Thank you.
Thank you so much for the kind words, it means a lot and I sincerely appreciate it!
dang, shoulda read the docs. many such cases
crazy how often that seems to be the case!!
So much ROI for such a small change
Exactly! Super simple, instantly boosts performance by ~5x or more.
Very productive; these small but efficient tips can really save the day.
Thanks, glad you're enjoying it!
just curious: how would you compare using partial batch response vs pushing the failures into DLQ? is it fair to say that partial batch response is good for temporary failures (since it'll just retry), but I would need to use DLQs to put a limit on the overall number of retries for a given message?
Hey! Yeah great question. If the failure isn't temporary you'll of course need some kind of fatal-handling system. If I don't expect to re-drive messages, I usually just set a max attempts value and then write them to logs on the last attempt.
Thank you for the reply, @astuyve! Do you mind elaborating on how you would set a max attempts value? (If you push the records back into the queue using partial batch response, is it possible to include some kind of attempt number?) I am new to Lambda and SQS and really appreciate the content you're putting out!
Hey! AWS tracks that for you, so you simply have to mark the messageId in the batchItemFailures response and SQS will keep track of the number of attempts for that message. No need to change anything in your own code besides the example I've shown here. Glad you are enjoying it, and good luck on your learning journey!
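A minimal sketch of that flow in Python. The `process` function here is a hypothetical stand-in for real message handling, but the batchItemFailures response shape is what SQS expects when ReportBatchItemFailures is enabled on the event source mapping:

```python
# SQS-triggered Lambda handler using a partial batch response: only the
# messageIds listed in batchItemFailures are retried; the rest are
# treated as successfully processed and deleted from the queue.

def process(body: str) -> None:
    """Hypothetical message handler; raises on failure."""
    if body == "bad":
        raise ValueError("simulated processing failure")

def handler(event, context=None):
    failures = []
    for record in event.get("Records", []):
        try:
            process(record["body"])
        except Exception:
            # Mark just this message for retry; SQS tracks its receive
            # count, so no attempt counter is needed in your own code.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Returning an empty batchItemFailures list means the whole batch succeeded; returning every messageId fails the whole batch.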
Very informative video! You rock, thank you. I'm curious about your thoughts on our use case: The primary reason we opted for layers was due to the significant speed boost it provided our CI/CD process, shaving off about 5-10 minutes by uploading our node_modules as a layer. In our architecture, lambdas of the same service share the same modules (similar to what you'd find in a non-serverless, microservice environment), so instead of each lambda consuming around 200MB, each one only uses 2MB, and the layer is uploaded just once. I would be really interested to know your thoughts on our use case and solution. Do you think it's an abuse? Would you recommend a different pattern? Thanks!!
Thanks for the kind words! My biggest concern in your case is safe deployments. Unless you're using Lambda versions + aliases, you can't add or update a dependency without fully backwards-compatible code, because the updateFunction and updateFunctionConfiguration API calls are asynchronous and you risk Lambda errors for at least a few seconds. When I ran into this with Vercel, those functions errored out for about 6 seconds until both operations were complete. This can be avoided with aliasing, but you still risk dependency smashing. Either way, I hope you use what works best for you. Good luck!
@astuyve Wow, very interesting, that's definitely good to know! Yes, we are using versions and aliases because provisioned concurrency requires it, so we set it as a basic requirement for a stack. By the way, have you encountered issues where, even though provisioned concurrency is configured, Lambdas still experience cold starts? This happens even with regular traffic, not just bursts. It's something that occurs quite randomly for us, and we can't find the root cause… Also, we recently started using Datadog for serverless applications and are really excited about it. 😊
@@MatanCohen-Abravanel Glad you're liking the product! Yes, I've seen this happen with Provisioned Concurrency even with regular traffic, as occasionally there are no warm instances available to serve the function. I'd check the Lambda concurrency metric to see if you're bumping close to your PC value. If that function is nowhere near the provisioned capacity, I'd probably open a bug with AWS and ask.
Great video!
Thanks!!
Great breakdown and explanation, AJ! 👏
Thanks so much Aaron!!