AWS Lambda Cold Starts Deep Dive - How they work

  • Published: 3 Dec 2024

Comments • 40

  • @pranabsharma98
    @pranabsharma98 4 years ago

    I work with AWS and found this SO USEFUL!

  • @danielhalmstrand2123
    @danielhalmstrand2123 4 years ago +2

    Thank you for a great introduction to a hot topic.. really appreciate it

  • @aditijain2448
    @aditijain2448 2 years ago +1

    Really helpful for beginners, understood it clearly!

  • @RahulNath
    @RahulNath 2 years ago

    Good one Sam! Thank you for the deep dive into Cold Starts

  • @giridharansubramanian3923
    @giridharansubramanian3923 2 years ago

    Great video! Hats off!!

  • @nakseungchoi7154
    @nakseungchoi7154 1 year ago

    Great content. Love it.

  • @theDan_YT
    @theDan_YT 2 years ago

    Thank you for useful content

  • @andreyalexandrov2016
    @andreyalexandrov2016 3 years ago +6

    Great explanation! Works better at x1.5 speed. Don't thank :)

  • @debasisnath9860
    @debasisnath9860 3 years ago

    thank you... I am your subscriber :)

  • @marc-alexandrepaquet7696
    @marc-alexandrepaquet7696 2 years ago +1

    7:58 this is not true; even if you have reserved concurrency for a function, it can still scale. Throttling happens when you hit the maximum concurrency of your account, or if you set a concurrency limit on individual AWS Lambda functions.

    • @CompleteCoding
      @CompleteCoding  2 years ago

      I've just looked this up again and you're right!
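The corrected throttling rule can be sketched as a deliberately simplified model (the account limit here is illustrative, and real reserved concurrency also carves capacity out of the account pool, which this sketch ignores):

```python
def is_throttled(in_flight, account_limit=1000, reserved_limit=None):
    """Return True if one more concurrent invocation would be throttled.

    A per-function reserved concurrency setting caps that function;
    otherwise only the account-wide concurrency limit applies.
    """
    if reserved_limit is not None and in_flight >= reserved_limit:
        return True
    return in_flight >= account_limit
```

With no per-function limit set, the function scales freely until the account-wide limit is exhausted, which is the behaviour the comment describes.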

  • @jimg8296
    @jimg8296 4 years ago

    Do you see a benefit of combining multiple functions into a single Lambda to reduce cold starts? For example, if a known web page might make 3 calls at load, it might get 3 cold starts. If those 3 functions were in a single Lambda which could parse the request, there would be only 1 cold start.

    • @CompleteCoding
      @CompleteCoding  4 years ago

      That's a really interesting concept. It should reduce cold starts, but it can make the code and monitoring the function(s) more complicated. You might have 10 errors but you would have to explore further to find out which API caused each error.
      You can also approximate the number of cold starts saved by joining them. If each API is hit very infrequently (a few times a day), then combining them will still mean almost all requests are cold starts, as they won't land within 10 minutes of each other.
      If you are getting loads of requests (several a minute or more), then it probably won't make much difference, as the containers aren't getting close to the cool-down. Weirdly, it might even increase cold starts, as it makes concurrent requests more likely.
      The time it would make a difference is when you get 3-10 requests an hour. This means the container is cooling off after a large portion of the requests. Joining three Lambdas into one would mean you're going from an average of 12 min between requests (5/hr) to 4 min, so the single container is probably never going to cool down.
      Going through all of this is a lot of work for a few cold starts, and as your usage changes your calculations would need to change.
      If you HAVE to have the quickest Lambda times possible, then there are ways to prime your Lambda with a set of concurrent requests every 5 minutes, ensuring that there are always at least X containers for each Lambda. The cost of hitting a Lambda 3 times every 5 minutes is also going to be tiny.
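The arithmetic in that reply can be sketched in a few lines of Python (the ~10-minute keep-alive is the figure assumed in the thread, not a documented guarantee):

```python
def avg_gap_minutes(requests_per_hour):
    """Average time between requests hitting one container, in minutes."""
    return 60 / requests_per_hour

def merged_gap_minutes(per_function_rate, n_functions):
    """Average gap when n functions share a single merged Lambda."""
    return avg_gap_minutes(per_function_rate * n_functions)

# Three separate Lambdas at 5 requests/hour each see a request every
# 12 minutes on average -- past a ~10 minute keep-alive, so mostly cold.
# Merged, the shared container sees one every 4 minutes and stays warm.
separate = avg_gap_minutes(5)       # 12.0 minutes
merged = merged_gap_minutes(5, 3)   # 4.0 minutes
```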

  • @jimg8296
    @jimg8296 4 years ago

    My Node.js Lambda uses Aurora MySQL and the cold starts are up to 45 seconds.

    • @CompleteCoding
      @CompleteCoding  4 years ago

      Wow, that's a lot. Setting up the container should take a few hundred ms. I've not worked with Aurora MySQL, but it might be that creating a new connection to the database is what is causing the long delay.
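A common mitigation for that pattern is to create the database connection outside the handler so warm invocations reuse it. A minimal Python sketch of the idea, where `connect()` is a stand-in for a real driver call (e.g. a MySQL client), not the commenter's actual code:

```python
def connect():
    """Stand-in for an expensive database handshake (replace with
    your real MySQL/Aurora driver call)."""
    return object()

# Module-level state survives between warm invocations of the same
# Lambda container, so the handshake cost is paid only on cold starts.
_connection = None

def get_connection():
    global _connection
    if _connection is None:
        _connection = connect()  # only runs on a cold start
    return _connection

def handler(event, context):
    conn = get_connection()
    # ... run queries against conn here ...
    return {"statusCode": 200}
```

On a warm invocation `get_connection()` returns the cached object immediately, so only the first request after a cold start pays the connection cost.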

  • @devgenesis6436
    @devgenesis6436 4 years ago

    Great video.. I have been struggling with this problem for a month.. Our Lambda is in a VPC as well, and with 1024 MB RAM it gets cold starts of around 5-6 s.. Don't know how it can be improved.. The second invocation is around 600-700 ms. The Lambda is part of an enterprise API, so its code size is also around 5-6 MB.. Can you suggest a solution? PS - the keep-alive period is now 5-6 mins in AWS

    • @CompleteCoding
      @CompleteCoding  4 years ago

      There are a few things I would suggest:
      - If you're not already, use webpack and minify the code when deploying to production. It will reduce your code size and therefore the Lambda cold start. It will also stop you uploading unnecessary node_modules.
      - You could use provisioned concurrency so you never get a cold start. This does have a cost implication though.
      - Write a script to "poke" your Lambda every 5 minutes. You can even set this up in Serverless as a new event on the function. Search for serverless cron jobs.
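The "poke" approach can be sketched as a handler that short-circuits on a scheduled warm-up event. The `{"warmup": true}` marker payload is an assumption here, not from the thread; any shape your scheduled rule sends works, as long as the handler recognises it:

```python
def handler(event, context):
    # A scheduled EventBridge/CloudWatch rule can send a marker
    # payload; returning immediately keeps the billed duration at
    # the minimum while still keeping the container warm.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    # ... normal request handling below ...
    return {"statusCode": 200, "body": "real work"}
```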

    • @devgenesis6436
      @devgenesis6436 4 years ago

      @@CompleteCoding Oh, I forgot to mention the most important thing.. it's actually on .NET, so no webpack minifier for it.. Though I know about CloudWatch rules and had set up something to poke it, in the end it defeats the purpose of pay-as-you-go, doesn't it.. And we have like 400+ Lambdas and 20+ API Gateways interacting with them, i.e. our entire enterprise is on AWS... any other solution?

    • @CompleteCoding
      @CompleteCoding  4 years ago

      @@devgenesis6436
      If your Lambdas are being hit once every 20 minutes and running for 6 s (5 s cold start + 1 s runtime), then at 1 GB of memory you're using 3 x 6 = 18 GB-seconds (GB-s) per hour.
      If you poke it every 5 minutes and the poke lasts 0.1 s (the minimum Lambda billing duration), and it then runs for just 1 s every time it gets a 'real' request, your total is 3 + 12 x 0.1 = 4.2 GB-s.
      That's a saving of about 77% on that Lambda's compute bill!
      If you used provisioned concurrency then you'd pay the whole time, but with this method you only pay when it's running.
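Re-running that arithmetic in Python (3 requests/hour at 6 s each and 1 GB of memory comes to 18 GB-s on the un-poked path, so the saving is roughly 77%; all numbers are the thread's examples, not measurements):

```python
MEMORY_GB = 1.0  # 1024 MB, as mentioned in the thread

def gb_seconds(invocations_per_hour, seconds_each, memory_gb=MEMORY_GB):
    """Billable GB-seconds accumulated per hour."""
    return invocations_per_hour * seconds_each * memory_gb

# Without pokes: 3 real requests/hour, 6 s each (5 s cold start + 1 s work)
cold_path = gb_seconds(3, 6)                         # 18.0 GB-s/hour

# With pokes: 12 pokes/hour at 0.1 s, plus 3 warm requests at 1 s each
poked_path = gb_seconds(12, 0.1) + gb_seconds(3, 1)  # 4.2 GB-s/hour

saving = 1 - poked_path / cold_path                  # about 0.77
```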

  • @anatoliistepaniuk8217
    @anatoliistepaniuk8217 3 years ago

    Is there a cold start when a container is scaled?

    • @CompleteCoding
      @CompleteCoding  3 years ago

      Yes, every time a new container starts it has to go through all the steps (download and initialise) before the Lambda can run. AWS has done a lot recently to reduce this time even more, so unless you're running super latency-dependent features it is a minimal problem.

  • @aurelienjoro2185
    @aurelienjoro2185 3 years ago

    Hi! I'm new to AWS Lambda. Right now I'm trying to develop a function behind API Gateway. I want to create a function that calls a Sage 100 API every 5 minutes and sends the data directly to my database (CMS Directus) without an API Gateway URL. I know how to retrieve the data from the Sage API, but I can't send the data (from the Sage 100 API) without going through API Gateway. Thank you.

    • @CompleteCoding
      @CompleteCoding  3 years ago

      So you're trying to get data from Sage every 5 minutes and send that to your database (CMS)?
      You can create a lambda which automatically runs every 5 minutes. That will then get the data from the Sage API and send it to the database using the API that is part of the CMS (docs.directus.io/reference/api/items/#create-multiple-items).
      You don't actually need to create your own API for this.
      Here's the video for creating a scheduled Lambda. ruclips.net/video/VhhFkNYCmZ0/видео.html
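The scheduled sync described in that reply might look roughly like this. The HTTP calls are left as injectable stubs, since the real Sage 100 and Directus endpoints, auth, and payload shapes aren't given in the thread:

```python
def sync(fetch_from_sage, post_to_directus):
    """Body of a scheduled Lambda: pull records from Sage 100 and
    push each one to the Directus items API -- no API Gateway needed."""
    records = fetch_from_sage()
    for record in records:
        post_to_directus(record)
    return {"synced": len(records)}

def handler(event, context):
    # Wire in real HTTP calls (urllib/requests against the Sage and
    # Directus endpoints) in place of these placeholder callables.
    return sync(fetch_from_sage=lambda: [], post_to_directus=lambda r: None)
```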

    • @aurelienjoro2185
      @aurelienjoro2185 3 years ago +1

      @@CompleteCoding Yes, that's it!
      Thanks for your help.

  • @lawais1977
    @lawais1977 2 years ago

    Are you sure that today Lambdas stay alive for 10 min? I read somewhere that they unload when the process is finished...

    • @CompleteCoding
      @CompleteCoding  2 years ago +1

      Yes, Lambdas definitely stay 'awake' for some amount of time. How long varies, but it is somewhere between 10 and 45 minutes.
      You can test by calling your Lambda (which will cold start), calling it again straight after the first request (warm start - faster response), and then calling it 15 minutes later.
      I might set up a script to do some testing around this.
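That experiment can be sketched as a small simulation before pointing it at a real endpoint (the 10-minute keep-alive is an assumed value; a real test would replace the hard-coded times with timed HTTP calls and compare latencies):

```python
def classify_starts(call_times_min, keep_alive_min=10):
    """Label each invocation on a single container as cold or warm,
    given invocation times in minutes and an assumed keep-alive."""
    labels = []
    last_call = None
    for t in call_times_min:
        if last_call is None or t - last_call > keep_alive_min:
            labels.append("cold")
        else:
            labels.append("warm")
        last_call = t
    return labels

# The test described above: call, call again a minute later,
# then call 15 minutes after that.
labels = classify_starts([0, 1, 16])  # ['cold', 'warm', 'cold']
```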

  • @bestoos2099
    @bestoos2099 4 years ago

    Once again, great work sir...
    Sir, can you please make a video on how to upload images to S3?

  • @sourabhk2373
    @sourabhk2373 3 years ago

    Nice video! But I think demonstrations with some images would have been more helpful rather than you talking to the camera.
    Maybe even showing live examples of what you are explaining would be even better.

    • @CompleteCoding
      @CompleteCoding  3 years ago +1

      It's quite hard to demonstrate. The first API call just takes 0.3 s longer than the second.