Deploying AWS Lambda Functions with Docker and Amazon ECR | large 10GB packages (2023)

  • Published: 25 Aug 2024
  • In this video I use Docker and ECR to deploy a larger package. For most of the video we use the CLI to do this. Make sure you have the prerequisites below ready to follow along.
    Prerequisites:
    - AWS Account
    - Docker Desktop
    - AWS CLI Setup
    Python and Dockerfile used, GitHub repo: github.com/hit...
    Below are all the CLI commands used in order to help you follow along.
    Commands and Descriptions:
    Repository Creation in Amazon ECR
    - Command: `aws ecr create-repository --repository-name my-lambda-repo-demo`
    - Description: Creates a new repository in Amazon Elastic Container Registry (ECR) with the specified name.
    Building a Docker Image
    - Command: `docker build -t my-lambda-image .`
    - Description: Builds a Docker image using the Dockerfile in the current directory and tags it with the specified name.
    Authenticating Docker with Amazon ECR
    - Command: `aws ecr get-login-password --region (region) | docker login --username AWS --password-stdin (account id).dkr.ecr.(region).amazonaws.com`
    - Description: Retrieves an authentication token from ECR and pipes it to `docker login`, authenticating the Docker client with your ECR registry.
    Fetching AWS Account ID
    - Command: `aws sts get-caller-identity --query Account --output text`
    - Description: Retrieves the AWS account ID for the authenticated user or role.
    Tagging the Docker Image for ECR
    - Command: `docker tag my-lambda-image:latest (account id).dkr.ecr.(region).amazonaws.com/my-lambda-repo-demo:latest`
    - Description: Tags the previously built Docker image with the ECR repository URL.
    Pushing the Docker Image to ECR
    - Command: `docker push (account id).dkr.ecr.(region).amazonaws.com/my-lambda-repo-demo:latest`
    - Description: Pushes the tagged Docker image to the specified ECR repository.
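    The repo link above is truncated, so as a sketch only, here is a typical Dockerfile for a Python Lambda container image following AWS's documented base-image pattern. The runtime tag, `requirements.txt`, `app.py`, and the handler name `app.handler` are assumptions; adjust them to your project.

```dockerfile
# AWS-provided Lambda base image for Python (pick the tag matching your runtime)
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the task root so Lambda can import them
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy the function code
COPY app.py ${LAMBDA_TASK_ROOT}

# Entry point: module "app", function "handler"
CMD ["app.handler"]
```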
    #aws #awslambda #awstutorial

Comments • 10

  • @lukmannurhafizramli1377
    @lukmannurhafizramli1377 2 months ago +2

    You're so fucking awesome I love you so much

  • @roniantonio6278
    @roniantonio6278 7 months ago

    Pretty straightforward and clean, thank you!

  • @joseantonioromeroespejo160
    @joseantonioromeroespejo160 6 months ago

    Great video!!! Thanks.

  • @kapilbadokar
    @kapilbadokar 7 months ago +1

    Hi, great tutorial. I'm trying to do Selenium web scraping with ChromeDriver and deploy it on AWS Lambda. However, Lambda can't find my ChromeDriver, or it fails to set it up. I have tried multiple approaches with my Dockerfile. Could you please help me with that?

    • @Hitchon
      @Hitchon  7 months ago

      Ahh yes, I once had a project where I needed to deploy ChromeDriver on EC2 and also found it frustrating; I remember it only worked after I used a specific headless version.
      I haven't done it on Lambda; let me get back to you on this.

    • @nicolasforero6187
      @nicolasforero6187 6 months ago

      Hi @@Hitchon, thank you for your tutorials. I'm actually doing the same thing, web scraping with Selenium and ChromeDriver; which headless version worked for you?
      Also, the code I have downloads CSV files (I have only tested it in my VS Code editor), but if I use Lambda, do you know where those files will be stored, and whether I can send them to another Python script to clean them? I appreciate your time!
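On where Lambda stores downloaded files: the only writable path inside the Lambda sandbox is `/tmp` (512 MB by default, configurable up to 10 GB), and it disappears when the execution environment is recycled, so anything you want to keep has to be copied out to S3. A minimal sketch (the bucket name and file names are assumptions, not from the video):

```python
import csv
import os

def write_rows(rows, path="/tmp/report.csv"):
    """Write rows to a CSV file; in Lambda, /tmp is the only writable directory."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

def handler(event, context):
    path = write_rows([["id", "value"], [1, 42]])
    # To persist the file beyond this invocation, upload it to S3
    # (bucket name below is hypothetical):
    # import boto3
    # boto3.client("s3").upload_file(path, "my-results-bucket", "report.csv")
    return {"written": path, "size_bytes": os.path.getsize(path)}
```

A second Lambda (or the same one) can then read the uploaded object from S3 to do the cleaning step, rather than passing local files between scripts.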

  • @techaisolution
    @techaisolution 2 months ago

    Hi, this setup spiked my billing very high.
    The setup was a Lambda function that reads the latest file from an S3 directory, transforms it, and finally writes it to a target S3 directory.
    The whole setup, with the Python script, was supposed to run once S3 notified the Lambda function that a file had arrived in S3.
    But it went into a loop and made the S3 and Lambda billing spike.
    Let me know what the issue in my setup is that I didn't notice at first while running this Python script in Lambda.

    • @Hitchon
      @Hitchon  2 months ago

      I would reach out to AWS support; they can be forgiving, and you may be able to get a refund for this spike.

    • @techaisolution
      @techaisolution 2 months ago

      @@Hitchon The refund is still under discussion with the AWS team.

    • @rennisdodman8739
      @rennisdodman8739 1 month ago

      It sounds like you did the recursion thing where you use an S3 push to trigger a function, correct?
      The problem, I think, is that your function is then saving data back into the same S3 bucket used to trigger the function, so it's going to trigger the function again and again and again.
      Make sure the bucket you push data to is different from the bucket that triggers your function.
      Doing something like:
      df = pd.read_parquet(trigger_bucket)   # an s3:// path into the trigger bucket
      df = my_function(df)
      df.to_parquet(load_bucket)             # do NOT use the trigger bucket
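To make the fix in the comment above concrete, here is a minimal handler sketch (bucket names are hypothetical) that reads the triggering S3 event and refuses to write back into the bucket that fired it, which is what causes the billing loop:

```python
def handler(event, context):
    """Sketch of an S3-triggered handler that guards against the self-trigger loop."""
    record = event["Records"][0]
    src_bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # The destination MUST be a different bucket (or at least a prefix excluded
    # from the event notification), or every write re-fires the trigger.
    dest_bucket = "my-processed-bucket"  # hypothetical name
    if dest_bucket == src_bucket:
        raise RuntimeError("destination equals trigger bucket: infinite loop")

    # ... read s3://src_bucket/key, transform, write to s3://dest_bucket/key ...
    return {"source": src_bucket, "key": key, "dest": dest_bucket}
```

Another mitigation worth knowing: S3 event notifications can be scoped to a key prefix/suffix, so writes to an output prefix never match the trigger filter even within one bucket.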