This is literally the best solution walk-thru I have watched on YT. Clear, instructive, and it actually works. You are a hero!
Thank you for such a simple and good explanation
This is pretty awesome. Thanks! We can also do the same with CRR or SRR within S3, but it's a very helpful video for understanding Lambda. Thanks again :).
Great video, Thank you for the clear explanation!
Great explanation. I would like to know: when Lambda copies data from the source bucket to the target bucket, where does it store the data? And if the data is, let's say, 1 TB, how would Lambda work?
There is a CSV file on the local computer and it is uploaded to AWS S3.
If any changes are made to that CSV file on the local computer, the changes should reflect in AWS S3 automatically using an AWS Lambda function.
What are the steps to achieve this?
Fantastic video! Thank you! You are a genius!
Great article. It helped me a lot.
Thanks for your motivational words!!
Good tutorial. However, the first 15 lines of Python/Boto3 code for the Lambda trigger are not readable. Please share.
Great video sir, you have explained it very well.
But this can easily fail when uploading large files, e.g. a file size over 500 GB. The Lambda runtime execution timeout will kick in.
Hi, that's great info and thanks for the tutorial. I have a question, and if it can be answered it would solve my problem: I have a custom app integrated with AWS EventBridge, and we want events to be targeted outside of AWS. One target we are using is Google Cloud Storage. Would a similar Python script solve my problem?
Can we see the log of this copy event (the Lambda copying)? You put print statements in the code; do they publish to CloudWatch?
Can you create a Lambda function to compress images using Python?
Great demo. Any chance there is an AWS Lambda to copy from S3 to FSx for Windows?
Create an AWS Lambda function to count the number of words in a text file. The general requirements are as follows:
Use the AWS Management Console to develop a Lambda function in Python and to create its required resources.
Report the word count in an email using an Amazon Simple Notification Service (SNS) topic. Optionally, also send the result in an SMS (text) message.
Format the response message as follows:
The word count in the textFileName file is nnn.
Replace textFileName with the name of the file.
Specify the email subject line as: Word Count Result
Automatically trigger the function when the text file is uploaded to an Amazon S3 bucket.
Test the function by uploading several text files with different word counts to the S3 bucket.
Forward the email produced by one of your tests to your instructor along with a screenshot of your Lambda function.
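A minimal sketch of such a handler, assuming an existing SNS topic with email (and optionally SMS) subscriptions; the environment variable name and topic ARN wiring below are placeholders, not part of the assignment:

import os
import urllib.parse
import boto3

s3 = boto3.client('s3')
sns = boto3.client('sns')

# Placeholder: set WORD_COUNT_TOPIC_ARN in the Lambda environment to your SNS topic ARN
TOPIC_ARN = os.environ['WORD_COUNT_TOPIC_ARN']

def lambda_handler(event, context):
    # Bucket and key of the text file that triggered the function
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    # Read the file and count whitespace-separated words
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')
    count = len(body.split())
    message = "The word count in the " + key + " file is " + str(count) + "."
    # Subscribers of the topic receive the result by email (and SMS, if subscribed)
    sns.publish(TopicArn=TOPIC_ARN, Subject='Word Count Result', Message=message)
    return message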
How can I copy only the file, and not the whole prefix where it resides?
Thanks for your clear explanation. I followed your steps, as you said, but I am getting errors while running the lambda function. Could you please help me ASAP?
Error:-
{
"errorMessage": "module 'urllib' has no attribute 'unquote_plus'",
"errorType": "AttributeError",
"requestId": "0cb2294e-a023-4ab2-8395-05f70689e10f",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 17, in lambda_handler
object_key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])
"
]
}
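This error shows up on the Python 3 runtimes, where unquote_plus lives in urllib.parse rather than urllib. A minimal fix for that line, assuming the rest of the handler follows the tutorial code:

import urllib.parse

object_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])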
Hi @Prabhakar,
I need to unzip a zip file that sits in a subfolder of the bucket. Is it possible to extract the zip file into that same subfolder? Can you please advise?
Error when creating the trigger: "Unable to validate the following destination configurations".
Thank you 👍🏻. Can we do a similar copy using a different AWS account for the input S3 bucket?
Sir, I want to transfer a file from one AWS S3 bucket to a different AWS S3 bucket using a bash script.
Is the source_bucket name obtained from the trigger event?
Nice one, but Python 2.7 is not supported on AWS anymore and the code is not working for me on Python 3+.
Same
@@carlosperal5163 from __future__ import print_function
import boto3
import time, urllib.parse
import json

"""Code snippet for copying objects from the AWS source S3 bucket to the target S3 bucket as soon as objects are uploaded to the source S3 bucket.
@author: Prabhakar G
"""

print ("*"*80)
print ("Initializing..")
print ("*"*80)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Read the source bucket and URL-decoded object key from the S3 event record
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    target_bucket = 'techhub-output-data-andy'
    copy_source = {'Bucket': source_bucket, 'Key': object_key}
    print ("Source bucket : ", source_bucket)
    print ("Target bucket : ", target_bucket)
    print ("Log Stream name: ", context.log_stream_name)
    print ("Log Group name: ", context.log_group_name)
    print ("Request ID: ", context.aws_request_id)
    print ("Mem. limits(MB): ", context.memory_limit_in_mb)
    try:
        print ("Using waiter to wait for the object to persist through the S3 service")
        waiter = s3.get_waiter('object_exists')
        waiter.wait(Bucket=source_bucket, Key=object_key)
        response = s3.copy_object(Bucket=target_bucket, Key=object_key, CopySource=copy_source)
        return response['ResponseMetadata']['HTTPStatusCode']
    except Exception as err:
        print ("Error - " + str(err))
        return str(err)
This works for me with the newer version of Python, well, 3.7 anyway :) cheers
How can I run this program for a long time?
I have one doubt: if the bucket contains multiple objects and a file in one particular folder is overwritten, will that reflect in the new bucket as well?
Thank you so much sir... it really works.
Where's the event JSON for this?
I tried the code but it's not working for me, I don't know why. The file is not getting copied to the target bucket.
Can anyone help?
I am getting KeyError: 'Records'. What should I do?
Hi Sir, do you teach as well? I am looking for Lambda coaching.
Sir, I want to insert data into and fetch data from PostgreSQL through C# using Lambda. Kindly help me here.
Thank you. I also use two buckets (a destination and an incoming one) plus a Lambda function for resizing images.
On the server side I use django-storages and its AWS_S3_CUSTOM_DOMAIN setting, which points to the bucket with resized images. The buckets and their objects have public access. Everything works almost well, but I've got a strange bug: a 404 error when trying to get an image for the first time, which turns into 200 OK after a refresh. Has anybody had the same issue?
Good video. Please upload a video on what to do if you want to add a prefix and suffix filter, e.g. .txt files to one bucket and .jpg files to another.
{
"errorMessage": "'Records'",
"errorType": "KeyError",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 17, in lambda_handler
source_bucket = event['Records'][0]['s3']['bucket']['name']
"
]
}
I am getting this error.
Did you solve this error?
What if the data size is huge, say 10 TB? Can we transfer the entire data within 15 minutes?
The above Lambda example demonstrates Lambda's ability to perform operations on an S3 bucket.
For huge data, around 10 TB+, we can perform the data transfer between buckets using one of the following options instead (a sketch for option 1 follows the list):
1. Cross-Region Replication (CRR) or Same-Region Replication (SRR)
2. S3 Batch Operations
3. S3DistCp with Amazon EMR
4. AWS DataSync
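For option 1, a minimal boto3 sketch of configuring replication on the source bucket; the bucket names and IAM role ARN are placeholders, and the role must already grant S3 the replication permissions it needs:

import boto3

s3 = boto3.client('s3')

# Versioning is required on both buckets before replication can be configured
for bucket in ('my-source-bucket', 'my-target-bucket'):
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={'Status': 'Enabled'})

# Placeholder IAM role that S3 assumes to replicate objects
replication_role = 'arn:aws:iam::123456789012:role/s3-replication-role'

s3.put_bucket_replication(
    Bucket='my-source-bucket',
    ReplicationConfiguration={
        'Role': replication_role,
        'Rules': [{
            'ID': 'replicate-everything',
            'Priority': 1,
            'Filter': {'Prefix': ''},   # empty prefix = replicate all objects
            'Status': 'Enabled',
            'DeleteMarkerReplication': {'Status': 'Disabled'},
            'Destination': {'Bucket': 'arn:aws:s3:::my-target-bucket'},
        }],
    },
)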
Your video is really helpful, but the code keeps giving me an issue on line 16.
Great video. Can you upload the text to Secrets Manager instead of another S3 bucket?
Does someone have a guide for the same procedure but with Python 3.x?
import boto3
import time, urllib.parse
import json

print ("*"*80)
print ("Initializing..")
print ("*"*80)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Read the source bucket and URL-decoded object key from the S3 event record
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    target_bucket = 'name of your target bucket'
    copy_source = {'Bucket': source_bucket, 'Key': object_key}
    print ("Source bucket : ", source_bucket)
    print ("Target bucket : ", target_bucket)
    print ("Log Stream name: ", context.log_stream_name)
    print ("Log Group name: ", context.log_group_name)
    print ("Request ID: ", context.aws_request_id)
    print ("Mem. limits(MB): ", context.memory_limit_in_mb)
    try:
        print ("Using waiter to wait for the object to persist through the S3 service")
        waiter = s3.get_waiter('object_exists')
        waiter.wait(Bucket=source_bucket, Key=object_key)
        s3.copy_object(Bucket=target_bucket, Key=object_key, CopySource=copy_source)
        return 'Successfully copied files'
    except Exception as err:
        print ("Error - " + str(err))
        return str(err)
Nice Video.
I followed the video, but on uploading to my source bucket, my file is not copying to the target bucket.
My uploaded file was also not copying to the target bucket. I had inadvertently not attached the AmazonS3FullAccess policy to the role I had created. I only noticed because I had also neglected to add the AWSLambdaBasicExecutionRole policy to the role, so monitoring wasn't working either. Attached them both and voila! The file was copied to the 2nd bucket.
@@davidcloes9048 I followed all the steps, but it still didn't copy to the destination. Can you help please? Is there anything to set for permissions or enable on the S3 bucket? One more observation: I don't see the Enable checkbox while creating the trigger. Won't this work on a basic AWS user login?
You should keep the runtime at Python 2.7; only then will it work.
I am getting KeyError: 'Records'.
Nice Video.
Thanks.
Can anybody help me create a website?
In the website dashboard, I have to put AWS start/stop options,
to give each user the ability to manage their own VPS server.
Can I delete an object from the destination bucket as soon as the object with the same name is deleted from the source bucket, using a Lambda function? If yes, how can I do that?
Yes, we can tweak the Lambda function code as per our requirements. We can delete an object, copy an object, use the copied object data to insert into MySQL, PostgreSQL, or DynamoDB, or even use the data for an Alexa training data set, etc.
If we want to delete an object from the destination bucket as soon as the object with that name is deleted from the source bucket:
1. Apply a Lambda function on the source bucket with the DELETE event (see the sketch below).
2. As soon as you delete a file from the source bucket, first cross-verify whether the same object/file exists in the destination bucket, then run a small code snippet to delete the object from the destination bucket:
s3.delete_object(Bucket=bucket, Key=destination_object_key)
Please let me know if you need any help with this.
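A minimal sketch of such a handler, assuming the source-bucket trigger is configured for the s3:ObjectRemoved:* event and the target bucket name below is a placeholder:

import urllib.parse
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
TARGET_BUCKET = 'my-target-bucket'  # placeholder name

def lambda_handler(event, context):
    # Key of the object that was just deleted from the source bucket
    object_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    try:
        # Only delete if the same key actually exists in the destination bucket
        s3.head_object(Bucket=TARGET_BUCKET, Key=object_key)
    except ClientError:
        print("Object not found in destination, nothing to delete:", object_key)
        return 'Nothing to delete'
    s3.delete_object(Bucket=TARGET_BUCKET, Key=object_key)
    return 'Deleted ' + object_key + ' from ' + TARGET_BUCKET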
It just didn't work :(
While running the above code I am getting this error, please somebody help.
Response:
{
"errorMessage": "'Records'",
"errorType": "KeyError",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 17, in lambda_handler
source_bucket = event['Records'][0]['s3']['bucket']['name']
"
]
}
I have the same error
Upload a file to S3 and execute the function through the trigger.
If I am not wrong, I guess you have used the Test button to execute the Lambda.
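If you do want to use the Test button, the test event has to look like a real S3 notification. A minimal test event covering only the fields this handler reads (the bucket name and key are placeholders):

{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "my-source-bucket" },
        "object": { "key": "sample.txt" }
      }
    }
  ]
}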
I am getting the same thing... please give me a solution.
Did anyone fix the error?
Super
Thank you.
Hi, the solution did not work for me. Can you help me? Can you share your email ID so that I can share the error details?