Great video, exactly what I needed right now. The clearest explanation I've found on AWS Lambda + DynamoDB. Please keep on making those!
Thanks! Feel free to suggest any topics you'd like me to explore as well.
@@pixegami I think it would be super useful if you showed a standard startup setup: Python Lambda + S3 + DynamoDB + SQS/SNS. There are no tutorials that go through a simple, full web app workflow. Thanks for the content!
This is the only and most helpful AWS + Lambda + DynamoDB + Python tutorial on YouTube. Like, literally, you're the only one who shows and explains the in-between stuff, and the only one where all the components mentioned above are shown together! Thank you so much!
Thanks! I'm glad you enjoyed it and found it useful :) It's why I make these!
This is by far one of the best tutorials I've ever seen.
Keep it going! It was great to build it locally first, then transform it into a Lambda and connect it to DynamoDB.
Really like it. You've got a subscriber!
Thank you for the kind words. Glad you enjoyed it!
Thanks for the video. I was really struggling to make a view counter using Lambda & DynamoDB until I saw this video. Appreciate the help :)
That's awesome! Well done :)
Really loved it. I'm not a coder, but I still understood almost everything because the way you explained it is very clear.
Great to hear! But it's a pretty technical video - what made you want to watch it as a non-coder? (I'm not gatekeeping, I'm just curious)
Watching on August 22nd, 2024... thanks a lot bro!
You made it so easy to understand, man...
Amazing!
Please keep it up!
Glad to hear that!
These videos of yours are from heaven. Thank you ;)
Glad you like them!
Excellent video! Please, what command do you use to clean up and organize the code?
I think you mean the auto format on save? Look into “black” for Python formatting :) or “Prettier” for everything else. You can configure it in VSCode to format on save.
Thanks for this simple video. Most AWS videos are 3 hours long, haha.
Thanks! I think a lot of tutorials fall into the trap of trying to explain every single detail - but I think most people just care about solving a problem, so that's how I try to design my content :)
Best explanation.
We would appreciate it if you could make a video that includes S3 along with DynamoDB and Lambda.
Thank you!
Thanks for the suggestion! It helps me decide what to work on next. I definitely plan on doing more Python + AWS videos in the coming months.
Great video! Does this handle lambdas that are running concurrently?
Yup, DynamoDB is pretty good at stuff like that, so it's fine with handling a ton of concurrent transactions.
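For anyone wondering how the counter itself behaves under concurrency: a plain read-then-write can race if two invocations interleave, but DynamoDB also supports atomic updates that avoid this. Here's a minimal sketch (the table, key, and attribute names are placeholders, not necessarily what the video uses):

```python
# Atomic counter increment, safe even when many Lambda invocations run at once.
# "visit-count-table", "user", and "visit_count" are assumed placeholder names.
import boto3

table = boto3.resource("dynamodb").Table("visit-count-table")

def increment_visit_count(user: str) -> int:
    response = table.update_item(
        Key={"user": user},
        # ADD creates the attribute if it's missing and increments it atomically.
        UpdateExpression="ADD visit_count :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(response["Attributes"]["visit_count"])
```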
Neat video. Thanks for the clear explanation. If I have multiple Python files needed to return my message, should I upload all of those Python files as a zip file, or should I create them directly in the Lambda code editor?
I think that's a matter of choice (and it seems to be a question of how to design your infrastructure). I'd probably go with whatever option is easier to reproduce, develop with, and automate. There are also infrastructure packages that make deploying Python code a lot easier: docs.aws.amazon.com/cdk/api/v1/docs/aws-lambda-readme.html#bundling-asset-code
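If you do go the infrastructure-as-code route, the idea looks roughly like this (a CDK v2-style sketch in Python; the stack name, construct IDs, and the "lambda_src" folder are made up for illustration):

```python
# CDK sketch: zip everything in a local folder (multiple .py files included)
# and deploy it as a single Lambda function.
from aws_cdk import Stack, aws_lambda as _lambda
from constructs import Construct

class VisitCounterStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        _lambda.Function(
            self, "VisitCounterFunction",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="lambda_function.lambda_handler",
            code=_lambda.Code.from_asset("lambda_src"),  # the folder with your .py files
        )
```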
@@pixegami Thank you. I went with EC2 and linked it to a REST API in AWS API Gateway. This works well for my requirements.
VERY WELL EXPLAINED
Amazing, you should have way more subs!
Thanks! I hope so too. One day... 🎯
Hello, thanks for your video. What extension are you using for Python, please?
In VSCode? I'm using GitHub Copilot (see my review on it here: ruclips.net/video/tG8PPne7ef0/видео.html).
I also use "black" for auto formatting.
This tutorial is excellent! I have a question regarding a similar setup involving SQS, Lambda, and Timestream.
I trigger Lambda each time the SQS queue accumulates approximately 100 messages. It then connects to Timestream to store some data.
Currently, I'm using boto3.client('timestream-write', ...) within the lambda_handler() function. However, I'm starting to suspect that this approach increases execution time because it establishes a new connection each time the function is invoked.
Is there a method to maintain database connections across multiple Lambda invocations?
You can't really maintain state across multiple Lambda invocations. So the main question here is - sure, the approach you take might incur some overhead to create the connection, but have you measured it and determined that the extra latency actually matters enough for you to care about? To me, this just sounds like an acceptable trade-off of not having to maintain live servers.
In any case, it's also useful to know that Lambda function "containers" stay alive for up to 15 minutes, including anything in its memory. That means, if you create your connection as a singleton (e.g. global variable), then any time that container gets re-used, you should be able to access that same connection without having to reconnect.
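To make that concrete, here's a minimal sketch of the singleton pattern (the actual write call and the database/table names are left as placeholders):

```python
# Create the client once at module level, outside the handler.
# Warm Lambda containers reuse this module (and the client's connection pool),
# so only cold starts pay the connection setup cost.
import boto3

timestream = boto3.client("timestream-write")  # initialized once per container

def lambda_handler(event, context):
    # ... build `records` from the SQS batch in `event` ...
    # timestream.write_records(DatabaseName="...", TableName="...", Records=records)
    return {"statusCode": 200}
```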
I love this, thanks! 😁
Glad you enjoyed it!
Thanks!
You're welcome!
Super helpful, thank you
You're welcome!
Thanks so much! The way you explained it is perfect!!!!!
Hi, thank you for the video. What VS Code theme is that?
monokai.pro
Why didn't the DynamoDB access issue happen when you ran the .py program from the CLI? But while running the Lambda, we needed to add the policy. Why was the CLI allowed to put an item without that DB policy? Please make this clear.
Very good question, I'm sorry I didn't cover that. All AWS resources (including DynamoDB) need a policy to be accessed (read, write, etc.). By default, everything is secure, which means that, by default, nothing has access and it must be granted explicitly (that's the case you saw with the Lambda function).
But when you use the CLI, you already configured credentials when you installed it, probably with `aws configure`. You can check your configuration at `~/.aws/credentials` (on Unix/Mac), and that is probably linked to a User with "Admin" access, which has access to everything.
So in both cases, a policy is necessary. In the Lambda case, you need to grant one for the Lambda's invocation role. For the CLI, you gain access via the CLI user's policy (which is probably Admin).
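If you want to see exactly which principal your local credentials resolve to (and therefore whose policies applied when you ran the script from the CLI), one quick check is the STS caller identity, e.g.:

```python
# Prints the IAM identity behind your local credentials (from `aws configure`).
# Whatever ARN this shows is the user/role whose policy allowed the PutItem call.
import boto3

identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])      # e.g. arn:aws:iam::123456789012:user/your-admin-user
print(identity["Account"])  # your AWS account ID
```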
Okay. I got it. Yes. Thank you for the clarification!!
Are you using the AWS Lambda extension in VS Code, or the regular Python extension?
Just the regular Python extension. I may have also downloaded the AWS Lambda extension, but I don't think I've enabled it or put much effort into using it yet.
@@pixegami I've been following along, but when I try to implement this, I receive this message:
"errorMessage": "'visit-count-table'", "errorType": "KeyError"
@@johnathanhorner6888 Hmm, that sounds like a problem where the table's primary key name, and the key you're trying to query it with, are different. When you go to your DDB table in the console, can you confirm the name of the primary key? Is it the same as in your code? In my code I used "user" as the key: github.com/pixegami/python-lambda-with-database/blob/main/lambda_function.py#L16
Or if that's not it - can you share the full stack trace of where that issue occurred?
@@pixegami Yes, that's what it was. But the issue I'm having now is with the counter. Every time I hit the test button, it stays at 1; it never increases by 1.
@@johnathanhorner6888 Nice, you're making progress. The counter is incremented and saved on this line: github.com/pixegami/python-lambda-with-database/blob/main/lambda_function.py#L22
If you aren't seeing it being incremented, I think it's likely one of three scenarios (there's a rough sketch of the flow after this list):
- The app is missing the code to update the counter (and save it to the DB).
- There's an error in the code. If this is the case, you should be able to see errors and logs in the Lambda 'monitoring' tab.
- The code works, but you are loading/saving the wrong user (or different users) - you can check this by inspecting your table in DDB directly.
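For reference, the overall flow looks roughly like this (a simplified sketch, not the exact code from the repo; the table, key, and attribute names are assumptions):

```python
# Load the item, increment the count, and save it back.
import boto3

table = boto3.resource("dynamodb").Table("visit-count-table")  # assumed table name

def lambda_handler(event, context):
    user = event["user"]  # must match the table's partition key name exactly
    item = table.get_item(Key={"user": user}).get("Item", {"user": user, "visit_count": 0})
    item["visit_count"] = int(item["visit_count"]) + 1
    table.put_item(Item=item)  # if this save step is missing, the count stays at 1
    return {"statusCode": 200, "visit_count": item["visit_count"]}
```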
Hey, what's the setting to have the function declaration line appear as the top line in VSCode? 13:57, for example.
Thanks!
I think this is VSCode sticky scroll? code.visualstudio.com/updates/v1_70#_editor-sticky-scroll
@@pixegami many thanks my friend
Thanks, great explanation & demo!
Thanks! Glad you enjoyed it :)