⏱ Chapter Timestamps
===================
0:00 - Introduction
0:28 - Architecture Flow
2:31 - Pre-requisite: Amazon S3 Buckets Creation
3:44 - Pre-requisite: Amazon DynamoDB Table Creation
5:15 - Pre-requisite: Amazon S3 File Format Creation
8:20 - Pre-requisite: AWS Data Pipeline Creation
12:40 - Data Format Configuration in Data Pipeline
14:14 - Activate Data Pipeline
17:52 - Data Loaded into DynamoDB
19:03 - Serverless Data Pipeline Trigger using EventBridge and Lambda (see the sketch after this list)
20:06 - Summary
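For the 19:03 step, here's a minimal sketch of the Lambda handler that EventBridge could invoke to activate the pipeline. The activate_pipeline call is the real boto3 Data Pipeline API; the environment variable name and the pipeline ID are hypothetical:

    import os
    import boto3

    datapipeline = boto3.client("datapipeline")

    def handler(event, context):
        # EventBridge invokes this function on its configured schedule/event
        pipeline_id = os.environ["PIPELINE_ID"]  # hypothetical variable, e.g. "df-0123456789ABCDEF"
        datapipeline.activate_pipeline(pipelineId=pipeline_id)
        return {"activated": pipeline_id}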
An interviewer asked me about batch processing, but I explained a cron job using the Spring scheduler. Now I've come to know what batch processing is in detail. Thanks a lot, it means a lot to me...
I love your dedication, brother... doing a great job late at night for the dev community 👍👍👍 @Tech Primer
Thanks a ton
Nice demo, useful.
Wow! Great, bro... I love your dedication... great job 🔥🙏
Thanks for the video, this really helps.
Could you please suggest what options we could use to concatenate two files based on a common column from S3? For this, should I use Lambda or some other service, maybe Data Pipeline / Glue, etc.?
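For joining two files on a common column, one option is a Lambda with pandas. A minimal sketch, assuming both files are small CSVs that fit in Lambda memory, pandas is available via a layer, and the bucket, key, and column names are all hypothetical:

    import io
    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def handler(event, context):
        bucket = "my-input-bucket"  # hypothetical bucket name
        # Read both CSVs fully into memory before parsing
        left = pd.read_csv(io.BytesIO(s3.get_object(Bucket=bucket, Key="orders.csv")["Body"].read()))
        right = pd.read_csv(io.BytesIO(s3.get_object(Bucket=bucket, Key="customers.csv")["Body"].read()))
        # Join the two files on the common column
        joined = left.merge(right, on="customer_id", how="inner")
        s3.put_object(
            Bucket=bucket,
            Key="joined/output.csv",
            Body=joined.to_csv(index=False).encode("utf-8"),
        )

For larger files, a Glue job (managed Spark) or an Athena join would scale better than in-memory pandas.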
As always, Amazing and Awesome.
Very informative & insightful. Thanks, @techprimers
Thanks Arvind
Great video, and congrats on being an AWS Community Builder! AWS Data Pipeline looks like a competitor to Airflow; the architect UI is similar to Airflow DAGs, interesting! Also, curious to know what you use for presentations, Keynote or PowerPoint?
Thank you Vishal. I use Google Slides.
Great video once again, Ajay :). Is the data ingestion continuous or done at intervals?
Since this is batch, it's done in an ad hoc fashion. Whatever data is present in the file is ingested all at once.
Could you please give some clarity on EC2 instance role creation at the time of data pipeline creation?
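For context, Data Pipeline uses two IAM roles: a pipeline role the service assumes and a resource role the EC2 instance assumes. A minimal sketch of setting them on the pipeline's Default object via boto3, using the documented default role names (the pipeline ID is hypothetical):

    import boto3

    datapipeline = boto3.client("datapipeline")

    # Fields on the "Default" object are inherited by every object in the pipeline
    datapipeline.put_pipeline_definition(
        pipelineId="df-0123456789ABCDEF",  # hypothetical pipeline ID
        pipelineObjects=[
            {
                "id": "Default",
                "name": "Default",
                "fields": [
                    # Role assumed by the Data Pipeline service itself
                    {"key": "role", "stringValue": "DataPipelineDefaultRole"},
                    # Instance-profile role assumed by the EC2 resource
                    {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
                ],
            }
        ],
    )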
Great video.. Thumbs up..
Nice video. Is it possible to run multiple instances of a single Lambda in parallel to perform serverless processing?
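Yes, Lambda scales out concurrent executions automatically. A minimal sketch of fanning out asynchronous invocations (the function name and payload shape are hypothetical):

    import json
    import boto3

    lam = boto3.client("lambda")

    # Each async invocation runs as its own concurrent execution
    for chunk_id in range(10):
        lam.invoke(
            FunctionName="my-processor",   # hypothetical function name
            InvocationType="Event",        # async: don't wait for the result
            Payload=json.dumps({"chunk": chunk_id}).encode("utf-8"),
        )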
Nice one. One question: next time, when another file is uploaded to S3 and we activate the pipeline, will it try to load the first file as well? If yes, how do we avert that?
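If the pipeline's input points at a prefix, re-activation would pick up everything still sitting there. One common workaround (a sketch, not from the video; bucket, prefix, and key names are hypothetical) is to move each processed file under an archive prefix after the run:

    import boto3

    s3 = boto3.client("s3")

    def archive(bucket, key):
        # Copy the processed file under an archive/ prefix, then remove the original
        s3.copy_object(
            Bucket=bucket,
            CopySource={"Bucket": bucket, "Key": key},
            Key=f"archive/{key}",
        )
        s3.delete_object(Bucket=bucket, Key=key)

    archive("my-input-bucket", "orders.csv")  # hypothetical names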
Great job...👍👍👍
Is it possible to do an update item? Or will it only do a put item into DynamoDB?
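If you need update semantics rather than whole-item puts, DynamoDB's UpdateItem API can modify attributes in place, e.g. from a Lambda step. A minimal sketch with hypothetical table, key, and attribute names:

    import boto3

    table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table name

    # Update a single attribute without replacing the whole item
    table.update_item(
        Key={"id": "1"},
        UpdateExpression="SET #s = :status",
        ExpressionAttributeNames={"#s": "status"},  # "status" is a reserved word
        ExpressionAttributeValues={":status": "PROCESSED"},
    )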
Can you build a scalable pipeline working on real-time data, using services like Kinesis?
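For real-time ingestion, a producer would push records into a Kinesis stream instead of dropping files in S3. A minimal sketch, assuming a hypothetical stream name and record shape:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    record = {"order_id": 1, "amount": 42.0}  # hypothetical record shape
    kinesis.put_record(
        StreamName="orders-stream",            # hypothetical stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=str(record["order_id"]),  # spreads records across shards
    )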
Brother, how do you know so many topics? Really great 👍👍
By learning anything that's new and trying to practice it in my free time.
Please upload an Airflow cluster setup and data pipeline video.
There are much cheaper ways, such as Athena CTAS or even Glue. Nice video, the concept is good.
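For reference, a CTAS (CREATE TABLE AS SELECT) query in Athena materializes its result straight back to S3. A minimal sketch via boto3, with hypothetical database, table, and bucket names:

    import boto3

    athena = boto3.client("athena")

    # CTAS: run a SELECT and materialize the result as a new table on S3
    athena.start_query_execution(
        QueryString="""
            CREATE TABLE orders_parquet
            WITH (format = 'PARQUET', external_location = 's3://my-bucket/orders-parquet/')
            AS SELECT * FROM orders
        """,
        QueryExecutionContext={"Database": "my_db"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
    )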
Guys, if you're stuck, save the file as .txt and change the data type markers to lowercase "s", "n", "l".
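For illustration, assuming the DynamoDB JSON style shown in the demo, a line in the import file would look like this (attribute names and values are hypothetical; note the lowercase type markers for string, number, and list):

    {"id": {"n": "1"}, "name": {"s": "Alice"}, "tags": {"l": [{"s": "aws"}]}}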
There is a lot a developer can do here.