It would be more helpful if you explain on a business use-case example...
Great video! So Kinesis Data Analytics in a way comes over and around the projected messages?
Wouldn’t it be better to write the raw events to Kinesis first and then store them to S3 via some Kinesis consumer?
My thoughts exactly. Same with the second Kinesis stream: the Data Analytics app pushes to that, and then another Lambda reads it separately. You could just connect the Lambda to the Kinesis stream directly.
Yes. Raw data comes from IoT Core to Kinesis Firehose. Since Firehose can deliver directly to S3, a separate Kinesis consumer is not necessary; the KCL would just add additional infrastructure to support.
He stated that they do/could use transforms at the Kinesis Data Analytics stage. That means the data leaving that service would already be processed and no longer available to store as raw data.
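The suggestion above about connecting the Lambda straight to the Kinesis stream (instead of going through a second Firehose) can be sketched as a plain Kinesis-triggered Lambda handler. This is a minimal illustration, not code from the video; the payload shape (`{"temp": ...}`) is an assumption, but the event structure (base64-encoded data under `record["kinesis"]["data"]`) is the standard Kinesis event source format.

```python
import base64
import json

def handler(event, context):
    # Minimal sketch of a Lambda subscribed directly to a Kinesis Data Stream
    # via an event source mapping (no intermediate Firehose needed).
    # Each record's payload arrives base64-encoded in record["kinesis"]["data"].
    readings = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        readings.append(json.loads(payload))
    # Downstream logic (alerts, writes, etc.) would go here.
    return readings
```

The event source mapping itself (stream ARN, batch size, starting position) is configured on the Lambda side, so the analytics application only has to write to the stream.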
Thank you for sharing this architecture!! It was explained so simply and thoroughly! I'm an AWS student and I understood it with ease! I always enjoy these videos!! Thanks again! Cheers!!!
It's our pleasure! 😀 🙌 💻
Excellent, wonderful, absolutely!
This is a great architecture overview! I am curious: why do you use Kinesis Firehose for raw data storage in S3? Can't we send the data directly from IoT Core to S3 via an additional IoT rule action?
If you send data directly from IoT Core to S3, each message from every device becomes one object in S3. With Firehose, you can batch them together.
@@ZuyangLiu Thanks.
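The batching point in the reply above can be illustrated with a small sketch. This is not AWS-internal code, just a toy model of what Firehose's `BufferingHints` achieve: incoming records are accumulated and flushed as one combined S3 object per batch, whereas a direct IoT-to-S3 action writes one object per message. The threshold value is an assumption mirroring Firehose's default-ish 5 MB size hint.

```python
def batch_records(records, max_batch_bytes=5 * 1024 * 1024):
    """Toy illustration of Firehose-style buffering: group raw records
    into batches no larger than max_batch_bytes, so each batch becomes
    a single S3 object instead of one object per record."""
    batches, current, size = [], [], 0
    for rec in records:
        # Flush the current batch before it would exceed the size hint.
        if size + len(rec) > max_batch_bytes and current:
            batches.append(b"".join(current))
            current, size = [], 0
        current.append(rec)
        size += len(rec)
    if current:
        batches.append(b"".join(current))
    return batches
```

With 1,000 small device messages, a direct IoT action would create 1,000 S3 objects; the buffered approach produces a single object, which is far cheaper to store, list, and query.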
I don't see the point of the second Data Firehose. Kinesis Data Analytics should be able to write processed data to the data lake on S3 directly, without needing another Firehose for ingest or queuing.
You need it for Lambda too
I wonder if I can do this with AppSync? A web app polling for data every few seconds is a bad solution for real time.
helpful! thanks
Glad you found the video helpful, Ajinkya! 😊 ^KS
3 minutes in and a dozen services and I still have zero idea what you guys actually did with those temperature data points
Display them on a Webserver on the right side
The guy in the blue t-shirt acts like an intern.
Second guy completely useless ...
'yeah, absolutely'
That's quite shallow
Why not use Kinesis Data Streams instead of Kinesis Firehose at the beginning, since we're not doing any processing? Especially when Kinesis Data Streams is real-time, unlike Firehose.
Kinesis Data Firehose can push data directly to S3, which Data Streams cannot; hence the choice.
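That "direct to S3" point can be sketched as the Firehose setup it implies: a single delivery stream definition handles buffering and the S3 writes, whereas with Kinesis Data Streams you would have to run your own consumer to do the writes. This is an assumed configuration, not the video's actual setup; the stream name, role ARN, and bucket ARN are placeholders, but the parameter names match the boto3 `create_delivery_stream` API.

```python
# Sketch of a Firehose delivery stream that writes straight to S3.
# All names/ARNs below are hypothetical placeholders.
delivery_stream_config = {
    "DeliveryStreamName": "iot-raw-data",        # hypothetical
    "DeliveryStreamType": "DirectPut",           # producers put records directly
    "ExtendedS3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",  # placeholder
        "BucketARN": "arn:aws:s3:::iot-raw-data-lake",               # placeholder
        # Flush a combined object every 5 MB or 60 s, whichever comes first.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
    },
}
# boto3.client("firehose").create_delivery_stream(**delivery_stream_config)
```

With Data Streams, the equivalent would be this plus a consumer application (KCL or Lambda) that you write, deploy, and scale yourself just to land the data in S3.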