Very nice! Particularly useful for storing/processing large volumes of event-type data, possibly derived from an event-streaming platform like Kafka. The intelligent migration of data from in-memory storage to HDD based on expected access patterns will also save a ton of time; I was initially looking at writing a tool to do this from scratch for my use case.
A good set of initial assumptions about the audience
Is there a way we can ingest data older than 12 months?
Hello! This doc details how you can modify the retention duration in Amazon Timestream: go.aws/3t2Gpq4. If this isn't quite it, feel free to ask your questions directly on re:Post for experts to weigh in: go.aws/aws-repost. 📝 ^LG
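For anyone who prefers doing this programmatically rather than through the console, here is a minimal sketch of adjusting retention with boto3's Timestream Write client. The database/table names and the retention values are placeholders for illustration; check the linked doc for the limits that apply to your account.

```python
def retention_properties(memory_hours: int, magnetic_days: int) -> dict:
    """Build the RetentionProperties payload for a Timestream table.

    Recent data lives in the memory store (fast queries); once it ages
    past memory_hours, Timestream moves it to the magnetic (HDD-backed)
    store, where it is kept for magnetic_days before being dropped.
    """
    return {
        "MemoryStoreRetentionPeriodInHours": memory_hours,
        "MagneticStoreRetentionPeriodInDays": magnetic_days,
    }


def update_retention(database: str, table: str,
                     memory_hours: int, magnetic_days: int) -> None:
    # boto3 imported lazily so the payload helper above stays dependency-free
    import boto3

    client = boto3.client("timestream-write")
    client.update_table(
        DatabaseName=database,
        TableName=table,
        RetentionProperties=retention_properties(memory_hours, magnetic_days),
    )


# Example (placeholder names): keep 12 hours in memory, ~1 year on HDD
# update_retention("my_events_db", "my_events_table", 12, 365)
```

Note that lengthening magnetic-store retention going forward does not backfill data that was already aged out under the old setting.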
I understand that duplicate records are rejected; are these still charged? I would guess yes, but wanted to confirm.
This is awesome!
Cool