AWS re:Invent 2020: Data modeling with Amazon DynamoDB - Part 1
- Published: 2 Jul 2024
- Amazon DynamoDB is popular due to its flexible billing model and ability to scale without performance degradation. It is a common choice in serverless and high-scale applications. But modeling your data with DynamoDB requires a different approach than modeling in traditional relational databases. Alex DeBrie is an AWS Data Hero, recognized for his work with DynamoDB, and author of The DynamoDB Book, a comprehensive guide to data modeling. In Part 1 of this two-part session, see how modeling with DynamoDB differs from modeling in a traditional relational database, and learn some foundational elements of data modeling with DynamoDB.
Learn more about re:Invent 2020 at bit.ly/3c4NSdY
Subscribe:
More AWS videos bit.ly/2O3zS75
More AWS events videos bit.ly/316g9t4
#AWS #AWSEvents
This is such a freaking wonderful presentation. So nice and clear with the examples. Excellent.
Very engaging presentation with useful techniques outlined clearly. It does pose the question of where the dividing line between a design pattern and a workaround for missing functionality lies.
The best video on dynamodb modeling
Maybe Mr. Rick Houlihan can continue helping our AWS community. And I hope he takes a look at how he introduces each topic or guest: intonation and speed could be a little better, with less flat talking. Despite all that, I really thank all of you for every detail in this video. This is not a daily news report but an updating event. Thanks.
Awesome presentation. worth every minute!
Excellent video. I love DynamoDB
Thanks, this was very informative
It's still very informative in 2021
Beautiful
Is there a playlist for the "Re:Invent" videos? I see one for the "Re:Inforce" ones?
DynamoDB to the rescue! It's hard but it's hawd
Interested to know how DynamoDB can handle agile requirements, as schemas and requirements change...
How does AWS Amplify handle data access patterns since it doesn’t ask the user to define them when the user is defining relationships? I assume it just sets up a lot of global secondary indexes which will consume way more WCUs and make it way more expensive to add to your data than if you set everything up manually
DynamoDB has its applications, but I don't like the pitch as a general-purpose database.
It's basically: _yea but what if you want to scale your TODO application to the moon?? all it takes is just spewing inconsistent data everywhere and reinventing JOINs using string concatenation._
I want to see how that actor/movie key works when you have something like Eddie Murphy and The Nutty Professor. Maybe that just wasn't a good example table, or I'm missing something? FYI, in that movie (like many other movies), the actor plays more than 1 role.
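One common way to handle the multiple-roles case (not necessarily what the talk's example table did) is to fold the role into the sort key, so each actor-role pair becomes a distinct item. A minimal sketch, with made-up key shapes:

```python
# Hedged sketch: one item per actor-role pair, with the role folded into
# the sort key so "Eddie Murphy in The Nutty Professor" yields two items.
# Key shapes (PK/SK prefixes) are illustrative assumptions.
def make_item(actor: str, movie: str, role: str, year: int) -> dict:
    return {
        "PK": f"ACTOR#{actor}",
        "SK": f"MOVIE#{movie}#ROLE#{role}",
        "year": year,
    }

items = [
    make_item("Eddie Murphy", "The Nutty Professor", "Sherman Klump", 1996),
    make_item("Eddie Murphy", "The Nutty Professor", "Buddy Love", 1996),
]

# A DynamoDB Query with PK = "ACTOR#Eddie Murphy" and
# SK begins_with "MOVIE#The Nutty Professor" would return both role
# items; here we simulate that key-condition filter locally.
matches = [i for i in items if i["SK"].startswith("MOVIE#The Nutty Professor")]
print(len(matches))  # one item per role
```

Because the sort key is now unique per role, neither item overwrites the other, and a `begins_with` condition on the movie prefix still retrieves all of an actor's roles in that film in one query.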
I don't have a good understanding of joins in sql. At 10:55, which part of hashed partitions prevent nosql from having joins?
Because the data is split up into 10 GB chunks across multiple partitions, you can't fetch data from other partitions in the same operation. A SQL join combines information from multiple tables matching a specific condition, which means it has to search all of the rows for items that meet that condition. To maintain its speed, DynamoDB opts not to allow operations like this. Sorry this is 2 months after you asked the question, but I hope my answer makes sense!
@@zanderkrasny7132 Even if multiple partitions are split across multiple computers, NoSQL could still (technically) offer joins by fetching across the different partitions, no?
Unless you are saying this would make it slow to the point where it would be unusable, and SQL solves this problem by having all tables on the same shard/partition/computer? From my understanding, however, the whole NewSQL movement is about providing sharding for SQL databases. If they solve this problem, why can't NoSQL databases also solve it?
@@samlaf92 From what I understand, it's not that they couldn't implement join operations across partitions, it's that it would defeat the performance goals of partitioned NoSQL. If you're searching every partition for what you're looking for, you lose the ability to run an O(1) hash and then search only 10 GB, because you have to go through all of the rest of your data, which can be massive. I don't know how NewSQL gets around this; I haven't looked into it.
I love your work, Alex. You just say "right" far too much. :)
Right?!
Kidding aside, I've noticed this recently though. Trying to work on it :)
@@alexbdebrie Actually, I noticed :) When I saw your re:Invent talk I immediately went and bought your book. keep being awesome!
@@alexbdebrie kudos for checking comment section. don't lose your enthusiasm when you get to be the most seniorest dev in your company or in world or smth.
@@Yusuf-ok5rk You're welcome! And if I ever lose the enthusiasm, don't let me give talks anymore :)
7:35 that flat line violates the second law of thermodynamics to say the least. :/
ruclips.net/video/fiP2e-g-r4g/видео.html - Meanwhile, Eddie Murphy - "Let me screw with that guy"
are you in a rush? hahaha
I think the video is sped up. I did a custom playback speed of 0.9x and it sounded much more natural