I see a new Martin Kleppmann video, I click.
~3k folks finished this series. Good job everybody! Also, thank you Martin for providing all this high-quality content for free on the Internet!
10x now :)
@@filmfranz for high quality content like this, the numbers will keep growing for many more years to come 💓
Blows my mind every time I watch one of these videos
Great series for Distributed Systems.
This will be the greatest lecture series about distributed systems you'll ever see! Thank you Dr. Kleppmann.
This series is definitely a blessing to the world, no second thought.
Great series for distributed systems
You commented twice fool
@@quagmirecat yes because i absolutely love this lecture series!
Great series for Distributed Systems.
Great series for distributed systems
nice lecture course g
Google's Spanner
Consistency properties: [0:30]
Techniques [1:22]
State machine replication (Paxos) within a shard
Two-phase locking for serializability
Two-phase commit for cross-shard atomicity
Interesting: read-only transactions require no locks [2:20]
Consistent snapshots [3:27]
Snapshots must be consistent with causality [4:25]
Approach: multi-version concurrency control (MVCC) [4:50]
Obtaining commit timestamps [7:13]
Lamport clocks are not sufficient ❌ [7:54]
TrueTime: explicit physical clock uncertainty [10:15] (see the sketch after this outline)
[t_earliest, t_latest]
Determining clock uncertainty in TrueTime [13:58]
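A minimal sketch of how the [t_earliest, t_latest] interval and the commit-wait rule fit together. This is not from the lecture itself: `TTInterval`, `truetime_now`, `commit_with_wait` and the ~5 ms uncertainty are invented here for illustration.

```python
import time
from dataclasses import dataclass

@dataclass
class TTInterval:
    earliest: float  # true physical time is guaranteed to be >= this (seconds)
    latest: float    # true physical time is guaranteed to be <= this (seconds)

def truetime_now(uncertainty: float = 0.005) -> TTInterval:
    """Hypothetical TrueTime-style API: returns an interval that is
    guaranteed to contain the true physical time (here ~±5 ms)."""
    now = time.time()
    return TTInterval(now - uncertainty, now + uncertainty)

def commit_with_wait() -> float:
    """Pick a commit timestamp and perform the 'commit wait'.

    The timestamp is the latest bound of the current interval, so the true
    time right now is certainly <= ts.  We then wait until the earliest bound
    of a later interval exceeds ts, so the true time is certainly > ts before
    the commit becomes visible.  Any transaction that starts after that point
    therefore receives a strictly larger timestamp."""
    ts = truetime_now().latest
    while truetime_now().earliest <= ts:   # wait out the clock uncertainty
        time.sleep(0.001)
    # ...only now release locks / make the writes visible at timestamp ts...
    return ts
```

Note that the uncertainty intervals of concurrent, causally unrelated transactions can still overlap; the wait only guarantees that a transaction starting after another one has committed gets a strictly larger timestamp, which is what causal consistency requires.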
Great series, thanks! I feel chapter 4 is the fundamental building block of a lot of the followed lectures/concepts/algorithms and I'm going to review it again.
Great series for Distributed Systems.
I watched the whole series from the beginning to this final episode. I am not a Cambridge student, but I want to fill out the evaluation form and rank you as the best lecturer.
Hey Martin, I want to say thank you for this course. It covers good base principles and algorithms, which didn't change my software architecture decisions but made me more confident in them.
Very good video sir, I liked it very much, thank you.
Great series for distributed systems; it fills some gaps left by the amazing book you wrote. I hope to see more about the last chapter of the book. The implementation examples also added a lot for me.
Just completed. Thanks for keeping this public so we can learn from the best.
Great course and very clear teaching style! Thank you very much for making it publicly available. Looking forward to reading your book on data-intensive applications.
Thanks a lot. One of the best lecture series I've ever done.
Super grateful for this series. Very cohesive set of concepts!
Thank you very much for making such a nice course publicly available!
Thank you for your hard work, Martin.
Thanks Martin for such a high-quality lecture.
Thanks so much for sharing these videos on RUclips. I learned a lot and really enjoyed your explanations.
very informative
Thank you, Martin! Very interesting course and easy to understand style of teaching!
Great lecture series on distributed systems, which I work on. Thanks very much for sharing these videos. @Martin
Great series on distributed systems!!
Thank you for sharing your knowledge for free sir.
Such a good lecture on distributed systems.
Enjoyed the series a lot! Thanks!
9:39 Is it true that there is no way to propagate time on the message? If the only interface the user has to the database is through a transaction, then reads done within the transaction can be used to propagate the timestamp to the write through the transactional context, right?
Thank you so much! I enjoyed this series!
Amazing series on distributed systems. I learnt a lot of things, thanks for sharing this.
What an excellent video - thank you!
Thank you so much for such an amazing series!
Are A and B replicas or shards?
Replicas need to be eventually consistent; in Spanner they are linearizable, so an even stronger consistency guarantee.
So if a user makes T1 on replica A, T1's changes need to be propagated to replica B anyway, so Lamport timestamps can work, right?
I can understand why Lamport timestamps won't work if A and B are shards and probably won't talk to each other.
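For what it's worth, the failure case in the lecture doesn't depend on A and B never talking: replication would eventually carry T1 to B, but the causal link from T1 to T2 travels through the user, outside the system, so no Lamport timestamp is attached to that hop. A toy illustration (the `LamportNode` class and the numbers are invented here, not from the lecture):

```python
class LamportNode:
    """Toy replica with a Lamport clock: the counter advances only on local
    events and on timestamps carried by messages this node actually receives."""
    def __init__(self) -> None:
        self.counter = 0

    def local_event(self) -> int:
        self.counter += 1
        return self.counter

    def receive(self, ts: int) -> None:
        self.counter = max(self.counter, ts)

a, b = LamportNode(), LamportNode()
for _ in range(10):          # replica A has processed some unrelated work
    a.local_event()

t1 = a.local_event()         # T1 commits on A with Lamport timestamp 11

# The user observes T1's result and then issues T2 against B.  That causal
# hop goes through the user, not through a message between A and B, so
# b.receive() is never called before T2 is assigned its timestamp.
t2 = b.local_event()         # T2 commits on B with Lamport timestamp 1

assert t2 < t1               # the causally later transaction got the smaller timestamp
```

Replication from A to B would later bump B's counter, but a snapshot taken at an intermediate timestamp could then include T2 without T1, which is exactly the causal-consistency violation the lecture is guarding against.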
Awesome video! I have one question: at 13:08, how does Spanner ensure that there is no overlap between the time uncertainty ranges of two transactions?
Great course for those who are too lazy to read your book (like me).
Thanks a lot!
It would be wonderful if you made the "Concurrent Systems" lectures public too.
Is it that the uncertainty intervals cannot overlap, or is it that they’re just extremely unlikely to overlap? I couldn’t spot a reason why the time API wouldn’t return a large enough interval for the second transaction to overlap the first.
Amazing series, thanks a lot!
Nicely done! Amazing lectures.
How does Spanner handle clock synchronization across data centers?
Thanks for sharing such a good series.
How do we handle obtaining a commit timestamp in a Raft database without a physical clock? Just two-phase commit?
Super interesting. I suppose this type of design can support a few thousand transactions per second max with worldwide atomic MVCC. Also, there seems to be a risk of corruption if a node suffers some kind of catastrophic crystal oscillator failure: it might commit a transaction too early.
Thanks for your lectures!
Great intro to Spanner!
I like your books and thanks for the great explanation
Thank you very much.
Google is awesome
Thank you
TrueTime - it's like the Heisenberg uncertainty principle.
Thanks Martin
Thank you!
Thanks!
Awesome!
I don't think the TrueTime part is very clear. According to another video from GCP, the TrueTime timestamp is used to reduce (not eliminate!) the communication with the leader for a strongly consistent read.
ruclips.net/video/iKQhPwbzzxU/видео.html&ab_channel=Devoxx
For example, when a client's strongly consistent read request hits one of the follower replicas, the follower only needs to send the TrueTime timestamp (a very small network load) to the leader to learn whether it has up-to-date data and can reply to the client directly, or how long it needs to wait until it can reply, instead of forwarding the query itself to the leader.
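If it helps, here is a rough sketch of the follower-side check that comment describes. The objects and method names (`leader.latest_commit_timestamp`, `follower.applied_timestamp`, `follower.read_at`) are hypothetical placeholders, not Spanner's real API.

```python
import time

def strong_read(follower, leader, key):
    """Handle a strongly consistent read at a follower replica.

    Instead of forwarding the whole query, the follower exchanges only a
    timestamp with the leader: it learns how fresh the read must be, waits
    until its own replicated state has caught up to that timestamp, and
    then answers the client from its local MVCC snapshot."""
    read_ts = leader.latest_commit_timestamp()     # tiny message, no query payload
    while follower.applied_timestamp() < read_ts:  # wait for replication to catch up
        time.sleep(0.001)
    return follower.read_at(key, read_ts)          # serve the snapshot read locally
```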
I've read the Designing Data-Intensive Applications book as well; these lectures complement the book very well.
The only problem explained in the book that this design doesn't cater for is GC/VM pauses. That's probably because Google knows how its hardware is configured and doesn't use VMs, for example. Otherwise, a pause could stretch the 30-second clock-synchronization interval to something much longer.
Great series for distributed systems
Great series for Distributed Systems.
Thank you!
Great series for distributed systems
Great series for Distributed Systems.
great series for distributed systems
Great series for Distributed Systems.
Great series for distributed systems.
Great series for distributed systems
Great series for Distributed Systems.
Great series for Distributed Systems.
Great series for distributed systems
Great series for Distributed Systems.
Great series for distributed systems
Great series for distributed systems
Great series for distributed systems.
great series for distributed systems
Great series for distributed systems
Great series for distributed systems
Great series for distributed systems
great series for distributed systems
Great series for distributed systems
Great series for distributed systems
Great series for distributed systems
Great series for distributed systems
Great series for distributed systems.
Great series for distributed systems
Great series for distributed systems
Great series for distributed systems