Folks, apologies for the background noise. I never seem to get the tech right 😅
Thank you for watching, I am looking forward to seeing you again soon 🦸
Hey man, can we get a discount on the course again? I wanted to buy it during the Festival Sale but missed it, and the price hasn't come down since.
Hoping there will be some discount for Diwali.
Thanks for the effort, man… your videos are great and a huge help.
Cheers!
Great summary video Gaurav 👏
A few pointers you may consider covering:
* Lease concept to mitigate stale sets and the thundering herd on the persistent DB
* McRouter intermediary component to batch invalidation requests and minimise network congestion
* Remote Marker concept to tackle the stale set problem arising from eventual consistency during cross-region replication from leader to follower
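For anyone curious how the lease idea works, here's a toy, in-process sketch of the mechanism from the paper (this is not the real memcached protocol; `ToyMemcache` and its method names are made up). On a miss the server hands out a lease token, only the latest token holder may fill the key, and new leases for a key are rate-limited so a hot miss doesn't stampede the DB.

```python
import time
import uuid

class ToyMemcache:
    """Toy in-process cache illustrating the paper's lease idea (not real memcached)."""

    LEASE_REISSUE_SECONDS = 10  # the paper mentions roughly one lease per key every ~10s

    def __init__(self):
        self.data = {}    # key -> value
        self.leases = {}  # key -> (token, issued_at)

    def get(self, key):
        """Return (value, lease_token). On a miss, maybe hand out a lease."""
        if key in self.data:
            return self.data[key], None
        token, issued_at = self.leases.get(key, (None, 0.0))
        if token is None or time.time() - issued_at > self.LEASE_REISSUE_SECONDS:
            token = uuid.uuid4().hex
            self.leases[key] = (token, time.time())
            return None, token   # caller should fetch from the DB and call set()
        return None, None        # someone else holds the lease: back off and retry

    def set(self, key, value, lease_token):
        """Accept the fill only if the lease is still the latest one for the key."""
        current = self.leases.get(key)
        if current is None or current[0] != lease_token:
            return False         # stale set prevented
        del self.leases[key]
        self.data[key] = value
        return True

    def delete(self, key):
        """Invalidation also voids outstanding leases, so in-flight fills get rejected."""
        self.data.pop(key, None)
        self.leases.pop(key, None)
```

A client that gets `(None, None)` simply waits a little and retries the get instead of hammering the database, which is what tames the thundering herd.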
Thanks Piyush!
Beautifully explained, thanks a lot!
Amazing video! Loved it! Hoping to get more videos on whitepaper series soon!
Thank you Gaurav for teaching us 🙏🏽 This kind of knowledge is out of reach for us older, self-taught developers.
Cheers!
Thanks a lot Gaurav 🙏 It's always some value addition to my design knowledge 👌 Thanks a lot ❤
Thank you 😁
Great video. Could you explain how cache sizes are chosen, whether it's use-case dependent, and how it adapts to changing use cases?
What's better than Gaurav explaining a concept? Two Gauravs XD
Cheers :D
Thank you very much bro❤
Gaurav, @25:35 what's the purpose of this standby Gutter instance when you already have (many) replicated instances?
Assuming all the production replicated memcached instances are running at 60-70% capacity.
It seems like a way to limit the impact of failing requests.
With an assigned gutter instance, the connections made by a client will be limited to (routed instances + gutter instances). Else it could hit every instance and choke up connections.
That's just a guess. I will read the paper again to be sure :p
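For what it's worth, my reading of the paper is that the client simply retries a failed request against the small gutter pool and writes the fetched value there with a short TTL, so a dead server's load is absorbed by a few dedicated machines instead of being rehashed across the whole fleet. A rough sketch of that client-side behaviour (the helper names and client interface are assumptions):

```python
def cache_get(key, primary_pool, gutter_pool, db, gutter_ttl=10):
    """Look-aside read with gutter fallback, roughly as described in the paper.

    primary_pool / gutter_pool are assumed to expose get/set like a memcache client;
    db.query(key) stands in for the backing store.
    """
    try:
        value = primary_pool.get(key)   # normal consistent-hashed lookup
        if value is not None:
            return value
    except ConnectionError:
        # Primary server is unreachable: fall back to the gutter pool instead of
        # rehashing the key onto the remaining servers (which could overload them).
        value = gutter_pool.get(key)
        if value is not None:
            return value
        value = db.query(key)
        gutter_pool.set(key, value, ttl=gutter_ttl)  # short TTL: entries age out fast
        return value

    # Ordinary miss: fill the primary pool as usual.
    value = db.query(key)
    primary_pool.set(key, value)
    return value
```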
Amazing paper 😮 31:40
Great content!! Thank you!
Thank you!
How can sharding be replaced by replication?
❤ you brother.
Why can't they use Redis? Was Redis not around in 2010, or was it not feasible for their use case?
Redis didn't exist at that time. Memcached came out in 2003; Redis only arrived in 2009.
The Facebook team was well-versed with Memcached by 2010.
What if they had used a configuration provider like Kafka for the sharding approach? Obviously it wasn't available then... just a thought...
I don't see how that would help. Could you elaborate on the thought?
So, in replication, we will have a replica of the whole Facebook database in a cache (multiple times)? Can you please clarify?
We will have as much data from the DB as we can store in-memory.
@gkcs In case we don't have that in the cache, we will get it via a DB query and update the cache?
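That's the demand-filled look-aside pattern from the paper: read from the cache, on a miss query the DB and put the result back, and on writes update the DB and then delete the cached key so the next read repopulates it. A minimal sketch (the cache/db objects and their methods here are placeholders, not a real client API):

```python
def read(key, cache, db):
    """Look-aside read: cache first, fall back to the DB and repopulate on a miss."""
    value = cache.get(key)
    if value is None:
        value = db.query(key)   # miss: go to the source of truth
        cache.set(key, value)   # repopulate so the next read is served from memory
    return value

def write(key, value, cache, db):
    """Write to the DB, then delete (not update) the cached key, as the paper does."""
    db.update(key, value)
    cache.delete(key)           # next read pulls the fresh value from the DB
```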
Most popular question these days:
Design a distributed counter where we can see bursts of writes on the counter.
Multiple solutions:
1. Range distribution
1.1. Once the range distributor has exhausted all ranges and some ranges are still available on other app servers, how to borrow order IDs from neighbouring app servers.
1.2. Committing the order, since we want atomicity as well.
2. Sharding
It's a good topic to cover :)
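On the range-distribution idea above, here's a toy sketch of a central range distributor handing out blocks of IDs to app servers, with each server incrementing locally and asking for a new block when its range runs out (all names are made up; borrowing from a neighbour's unused range would sit on top of this):

```python
import threading

class RangeDistributor:
    """Hands out disjoint blocks of IDs so app servers can increment locally."""

    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.next_start = 0
        self.lock = threading.Lock()

    def allocate_block(self):
        with self.lock:
            start = self.next_start
            self.next_start += self.block_size
            return start, start + self.block_size   # half-open range [start, end)

class AppServerCounter:
    """Serves IDs from its current block; fetches a new block when exhausted."""

    def __init__(self, distributor):
        self.distributor = distributor
        self.current = self.end = 0   # empty block forces an allocation on first use

    def next_id(self):
        if self.current >= self.end:
            self.current, self.end = self.distributor.allocate_block()
        value = self.current
        self.current += 1
        return value

# Two app servers drawing from the same distributor never hand out the same ID:
dist = RangeDistributor(block_size=3)
a, b = AppServerCounter(dist), AppServerCounter(dist)
print([a.next_id() for _ in range(4)], [b.next_id() for _ in range(2)])  # [0, 1, 2, 3] [6, 7]
```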
In the final section (data consistency) at 29:30, when we are using binlogs, how do they resolve data conflicts between the India server and the US server?
Does McSQUEAL handle that, or is it just a rollback?
They wait for the problem to resolve itself. Eventual consistency.
🙂👍🏻💯