Absolutely love recording this series! The discussions are so raw and real, hope you all are also liking it!
Yes... Great, thanks!
12:55 Here in the cache you can store the data in a priority-queue-like data structure at a low level, where the top riders (say 10) ranked by proximity, rating, and fairness get the notification for the order. And if none of them accepts within, say, 1 minute, then the next 10 will be notified. This way we can limit the size of the cache and fetch only the top required data from the central DB into memory.
Yes, that's a great idea!
Yes, that's how it works.
This introduces a bias.
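A minimal sketch of the batching idea above. The scoring formula and the notification/acceptance helpers are hypothetical, just to show the shape of the priority queue and the "next 10 after a minute" fallback:

```python
import heapq

def score(rider):
    # Hypothetical ranking: weighted mix of distance to the order,
    # rating, and idle time since the last order (fairness).
    # Lower score = better match; the weights are made up.
    return rider["distance_km"] - 0.5 * rider["rating"] - 0.1 * rider["idle_minutes"]

def notify_in_batches(riders, order, batch_size=10, window_s=60):
    heap = [(score(r), i, r) for i, r in enumerate(riders)]
    heapq.heapify(heap)  # priority queue: best-ranked riders pop first
    while heap:
        batch = [heapq.heappop(heap)[2]
                 for _ in range(min(batch_size, len(heap)))]
        send_notifications(batch, order)          # hypothetical push helper
        if wait_for_acceptance(batch, window_s):  # hypothetical: True once a rider accepts
            return True
    return False  # nobody accepted; fetch more candidates from the central DB
```

Popping in score order is what makes the fallback to the next batch cheap: no re-sorting between rounds.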
Congrats Guys, could get a lot of insight from this talk! Rock on! 😁
Why do we want to shard based on city when all the data can easily fit in the memory of a single Redis server?
We can have 2-3 replicas of the primary Redis server for availability and for serving read requests.
Model: rider id (UUID, which takes 16 bytes), status (1 byte for online/offline), location (52 bits, ~7 bytes, as per the Redis geohash docs).
Assuming riders = 10^9 (1 billion), the size estimate becomes 10^9 * (16 + 1 + 7) = 24 GB.
This can easily fit in a Redis server and serve ~100k QPS per replica, which is a lot for this system.
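For illustration, a rough sketch of that model with redis-py; the key names and coordinates are made up:

```python
import redis  # pip install redis

r = redis.Redis()  # single primary; replicas would serve the reads

# Redis stores GEO members as 52-bit geohash scores inside a sorted set.
r.geoadd("riders:loc", (77.5946, 12.9716, "rider:123"))  # (lon, lat, member)
r.setbit("riders:online", 123, 1)  # 1 bit per rider for online/offline

# Find candidates within 5 km of an order's pickup point (Redis >= 6.2).
nearby = r.geosearch("riders:loc", longitude=77.60, latitude=12.97,
                     radius=5, unit="km")
```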
---
Liked the pushback on not maintaining a persistent connection. It makes sense and is logical. Clients polling every 2-3 seconds is a simple and effective solution. No need to complicate things by maintaining WebSocket servers or long-polling connections, which take up memory without adding any business value.
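For reference, client-side polling really is this simple; the endpoint URL here is an assumption:

```python
import time
import requests  # pip install requests

def poll_rider_location(order_id, interval_s=3):
    # One stateless GET every few seconds; the server holds no
    # long-lived connection state per client.
    while True:
        resp = requests.get(
            f"https://api.example.com/orders/{order_id}/rider-location")
        resp.raise_for_status()
        yield resp.json()  # e.g. {"lat": ..., "lng": ...}
        time.sleep(interval_s)
```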
In terms of implementation, would you do another video? That would be interesting.
Interesting design and discussion.
Amazing video as always, thanks Keerti and Gaurav
For delivery updates you can use Kafka.
First save the data in a Redis/distributed cache, and only later persist it to your central DB. Delivery partners should be ordered in a priority queue.
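A sketch of that flow with kafka-python; the rider-locations topic name and payload are assumptions. Consumers would update the Redis cache first and batch-persist to the central DB:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

# The rider app publishes updates; downstream consumers fan out:
# one keeps the Redis cache fresh, another batch-writes to the DB.
producer.send("rider-locations",
              {"rider_id": "rider:123", "lat": 12.9716, "lng": 77.5946})
producer.flush()
```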
Can you do a video similar to a food delivery service but with a specific focus on an ordering workflow and restaurant integration?
@KeertiPurswani, Gaurav, what tool/website do you use to draw the shapes for system design? The one in the video seems really smooth!!
Miro / Gliffy
Along with the API call from the customer to RiderManager, there should be a limited WebSocket connection as well.
When the customer is on the tracking screen or page, the API call fetches the latest updated location of the rider,
but the client also opens a WebSocket connection and sends a message (start_tracking) to get continuous location updates for the rider.
Once they wish to leave the tracking screen or page, the client sends a WebSocket message (stop_tracking) to close the connection.
I think this will fulfill all the requirements in an optimized way.
What do you think?
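A client-side sketch of that start_tracking/stop_tracking flow using the websockets library; the endpoint, message schema, and UI hooks are assumptions:

```python
import asyncio
import json
import websockets  # pip install websockets

async def track(order_id):
    # The socket lives only while the user is on the tracking screen.
    async with websockets.connect("wss://api.example.com/track") as ws:
        await ws.send(json.dumps({"type": "start_tracking",
                                  "order_id": order_id}))
        async for raw in ws:  # continuous rider-location pushes
            update = json.loads(raw)
            render_on_map(update["lat"], update["lng"])  # hypothetical UI hook
            if user_left_tracking_screen():              # hypothetical UI check
                await ws.send(json.dumps({"type": "stop_tracking",
                                          "order_id": order_id}))
                break  # exiting the block closes the socket

asyncio.run(track("order-123"))
```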
Don't you think that for updating the location it's better to have a separate service that delivery partners send location updates to, and which updates the cache, instead of the delivery matching service? It could handle the many persistent connections and can be scaled independently. Some days may have few orders and others a large number, which would make the load on the delivery matching service quite high, and we have to update the drivers' locations anyway. Or am I missing something?
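A sketch of such a dedicated location-ingestion service as a small Flask app writing to a shared Redis GEO key; the route and key name are illustrative:

```python
import redis
from flask import Flask, request

app = Flask(__name__)
cache = redis.Redis()

@app.route("/v1/riders/<rider_id>/location", methods=["PUT"])
def update_location(rider_id):
    body = request.get_json()
    # This service only writes to the shared cache; the matching
    # service reads from it, so the two scale independently.
    cache.geoadd("riders:loc", (body["lng"], body["lat"], rider_id))
    return "", 204
```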
What is this tool you use to draw?
Which keyboard does Gaurav have?
Hey Gaurav, can you explain the persistent connection?
A live connection between the two ends. Like when you are online on WhatsApp: you are receiving live chat messages and sending live chat messages.
Thank you!
Which tool are you using for designing?
It's called Miro
You guys designed a location-based service without discussing geohashes!
Disappointing!
🙏🙂👍
😢 nice
1st