I was reading Alex Xu and didn't get a good idea of sliding window and sliding window counter. Now, after watching your explanation, it is crystal clear, with pros and cons. Thank you for doing this!!
Came from same place. This video helped. :)
Narendra, very informative video, keep it up.
About locking in the case of a distributed token bucket, you can use the following technique:
Optimistic locking or conditional put - many NoSQL databases support a conditional put. This is how it works:
* Read the current value, say 9.
* Do a conditional put with value 10 only if the current value is still 9.
* When 2 concurrent requests try to update the value to 10, only one of them will succeed and the other will fail, as the current value for that request will already be 10.
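For illustration, a conditional put in DynamoDB via boto3 might look something like this (the table name, key, and attribute names are made-up assumptions, not from the video):

```python
# Hypothetical sketch of the conditional-put idea using DynamoDB (boto3).
# "rate_limits", "user_id", and "req_count" are illustrative names only.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("rate_limits")

def try_consume(user_id: str, current: int) -> bool:
    """Write current+1 only if the counter is still at `current`."""
    try:
        table.update_item(
            Key={"user_id": user_id},
            UpdateExpression="SET req_count = :new",
            ConditionExpression="req_count = :expected",
            ExpressionAttributeValues={":new": current + 1, ":expected": current},
        )
        return True          # our conditional put won the race
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False     # someone else updated the counter first; re-read and retry or reject
        raise
```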
My implementation takes advantage of Redis expiration. When a call comes in, I create a record and increment the value. Subsequent calls increment the value until the quota is reached. If the quota is not reached by the time the record expires, the next request causes creation of a new record and restarts the counter. This way I don't need to check and compare dates at any point. The code is very simple. Albeit I am not maintaining a perpetual quota, I am only preventing abuse, which is really the main gist of request throttling.
This is the way I implemented for my org also. Simple and served its purpose well.
How would you manage the concurrency here in Redis?
@@shelendrasharma9680 Redis is single-threaded.
this is indirectly fixed window counter
This is fixed window counter only.
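A rough redis-py sketch of the expiry-based counter described in this thread (key format, limit, and window length are placeholders, not the commenter's actual code):

```python
# Rough sketch of the expiry-based (fixed window) counter, using redis-py.
import redis

r = redis.Redis()

def allow_request(user_id: str, limit: int = 10, window_seconds: int = 60) -> bool:
    key = f"rl:{user_id}"
    count = r.incr(key)                  # atomic increment; creates the key at 1 if missing
    if count == 1:
        r.expire(key, window_seconds)    # first request of the window starts the TTL
    return count <= limit                # reject once the quota for this window is used up
```

One known gap in this simple form: if the process dies between INCR and EXPIRE, the key never expires; doing both inside a single Lua script (see the sketch further down) avoids that.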
04:16 Token bucket
10:40 Leaky bucket
12:50 Fixed window counter
16:15 Sliding logs
20:36 Sliding Window counter
25:21 Distributed system setup (Sticky sessions | locks)
Good Stuff Naren! Even famous profs are not able to explain this kind of stuff so clearly.
Thanks
Narendra, your videos are great resources for learning system design. Your explanation of concepts is crystal clear. Big thumbs up to you
Great explanation.
The pattern you followed is very good i.e. when you mention a problem with some approach, you also provide the solution for that instead of just identifying problems.
I think you're easily the best youtuber for system design content
Best explanation. I searched almost everywhere for my scenario but found this tutorial very, very helpful. Once again thanks, man.
This channel is just hidden Gem!
I love your cap.. Looks like a trademark for you.. Thanks for all your videos..
very underrated youtube channel for system design
I love your voice brother. It makes it exciting to listen to what you have to say about this very interesting design topic.
Hi Narendra - You are doing a good job in your knowledge transfer. I suggest you cover these topics as well - a) Job Scheduler b) Internals of Zoo Keeper c) Dist.Sys concepts like 2PC, 3PC, Paxos d) DB Internals.
Added to TODO, Thanks
Thanks for your response. Looking forward to the videos!! @@TechDummiesNarendraL
Great tutorial. The tricky part comes at 25:12 :)
For the last example with concurrency: how about optimistic locking on the counter? The number of requests has a version. If you try to update from 2 different RLs, one of them will have a version smaller than the current one and will fail. That RL can retry or drop the request.
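Roughly, that optimistic check could look like this with redis-py, where WATCH plays the role of the version (key name and limit are just placeholders):

```python
# A sketch of optimistic locking on the counter with redis-py's WATCH/MULTI.
import redis

r = redis.Redis()

def try_increment(key: str, limit: int) -> bool:
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                      # optimistic "version" check on the key
                current = int(pipe.get(key) or 0)
                if current >= limit:
                    pipe.unwatch()
                    return False                     # over the limit, reject
                pipe.multi()
                pipe.incr(key)
                pipe.execute()                       # aborts if another RL modified the key
                return True
            except redis.WatchError:
                continue                             # stale read: retry (or drop, as suggested)
```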
One additional case where sliding logs should be used: limiting the bitrate of a video/audio/internet signal. In that case you need to store the packet size along with the timestamp.
Great work Narendra..! I am currently planning to switch jobs and your videos on system design are amazing...!!
you have my respect Narendra.. great work! :)
Sliding window can be optimized by setting the size of the queue to the max requests allowed and trying to remove old entries only when the max size is reached, by comparing timestamps.
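An in-memory sketch of that bounded-queue idea (single user, purely illustrative; class and parameter names are assumptions):

```python
# Illustrative in-memory sketch of the bounded-queue optimization for sliding window logs.
import time
from collections import deque

class SlidingWindowLog:
    def __init__(self, max_requests: int, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.log = deque()                      # holds at most max_requests timestamps

    def allow(self) -> bool:
        now = time.monotonic()
        if len(self.log) >= self.max_requests:
            # Only when the queue is full do we try to evict entries older than the window.
            while self.log and now - self.log[0] >= self.window:
                self.log.popleft()
        if len(self.log) < self.max_requests:
            self.log.append(now)
            return True
        return False                            # queue still full of in-window timestamps
```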
Bro, you saved me months. Love from Pakistan
very comprehensive video. Great work. subscribed
Distributed Systems, a necessary evil.. very nicely explained Narendra !!
Narendra L !! This is just superb ... keep going.
Hi Narendra,
Relaxing the rate limit and local memory + sync service are almost similar, because in both solutions we might serve a couple of extra requests. What are your thoughts on my understanding?
Thank you Narendra. The incredible content archive that you are building is invaluable. Thank you.
This video was a clear and concise explanation of these topics! Great job! You have a new subscriber.
Why are you using two caches? Your sync issues are solved by keeping one single cache. Then, coming to race conditions, Redis automatically acquires a lock on the transaction since it is atomic, and therefore the other (second) request should get an updated value. For SPOF with one cache, we can keep master-slave nodes for Redis.
You can solve this with the help of the increment/decrement methods on Redis, which work atomically on any key, so there is no chance of data inconsistency and no need to put any lock 😊
2 services firing increments concurrently will still face the same problem, so I think it will not work without locking. Read + write has to be an atomic transaction.
@@himanshu111284 in Redis the increment and decrement methods on a key are atomic, so no need for a lock
@@SanjayKumar-di5db First time I am learning something new by going through YouTube comments, bro. Thanks for it, man.
"compare and set" kind of logic works perfectly without explicit locking in simple operation case. But in complex situation, the app server may need a few requests. e.g. read the data first, the do some processing, then write back. and then two servers can do the same thing with same data at same time, thus race condition.
Great work,
Searching for system design practice like LeetCode or HackerRank...
Hi Narendra,
In the token bucket scenario above, I would like to add one point: in order to reset the request count back to 5 after one minute, we have to store the time (start time) of the first request, so that we can check whether one minute has elapsed before resetting the count.
Yes, I agree. If you simply reset the tokens to 5 when the minute changes, it would allow more than 5 requests/minute. Store the start time and always compare it with the current request time; only if the delta is equal to or more than a minute can we reset the tokens. @Eshwar, is this what you meant?
@@nikhilneela yes Nikhil. That's right
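One common way to write this down is a continuous-refill variant: store the last-refill timestamp next to the token count and top up based on elapsed time, which covers the "when do we reset?" question. A rough single-process sketch (capacity and rate are illustrative, not the video's numbers):

```python
# Rough single-process sketch of a token bucket that refills based on elapsed time.
import time

class TokenBucket:
    def __init__(self, capacity: int = 5, refill_per_second: float = 5 / 60):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()     # timestamp we measure elapsed time from

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_refill
        # Top up proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```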
Outstanding Explanation
Great work Narendra👍👍
Perfect! I wish I can give you 1,000,000 likes!
The video was good, but I think the token bucket wasn't explained clearly. We took the example of 5 tokens per minute, but do we update the last-request time every time we receive a request? Or do we just keep the first request time so that we know whether 1 minute has elapsed since the first request, i.e. since which second requests started getting deducted from the max limit? For example, what if 4 requests were made in the later half of the minute and 4 more requests were made in the first half of the next minute? In that case we made 8 requests, exceeding the threshold limit of 5. No clear explanation there.
Awesome Narendra..
Thanks for the nice explanation. One question - What is the difference between fixed window counter and token bucket? Are they not doing the same?
Token bucket is the number of tokens in a bucket; there is a refill() happening in the bucket every nth min/sec. The number of tokens represents the number of requests that can be served. With every new request it keeps going down... but tokens also keep increasing based on the rate limit.
Fixed window counter has User+Timestamp as the key and the count as the value for a particular window, and then starts again.
The essence of the two algorithms is very different.
@@paraschawla3757 But the underlying problem of both algorithms is the same - that is what the original comment meant: they both might end up serving twice the desired RPM.
With the token bucket algorithm we have control over the cost of each operation (we can associate how many tokens an operation costs), whereas in fixed window we don't, since we increase the counter by 1 each time.
Great video. Well explained.
Narendra L! You are doing a good job! I watched a couple of your videos. Keep it up!
1. The token bucket discussed at the start will have the same problem as fixed window? Like if 10 requests come in the last second of a minute and 10 more come in the 1st second of the next window, then the token bucket will also have served 20 requests while only 10 are allowed. Am I correct?
2. In the case of the sticky-session solution, there is still a possibility of inconsistency. Consider user1 sending 3 requests in parallel while our service can accept parallel requests; they can read the same counter value and the issue persists. And if the same user issues some hundreds of requests with an allowed limit of 100, then if all of them read Redis at the same time they will all get the same data. So it is not the case that there will be only 2-3 extra requests; I think in the worst case the number of extra requests will be the max concurrent connections allowed by the server. Please correct me if I missed anything.
Great lesson! Thank you!
Hello Narendra, fixed window counter looks the same as token bucket to me - only the concept is different, but the effect will be the same, right?
Redis provides INCR and DECR commands which are atomic operations for increment and decrement of its Integer Data Type. Will this not take care of distributed access without any lock ?
I think his assumption is that Redis is separate
Yes, this will take care of it, as they are atomic.
@@victoryang7734 What does separate Redis mean? Is distributed Redis not a shared cache?
Your content is good. But please try to change your voice modulation. It really helps for long videos.
One of best explanation, thanks man :)
Thanks for the great tutorial, but I have a question: how would one rate-limiter service obtaining a lock on a record in a separate DB affect another rate-limiter service obtaining the count from a different DB within a node?
@31:00 you have confused me here. If we use locks, region 1 will have the lock on region 1's Redis only. Region 2's call can still read old data from region 2's Redis and allow more requests. Theoretically R1 should take a lock on all regions' DBs if you say locking is one way to solve consistency?
Great content. Thanks for sharing.
Just one question: should there be only 1 LB, which will send the request to either A1 or A2?
Great video, congrats!!
00:04 Rate limiting is essential for managing API usage and protecting against misuse and attacks.
04:46 Rate-limiting algorithm for token management
09:30 The algorithm for managing tokens and requests can be memory efficient but may cause race conditions in a distributed environment.
13:57 Using the sliding logs algorithm to calculate the rate in real time
18:28 Implement sliding-window counter for efficient memory usage
22:47 The solution optimizes memory usage by using counters instead of storing every request entry
27:11 Inconsistency in rate limiting leads to exceeding request limits
31:28 Syncing data between distributed systems can result in latency and race conditions.
Can you please let us know the books which you have read to prepare for the video?
Token Bucket and Fixed Window counter, what's the difference?
Yes, this explanation of the token bucket doesn't seem correct, as in a token bucket tokens are added at a particular rate within a window; also there are chances of going over the rate limit in certain scenarios.
With the token bucket algorithm we have control over the cost of each operation (we can associate how many tokens an operation costs), whereas in fixed window we don't, since we increase the counter by 1 each time.
@@uditagrawal6603 Why can't we have a compare-and-set operation on the counter, or just a restriction that it can't go over a certain amount, and have requests try to increment the number by 1 and be rejected if they can't?
Great work! Would you be able to system design Elevators? Parking Lot?
Well explained Narendra
What's the difference between token bucket and fixed window? they seem so similar
The keys and values stored are different for the two. In the case of the fixed counter the key is defined by UserId+minute, whereas for the token bucket the key is the UserId. For the value, the fixed counter is just the number of requests; for the token bucket you track the time and the number of requests, so the checking algorithm has more to do.
The burst problem at the boundary seems to exist in token bucket as well, right?
@@preety202 yes
Seems they are about the same functionally, maybe a bit different implementation-wise?
Token bucket is the number of tokens in a bucket; there is a refill() happening in the bucket every nth min/sec. The number of tokens represents the number of requests that can be served. With every new request it keeps going down... but tokens also keep increasing based on the rate limit.
Fixed window counter has User+Timestamp as the key and the count as the value for a particular window, and then starts again.
20:36 Sliding Window counter
The rate limit is 10 R/M.
In the explanation he considered 10 R/S, so please don't get confused and think he is wrong.
Sir, for Amazon, is there any particular series of questions you want to suggest?
Excellent..hats off
Great video.. Thanks for the knowledge.
Why do you say there are race-condition issues for the token bucket but not for the later counter methods? What if two requests come in at the same time and both try to increment the requests served?
why not use cache expiry to set rate limit?
If the rate limit is set at 10 rpm,
For a user, maintain a key in redis, set the cache expiry to 1 minute.
Fetch the user key from redis for every API request,
If the key is present, check if the count has exceeded. If yes, block the current request. If the count is under the rate limit, update the count for user.
The cache will expire after a minute.
Is there any problem with this approach?
At 10:37 in the video, you mentioned that a race condition may occur because of multiple requests coming from different servers or the same server.
As you said, we are using Redis for this solution. Redis commands are atomic in themselves, and while executing atomic commands there is no scope for data races. Did I get something wrong here?
same question here!
Two requests from the same user come at the same time. Both get the same data one after the other. Both increment the count one after the other. The count ends up incremented only once.
@@musheerahmed5815 Use optimistic locking by adding a version column to avoid the lost update.
Because here two operations are required: 1) get the current counter value, 2) if it's less than the threshold, increment the counter. For example, the current counter value is 9 and the threshold is 10; if two requests come at the same time, both see the current value as 9, so both are allowed, but in reality one of them must fail. You either have to take a lock on Redis, or make the operation atomic using WATCH/MULTI, or write a Lua script for your use case.
Using a Redis lock or Lua scripts adds latency to the user request.
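For reference, the Lua route with redis-py could look roughly like this (key naming, limit, and TTL are assumptions for illustration):

```python
# Sketch of "read + check + increment" done atomically inside a Lua script (redis-py).
import redis

r = redis.Redis()

CHECK_AND_INCR = r.register_script("""
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
if current >= tonumber(ARGV[1]) then
  return 0                                   -- over the threshold, reject
end
redis.call('INCR', KEYS[1])
redis.call('EXPIRE', KEYS[1], ARGV[2])       -- keep the counter from living forever
return 1
""")

def allow_request(user_id: str, limit: int = 10, ttl_seconds: int = 60) -> bool:
    return CHECK_AND_INCR(keys=[f"rl:{user_id}"], args=[limit, ttl_seconds]) == 1
```

The whole script runs as one atomic unit on the Redis server, so two concurrent requests can never both see the counter at 9 and both get allowed.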
In sliding window logs, how are we able to serve 11 requests in the last minute if we're checking the rate in real time? Ideally it shouldn't allow more than 10.
Excellent videos, just lacking good sound system.
So ideally, token bucket can serve more requests in a particular time window. Like if 5 requests were made at 11:55:00 and 5 more requests are made in the very next minute at 11:56:00, then a total of 10 requests can be made within a minute (or the size of the bucket)? Right?
Yes. If it's implemented as explained you are right.
So if no two requests arrive at the same time, then the sliding window counter will have the same issues as the sliding window log algorithm in terms of memory.
04:15 Rate Limting Algorithms
25:11 Race Conditions in distributed systems
Nice explanation. Could you please make a video for Google ad sense analytics collection system ?
The inconsistency problem is basically a common DB problem called "lost update" due to two threads reading committed data concurrently and performing writes without any locks.
Solution is to introduce locking to enforce ordering.
Or enforce ordering by sticky session at a much higher level
You have a new subscriber. Thanks for making this video.
Why does he look like Varun Singla sir from Gate Smashers? Btw, nice lecture
In the case of the token bucket algorithm, isn't Redis thread-safe, or can't we enforce synchronization using locks, if requests from multiple application servers are meant to be served concurrently?
Narendra, where do you get such a great understanding of systems from?
For the Local Memory solution that you provided, how is it different from the solution that you explained just before (where the rate limiter is connected directly to the Redis)?
What a guy!! bless you bro
Is it true that token bucket works effectively the same as sliding logs?
Don't the token bucket and fixed window have the same boundary-request problem...? Since even in token bucket you can request more tokens at the end of the first window and more tokens at the start of the second window?
Hi Narendra, great work, I loved your content, but I have one question: why not keep only one Redis DB instance instead of two? In that case we don't have to sync them. Or is there any significance to having different instances of Redis (per LB, RL, app instance)?
@mritunjay yadav - in a distributed system you cannot have a single point of failure
That's because the entire point of having multiple regions is to have fault tolerance. For a single region, we can have a primary-secondary model with asynchronous replication between them but for a multi-region setup, each component should have a replica. One approach to solve this is to use distributed locks that Redis provides.
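To make the single-region locking part concrete, redis-py ships a built-in Lock helper that can guard the read-check-increment. A minimal sketch, assuming one Redis instance and placeholder key names; note this alone does not coordinate across regions (that would need a distributed lock such as Redlock over several instances):

```python
# Minimal sketch of a lock-guarded read-check-increment against one Redis instance,
# using redis-py's built-in Lock helper. This does NOT coordinate across regions.
import redis

r = redis.Redis()

def allow_request(user_id: str, limit: int = 10) -> bool:
    key = f"rl:{user_id}"
    with r.lock(f"lock:{key}", timeout=1):   # auto-released after 1s even if we crash
        current = int(r.get(key) or 0)
        if current >= limit:
            return False                     # quota exhausted
        r.incr(key)
        return True
```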
Isn't token bucket the same as the fixed window algorithm? It's just (limit - no. of reqs served) vs. (no. of reqs served).
The threshold is calculated per second, for example AWS API Gateway's 5000 req/sec. We can just declare an array queue or stack, start pushing elements into it, and keep flushing it every second... +/- 10/20 requests would not matter. If the stack/queue fills up it would throw an error, and that error could be propagated to the user!!
Doesn't the fixed window counter also run into concurrency issues like the first scenario?
Can we have a sync service + memory between the RL and Redis/Cassandra?
So all RLs will go via the sync service to get a quick response.
The sync service is responsible for writing to Redis/Cassandra.
If the sync service is not available, the RL will make a direct call to Redis/Cassandra.
Not sure how optimal this change is.
What happens if the rate limiter stops working? Does it become a single point of failure? How do we make sure it won't affect system performance?
There is one con to all your videos. If you skip 10 sec of this video, you are doomed :-P Exceptional work, Narendra.
I love your videos. Thank you for making such detailed videos which explain the concepts so clearly. :)
In the token bucket algorithm, what happens if requests come at the 55th second of the previous minute and the 5th second of the current minute?
If we are using an API gateway for all user requests, and Redis updates are atomic, can a race condition still happen?
For the distributed rate limit situation, even though one user sends two requests to the same server at the same time, it doesn't mean the two processing threads will handle them serially, so the inconsistency problem still seems to exist. I think to address this we can make the read-and-update operation atomic with Redis + Lua.
Redis does this with a lock, which is already suggested in the video.
@@prajwal9610 Yeah, but in the case of local memory, suppose a single user's two requests go to 2 regions, and each region's local cache first reads from the DB and then updates the cache and DB. Then there is still inconsistency, as both requests operate in parallel.
Great video, keep up the good work :)
In the case of using locks in a distributed system, if RL1 gets a lock on Redis/Cassandra DB instance 1, how will it prevent RL2 from getting a lock on Redis instance 2? Since the database is distributed too, each will have its own lock. Wouldn't RL2 be able to access Redis instance 2 even while RL1 holds the lock on Redis instance 1?
How is token bucket different from the fixed window algo? According to your explanation they seem the same.
This is really nice. Do you have anything on how a coupon system works, for example DoorDash, UberEats, or rideshare coupons?
Is Token Bucket the same as Fixed Window Counter?
You have already served 8 instead of 5 at 28:34. Your intention is right, but it should be Cache 1 = U1:3 and Cache 2 = U1:2, instead of U1:4 in both.
Hi Naren, what are the essential differences between sliding window log and leaky bucket? They look pretty much the same in functionality.
I found this video very useful. One thing that can be improved is the way it is presented. At times the material seems unorganized. For example, there are flashes on the screen because the speaker forgot to mention it verbally. Adding a few notes before making the video may help the presenter have a good flow.
Question about using locks. In example where two requests come in, RL1 gets the lock first, and then "locks" the key. How does it lock the key though such that it somehow impacts RL2? I'd imagine it only has access to the db in its own region (the one on the right on your drawing), so how does it lock the key such that when RL2 access the db in its own region (the one on left), that it knows to wait for the lock to be released? Thanks for the video as always.
Can't we use a sorted Redis set to avoid the concurrency issue?
This is not efficient or optimized, because it has to do linear O(N) processing for each request. The way it actually solves rate limiting is:
1. Create a container/list of max size N, when you have to serve N requests/min, let's say.
2. When a request comes:
2.1. If the container size is less than N, then add the timestamp.
2.2. If not, then do a binary search on the list with (TS - 1 min); this will return the index of the timestamp served at the beginning of the last minute. Get the index diff from that position, and that is the number of requests you have already served.
2.2.1. If that is more than or equal to N -> wait in the message queue with a signal or wait time.
2.2.2. If not, then add the TS entry to the list.
3. Keep a sanity check on each list size: it should always contain the timestamps of the last N requests. Keep deleting the old ones.
This way the response time reduces to O(log N) and the latency issue is also resolved.
Good one bro !
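A single-user, in-memory sketch of the bisect-based approach described a couple of comments above (class and parameter names are illustrative assumptions):

```python
# In-memory, single-user sketch of the binary-search variant of sliding window logs.
# It keeps at most the last N timestamps and uses bisect to count requests in the window.
import time
from bisect import bisect_left

class SlidingLogLimiter:
    def __init__(self, max_requests: int, window_seconds: float = 60.0):
        self.n = max_requests
        self.window = window_seconds
        self.timestamps = []                       # stays sorted: we only append newer times

    def allow(self) -> bool:
        now = time.monotonic()
        if len(self.timestamps) < self.n:
            self.timestamps.append(now)
            return True
        # Binary search for the first timestamp inside the last window.
        first_in_window = bisect_left(self.timestamps, now - self.window)
        served_in_window = len(self.timestamps) - first_in_window
        if served_in_window >= self.n:
            return False                           # caller can queue/retry instead of dropping
        self.timestamps = self.timestamps[first_in_window:]   # drop entries older than the window
        self.timestamps.append(now)
        return True
```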