Complete System Design Playlist for Interview preparation:
Low Level Design from Basics to Advanced:
ruclips.net/p/PL6W8uoQQ2c61X_9e6Net0WdYZidm7zooW
High Level Design from Basics to Advanced:
ruclips.net/p/PL6W8uoQQ2c63W58rpNFDwdrBnq5G3EfT7
You can also connect 1:1 with me on topmate:
lnkd.in/duwBKSy7
Sir, could you please provide a template for how to start with system design questions and justify the answers? By God's grace I practiced LeetCode questions consistently, topic-wise, and got a good grasp of DSA concepts. Now I am struggling to answer system design questions. I have given interviews at some big companies outside of India, and they drilled me only on system design concepts like a URL shortener: what's your plan of action for the design, why did you choose this DB, how do you design the service, why can't we use a hashmap key-value pair? I am struggling a lot with how to start and which area to start with. The DB? Providing a valid reason for picking SQL vs NoSQL? I know the reasons and the difference between them, but in the moment I struggle a lot. Please sir, it will help me move ahead in my career. There are a few courses like ByteByteGo, but they are not affordable in my case.
@@ChristForAll-143 Hi buddy, I can totally understand. The only thing that can help you is being aware of the concepts in HLD. I am not sure if you have followed my HLD playlist.
All the questions you have asked, I have properly covered step by step. Watching or reading answers to a particular question is okay, but you cannot go one level deep if you don't know the concepts well. That's why I am covering things from the start, but from a totally interview perspective.
@@ConceptandCoding Oh ok sir, I will revisit once again how you approach it step by step and improve the design.
Thanks a lot, sir.
No proper introduction covering the "what" and "why" of a rate limiter. All the algorithm explanations seem incomplete and rushed; no implementation has been discussed.
Much needed criticism
He explained the sliding window counter, said it has a certain disadvantage, then explained the sliding window log and said it eliminates that disadvantage. He didn't explain how it actually eliminates it. Your point is extremely valid.
You did not explain sliding window properly. Please articulate clearly when you expect people to pay to watch your content as a member.
The algorithms are fine, but where is the design?
In an HLD interview, an interviewer won't simply ask about the details of the algorithms we have for a rate limiter.
You need to say where you are going to place the rate limiter, how you are going to handle the request load, which DB you are going to use to store the request count, and how you will avoid the rate limiter becoming an overhead.
You will have to answer all these questions, not just describe various algorithms.
Hi Deepak, I'm struggling to find a proper resource to learn HLD and LLD. Can you suggest some resources for learning them?
I faced this question at Groww.
This is Atlassian's most famous interview question.
Thanks for this info Buddy 👍
I was also expecting the same, but to my surprise, as a fresher, they asked me to design a Google Sheet.
@@ventorz5066 was it HLD or LLD?
@@RishitaBose LLD
In the sliding log, we don't store the timestamp of a request that wasn't served.
Hi, I have been watching your videos for a good while now. I really find them very nice, but maybe in this video you could have explained the rate limiting algos a bit better, especially the sliding window counter. It might be just me, but I had to google a bit to understand it. Also, since this question is also discussed in LLD rounds, one video on that would help a lot.
Thank you for the feedback. I will try to set up a live session and cover the algos in detail again. Sorry for the part that was not clear.
Blinkit
Appreciation for Shrayansh. Bro explains like my hostel mate used to. The concepts he explained will always be remembered.
It seems the token bucket also has a disadvantage similar to the fixed window counter, i.e. more requests than the threshold can be served within a time period. Suppose the bucket size is 3/min and the refill is 2/min.
At 10:00:00 --> bucket size is 3
At 10:00:55 --> 3 requests served, so bucket size is 0 now
At 10:01:01 --> bucket size will be 2 (from refill), and 2 requests are served here, so the bucket is empty now
So within 6 seconds, 5 requests were served, but the threshold is 3/min.
Agreed, please help here @ConceptandCoding
Rate limiter system design in Hindi : ruclips.net/video/khhe7avsw1g/видео.html
Easy to understand...
If the number of hosts is limited (~100-200), the gossip protocol is an alternative for replicating the counters.
My request is to create this video again. No proper explanation, no proper design. Very disappointing.
Microsoft asked me this question in the 3rd round of the interview for SDE. I was able to explain the sliding window approach to the interviewer.
This video is not as good as the others in this series. There are a lot of other videos that explain it better.
Seems very rushed. There is no clear explanation of anything.
Use case: Allow 10 requests/minute/user
Token bucket:
1. User makes the first request. Make an entry in Redis with user_id and the current timestamp (uid: { tokens: 10, timestamp: 10:01:10 }).
2. User makes another request at 10:01:25. Update the Redis entry (uid: { tokens: 9, timestamp: 10:01:10 }). Do not update the timestamp.
3. Repeat the above process, decrementing until tokens reaches 0 within the 1-minute window (till 10:02:10). If the token count reaches 0 and any other requests arrive from 10:01:10 to 10:02:09, drop those requests.
4. If any requests arrive from 10:02:10 onwards, make a fresh entry in Redis (uid: { tokens: 10, timestamp: 10:02:10 }).
This can also be implemented from the server's perspective, with a CRON job that keeps refilling 10 tokens every minute. But that is not a good approach, as all tokens can be consumed by one user, leaving none for other users in that timeframe.
Cons: Cannot prevent bursts of requests. What if all 10 requests are made at 10:01:25? There is no uniform distribution of requests.
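The token bucket steps above can be sketched in a few lines. This is a minimal in-memory sketch, not the video's implementation: a plain dict stands in for the Redis entry, and the class and method names are my own.

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch: a fixed number of tokens per window,
    with a fresh entry created once the previous window has expired."""

    def __init__(self, capacity=10, window_seconds=60):
        self.capacity = capacity
        self.window = window_seconds
        self.buckets = {}  # user_id -> {"tokens": int, "timestamp": float}

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        entry = self.buckets.get(user_id)
        # First request, or the previous window expired: start a fresh entry.
        if entry is None or now - entry["timestamp"] >= self.window:
            self.buckets[user_id] = {"tokens": self.capacity - 1, "timestamp": now}
            return True
        # Within the window: consume a token if one is left, else drop.
        if entry["tokens"] > 0:
            entry["tokens"] -= 1
            return True
        return False

limiter = TokenBucket(capacity=3, window_seconds=60)
print([limiter.allow("uid", now=t) for t in (0, 10, 20, 30)])  # [True, True, True, False]
```

Note the con described above is visible here too: nothing stops all three tokens from being spent in a burst at the start of the window.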
Leaky bucket:
1. Maintain a queue of size 10. Any request that comes to the server is added to the queue. When the request is done, remove it from the queue.
2. There can be up to 10 requests at any given timestamp. Any more requests coming in at that timestamp will be dropped.
This is implemented from the server's perspective and can be used in combination with other algorithms.
Cons: If the server takes a long time to process requests, they will sit in the queue for a long time, leading to requests being dropped unnecessarily.
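The leaky bucket described above is essentially a bounded queue drained at a constant rate. A minimal sketch (names are my own; the "leak" is modeled as an explicit `finish_one` call by a worker):

```python
from collections import deque

class LeakyBucket:
    """Minimal leaky-bucket sketch: a bounded queue of in-flight requests.
    Requests arriving when the queue is full are dropped."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.queue = deque()

    def try_enqueue(self, request_id):
        if len(self.queue) >= self.capacity:
            return False  # bucket full: drop the request
        self.queue.append(request_id)
        return True

    def finish_one(self):
        # Called at a constant "leak" rate by the worker processing requests.
        return self.queue.popleft() if self.queue else None

bucket = LeakyBucket(capacity=2)
print([bucket.try_enqueue(i) for i in range(3)])  # [True, True, False]
bucket.finish_one()   # one request finishes, a slot frees up
print(bucket.try_enqueue(3))  # True
```

The con above shows up directly: if `finish_one` is slow, requests sit in the queue and new arrivals get dropped.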
Fixed window counter:
1. User makes the first request at 10:01:10. An entry is made in Redis for each minute (uid_10:01: { tokens: 10 }).
2. User can make 10 requests every minute (10:00 - 10:01, 10:01 - 10:02, and so on).
Cons: No uniform rate of requests; a burst at the end of one window plus another at the start of the next can exceed the limit across the boundary.
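A minimal sketch of the fixed window counter (my own names; a dict keyed by user and minute index stands in for the per-minute Redis keys described above):

```python
class FixedWindowCounter:
    """Minimal fixed-window sketch: one counter per (user, window) key,
    like the per-minute `uid_10:01` Redis keys."""

    def __init__(self, limit=10, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (user_id, window_index) -> request count

    def allow(self, user_id, now):
        key = (user_id, int(now // self.window))  # which window this request falls in
        count = self.counters.get(key, 0)
        if count >= self.limit:
            return False
        self.counters[key] = count + 1
        return True

limiter = FixedWindowCounter(limit=2, window_seconds=60)
# Two requests at the end of one window plus one at the start of the next:
print([limiter.allow("uid", t) for t in (58, 59, 59.5, 61)])  # [True, True, False, True]
```

The last `True` illustrates the edge-burst con: the request at 61s lands in a fresh window even though 2 requests were just served at 58-59s.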
Sliding logs:
Literally logs every request coming from every user.
1. User makes 5 requests:
a. 10:01:00
b. 10:01:01
c. 10:01:10
d. 10:01:19
e. 10:01:39
2. For every request at any given timestamp, the rate limiter counts all the requests made by the user in the last minute. If the number exceeds 10, the request is dropped; otherwise it is processed.
Cons: Memory inefficient, as it needs to log the information for every request to solve the problem.
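The sliding log steps above can be sketched as follows (my own names; a deque of timestamps per user stands in for the log store). Note it keeps the timestamp even for a rejected request, matching the behavior debated in the comments below:

```python
from collections import deque

class SlidingWindowLog:
    """Minimal sliding-log sketch: keep the timestamp of every request,
    evict timestamps older than the window, and count what remains."""

    def __init__(self, limit=10, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.logs = {}  # user_id -> deque of request timestamps

    def allow(self, user_id, now):
        log = self.logs.setdefault(user_id, deque())
        while log and now - log[0] >= self.window:
            log.popleft()  # evict timestamps outside the last minute
        log.append(now)    # the log stores even rejected requests
        return len(log) <= self.limit

limiter = SlidingWindowLog(limit=2, window_seconds=60)
print([limiter.allow("uid", t) for t in (0, 1, 30, 61)])  # [True, True, False, True]
```

The memory con is visible in `log.append(now)` running unconditionally: every request, accepted or not, costs an entry until it ages out of the window.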
Sliding window counter:
A modification of the above algorithm.
Instead of logging every request, maintain a counter of requests for every 10 seconds (configurable).
1. Maintain one counter for any request in the window (10:01:00 to 10:01:10).
2. Maintain another counter for any request in the window (10:01:10 to 10:01:20).
.. and so on
Basically, an array of counters, whose size is determined by the window length divided by the counter granularity. In our case it's 6, since we keep one counter per 10 seconds and a minute is 6 * 10 seconds.
This is the best-known algorithm for a rate limiter.
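A minimal sketch of the sliding window counter described above (my own names; one counter per 10-second bucket, summed over the buckets that cover the last minute):

```python
class SlidingWindowCounter:
    """Minimal sliding-counter sketch: one counter per coarse bucket;
    a request is allowed if the sum over the last window's buckets is under the limit."""

    def __init__(self, limit=10, window_seconds=60, bucket_seconds=10):
        self.limit = limit
        self.window = window_seconds
        self.bucket = bucket_seconds
        self.counters = {}  # (user_id, bucket_index) -> count

    def allow(self, user_id, now):
        current = int(now // self.bucket)
        buckets_per_window = self.window // self.bucket  # 6 in this example
        # Sum the counters covering roughly the last minute.
        total = sum(self.counters.get((user_id, b), 0)
                    for b in range(current - buckets_per_window + 1, current + 1))
        if total >= self.limit:
            return False
        self.counters[(user_id, current)] = self.counters.get((user_id, current), 0) + 1
        return True

limiter = SlidingWindowCounter(limit=2, window_seconds=60, bucket_seconds=10)
print([limiter.allow("uid", t) for t in (0, 5, 30, 65)])  # [True, True, False, True]
```

Compared with the sliding log, it stores at most 6 small counters per user instead of one timestamp per request, at the cost of the window edge being approximate to the bucket granularity.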
Thanks for writing this
While Interviewing at Jupiter in Jaipur Hackathon.
Hi, so the issue with the fixed window was that all the requests could come in the last second plus the first second of the next time window, which would mean we are handling more requests than are allowed. How is this solved in the sliding window log? I still see that this issue is very much possible. I also don't see any use in storing the request log when it is not going to be used.
The distinction lies in one having a fixed starting time, while the other begins from the next log pointer. In the fixed window, regardless of incoming requests, the fixed starting-point time is checked. Conversely, in the sliding window, the time of the first element is checked; requests are blocked for the period, and then the window moves on to the next log time until the period concludes.
I also had this exact question brother
Hi Shrayansh, I am an upcoming intern at Microsoft, in the 3rd year of my college. I love watching your videos! They give a feel of the real-world engineering that happens out there. I want to say thank you, and keep up the good work :) Had it not been for this channel, I wouldn't have discovered system design this early. Thank you!
Thank you, and I am sure you will definitely get the benefits, not only in interviews but also in your day-to-day office work.
@@ConceptandCoding yes , sir bring more videos of HLD system design....keep up the good work
6/60 * 10 is for 1 sec and not for 10 sec; for 10 sec it is 6/60, brother!
Your videos are generally pretty good, but in this one I found the sliding window algorithms explained as if you were in a hurry. Maybe we can get a separate in-depth video on that. Thank you for this playlist.
Sure buddy and sorry for that
Hi, thank you for these videos. But I would like to point out that the sliding window log and sliding window counter sections have been cut midway. Have you uploaded a separate video on them? Thank you.
Yeah, could have been in more detail
Details of algorithm are missing
In Signzy, I have faced this question.
Sir, please do it in Hindi instead. Or else refine the English a bit, especially the grammar.
Why do we need a config file? Can't we store those rules in the Redis cache?
In Lowes I faced this question recently
This was asked to me in Rubrik interview
What did you answer? Did you answer what he covered here? I believe this video looks incomplete, focusing more on the algorithms than the design.
In the sliding window, once the window moves past that range, the timestamps can be deleted. So where is the space issue here, if they are only temporary timestamps?
Hi Rahul, there are multiple disadvantages:
1. In the sliding window log algorithm, we store the timestamp even if the request is rejected. So think of it from the perspective that for each request you have to keep that much space; and what if, in a DDoS attack, millions of requests come in? Even though we are failing the requests, we are storing their timestamps.
2. For every request, we have to compute the range and delete the old timestamps if they are out of range. That is also expensive in the case of a cluster of servers.
finelabs
Razorpay asked me this, but with a little variation.
Thank you
This was asked to me in Oracle
Are you referring to a race condition when you mentioned Redis doesn't support atomicity?
This diagram is very simple. Does it work in a distributed environment? I think there will be a lot of problems if we look at this design from a distributed POV.
Can you please elaborate on the atomicity part of Redis which you mentioned at the end of the video?
Redis supports atomicity with Lua scripts. So Redis nowadays supports atomicity, with a little latency as the tradeoff.
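A common shape for that Lua-script approach (a sketch, not the video's implementation: the key names, arguments, and helper below are my own) is to do the read-check-decrement in one script, which Redis executes atomically. Since running the script needs a live Redis server, the logic is also mirrored here in a pure-Python stand-in:

```python
# In Redis the check-and-decrement must be one atomic step; this is typically
# done with EVAL and a Lua script (illustrative; key/arg names are my own).
LUA_TAKE_TOKEN = """
local tokens = tonumber(redis.call('GET', KEYS[1]) or ARGV[1])
if tokens > 0 then
    redis.call('SET', KEYS[1], tokens - 1, 'EX', ARGV[2])
    return 1
end
return 0
"""

# With redis-py the call would look roughly like:
#   allowed = r.eval(LUA_TAKE_TOKEN, 1, f"rl:{user_id}", capacity, ttl_seconds)

def take_token(store, key, capacity):
    """Pure-Python stand-in for the script's logic, to show the semantics:
    read the remaining tokens (defaulting to capacity), decrement if positive."""
    tokens = store.get(key, capacity)
    if tokens > 0:
        store[key] = tokens - 1
        return 1
    return 0

store = {}
print([take_token(store, "rl:uid", 2) for _ in range(3)])  # [1, 1, 0]
```

Because the whole script runs as one unit on the Redis server, two concurrent requests cannot both read `tokens = 1` and both decrement, which is exactly the race the plain GET-then-SET version allows.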
asked in Adobe
Sir, can you please upload notes?
Sure will do
What are the resources you are following to understand these concepts?
System Design Volume 1 by Alex Xu
👍👍
Hey Shrayansh, can you make one more video about the sliding window counter algorithm? I could not understand it from this video; it feels like you were in a hurry.
BTW, good content.
noted. will plan for it
I have faced this question in Design round interview of Groww
Thanks for the information
Why are we storing the log in the sliding window?
Informative video, thank you! Can you do me a little favor, please: give me the name of the software you are using to present the flow and drawing.
OneNote and a Wacom tablet
Why Redis? Isn't Redis a SPOF? If Redis goes down, all the counter values will be lost and it will start again from scratch.
Redis has multiple servers running, which sync up regularly with each other
In service now for manager role
I am almost certain the interviewer will ask how you would maintain atomicity with Redis. If we cannot answer that, the solution is incomplete, because it is the crux of this HLD problem.
Redis supports transactions. So atomicity shouldn't be a problem, right?
Lua script is how Redis supports atomicity.
DDIA (Designing Data-Intensive Applications) is the book I refer to very thoroughly.
what is the use of the other config file?
Moengage
Great content! But what you write on the board is not readable most of the time.
Yes, that's why I do not share these notes afterwards; they become very rough while explaining.
Where can I find the notes for this playlist? Does anyone know?
I will add it in the video description section buddy
Didn't understand the Redis atomicity part. Aren't reads and writes in Redis atomic in nature?
Redis supports atomicity with Lua scripts. So Redis nowadays supports atomicity, with a little latency as the tradeoff.
@@ConceptandCoding thanks for clarification
How will the leaky bucket benefit genuine users if their requests are rejected due to the bucket size?
That's the issue with the leaky bucket. The bucket here is nothing but a queue. If the queue is full, the first problem is that the request is rejected.
Second, because of the constant rate of processing requests, there are chances that a request waits in the queue for a long time.
@@ConceptandCoding got you
Sir, as I asked previously, please make a complete and in-depth roadmap for LLD and HLD with the topics yet to be covered.
Sure Lucifer, I already have the roadmap; it just needs a bit of an update. Will share it.
@@ConceptandCoding tq so much bro.
Hi Shreyansh, isn't the token bucket the same as the fixed window counter algorithm here? In both cases we initialize a counter at the start of the window to a fixed value (max requests allowed per window) and reduce it as requests come in; when it becomes zero, we reject the request.
In the token bucket we refill the counter every minute, but in the fixed window counter it is always reset to a fixed value.
Asked to me in LOCO.
The actual question: how will you implement a rate limiter using Redis?
I have a doubt here. In Sliding Log Algorithm, why do we have to store the log of the requests that are already rejected?
Considering a rate limiter which processes 2 req/min
Req A:00:00:12(accepted)
Req B:00:00:20(accepted)
Req C:00:00:50 (rejected but entry is present in log)
Req D:00:01:40(accepted)
Req E: 00:01:45 (the time range would be 00:00:45 to 00:01:45, and within this range we find 3 requests in the log, including 00:01:45), so we end up rejecting Req E.
However, Req C was not even processed and accepting Req E would be within the desired rate limit only (Req D and Req E within this range). So, why don't we just delete Req C as soon as it is rejected?
Ack of your query.
Will reply to you by tomorrow EOD.
Hi Anisha, I am also in line with you that logging the rejected request's timestamp doesn't make sense.
But that's the actual implementation of the sliding log algo.
And the stated disadvantage also reflects this: it unnecessarily logs extra entries, which causes memory issues.
But I am still not clear why they originally decided to log rejected requests.
Thinking out loud, this algo might perform better in the case of a DDoS attack.
For example: a system has a configuration like 2000 requests/min, and let's say a DDoS attack happens. After 2k requests, not a single further request will get processed, whereas with the other algos, after 1 minute another 2k requests will hit.
This is my guess, but I don't see any official answer.
What do you think?
@@ConceptandCoding Thank you for the reply. Yeah, a DDoS attack can be prevented for sure using this technique. However, the problem, apart from storing the logs, is that we might miss processing some requests even though they are within our rate limit. I think removing the rejected requests from the log would be more efficient, as we could process more requests. The log store grows because we are storing unnecessary entries. If we don't store them, only the accepted requests will be present, and the ones outside the time range are removed anyway. So at any instant, only the actual number of requests that can be processed is present in the store. But some storage overhead is always there.
@@anishakollipara5193 totally agree 💯
Hey Shrayansh,
One quick follow-up question: the issue you mentioned with the fixed window counter, where lots of requests come at the window edges, will the same issue not be there for the token bucket too?
Ack
In the token bucket, all extra requests are denied with status code 429 (a client-side error), but in the fixed window, the requests at the edges are each within the limit of their individual fixed window (the edge case), so the burst gets through. That is the disadvantage of this algorithm: the overloaded server may then end up returning a server error (5xx).
Yes the same problem will be there
Cricinfo hld and lld please.
noted
Thank you so much sir thank you for teach something that is realistic and for showing useful implementation ❤❤❤❤❤
Thank you, pls do share this video with your connections buddy 🙏
A deep thank you, you saved me a lot of time.
thanks
How can a single user have different tokens?
Consider a token as a counter value.
Ex: Mayank has counter = 10.
So you have 10 tokens, and with every request I can reduce it by 1.
@@ConceptandCoding Sorry, I still didn't get it, as I thought each token would be a large string used by one user per token lifetime. How can we reduce it by making it a number?
@@mayankmaheshwari2544 Can you please elaborate what you mean by token?
As per my understanding, a token is a counter, which can be increased or decreased.
@@ConceptandCoding I thought this was about user auth tokens.
No no these are not auth tokens. Auth tokens are totally different.
Thank you Shryanash for these videos.
Thanks Harsh, please share it with your connections, it would be a great help 🙏
Excellent content
Thank you buddy
Hi ,
A quick question: don't you think the token bucket and fixed-size window are exactly the same?
Hi. Do you have any idea if rate limiter design is asked to college freshers or for 5+ years experienced developers also?
Hi, it's an HLD question; it is very unlikely to be asked to freshers, but once you are eligible for the HLD round, this question is very likely to come up :)
@@ConceptandCoding thanks for the info 👍😊