Rate Limiting system design | TOKEN BUCKET, Leaky Bucket, Sliding Logs

  • Published: 25 Sep 2018
  • Rate limiting protects your APIs from overuse by limiting how often each user can call the API.
    In this video the following algorithms are discussed:
    Token Bucket
    Leaky Bucket
    Sliding Logs
    Sliding window counters
    Race Conditions in distributed systems
    Donate/Patreon: / techdummies

Comments • 268

  • @rabbanishahid 3 years ago +2

    Best explanation; I searched almost everywhere for my scenario but found this tutorial very, very helpful. Once again, thanks man.

  • @vcfirefox 2 years ago +1

    I was reading Alex Xu and did not get a good idea about the sliding window and sliding window counter. Now, after watching your explanation, it is crystal clear, with pros and cons. Thank you for doing this!!

  • @logeshkumar8333 4 years ago +10

    This channel is just hidden Gem!

  • @khalidelgazzar 1 year ago +7

    04:16 Token bucket
    10:40 Leaky bucket
    12:50 Fixed window counter
    16:15 Sliding logs
    20:36 Sliding Window counter
    25:21 Distributed system setup (Sticky sessions | locks)

  • @nikhilchopra9247 5 years ago +3

    Good Stuff Naren! Even famous profs are not able to explain this kind of stuff so clearly.

  • @terigopula 5 years ago +24

    you have my respect Narendra.. great work! :)

  • @prabudasv 3 years ago +3

    Narendra, your videos are great resources for learning system design. Your explanation of concepts is crystal clear. A big thumbs up for you.

  • @r3jk8 4 years ago +1

    This video was a clear and concise explanation of these topics! Great job! You have a new subscriber.

  • @CrusaderGeneral 2 years ago +20

    My implementation takes advantage of Redis expiration. When a call comes in, I create a record and then increment the value. Subsequent calls increment the value until the quota is reached. If the quota is not reached by the time the record expires, the next request causes the creation of a new record and restarts the counter. This way I don't need to check and compare dates at any point, and the code is very simple. Albeit I am not maintaining a perpetual quota, I am only preventing abuse, which is really the main gist of request throttling.

    • @varshard0 2 years ago +2

      This is the way I implemented it for my org as well. Simple and served its purpose well.

    • @shelendrasharma9680 2 years ago +1

      How would you manage the concurrency here in Redis?

    • @bazzalseed 2 years ago

      @@shelendrasharma9680 Redis is single-threaded.

    • @dhruveshkhandelwal8104 2 years ago +3

      This is indirectly a fixed window counter.

    • @sid007ashish 3 months ago

      This is fixed window counter only.
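The expiring-record pattern described in this thread can be sketched in plain Python. `ExpiringCounter` below is a hypothetical in-memory stand-in for a Redis key with a TTL; real code would use Redis `INCR` plus `EXPIRE`, but the window-restart behavior is the same:

```python
import time

class ExpiringCounter:
    """Hypothetical in-memory stand-in for a Redis key with a TTL."""
    def __init__(self):
        self.count = 0
        self.expires_at = 0.0

    def incr(self, ttl):
        now = time.time()
        if now >= self.expires_at:   # record expired: restart the window
            self.count = 0
            self.expires_at = now + ttl
        self.count += 1
        return self.count

def allowed(counter, quota, window_seconds):
    """Fixed-window check: allow until the quota is used up, then reject
    until the record expires and the counter restarts."""
    return counter.incr(window_seconds) <= quota

counter = ExpiringCounter()
results = [allowed(counter, quota=3, window_seconds=60) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

As the replies note, this is exactly a fixed window counter: no dates are compared in application code, but the window boundaries are fixed by when the record was created.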

  • @abasikhan100 1 year ago +2

    Great explanation.
    The pattern you followed is very good: when you mention a problem with an approach, you also provide the solution rather than just identifying problems.

  • @ajaypuri1837 5 years ago +3

    Narendra L! You're doing a good job! I watched a couple of your videos. Keep it up!

  • @JoshKemmerer 3 years ago +1

    I love your voice brother. It makes it exciting to listen to what you have to say about this very interesting design topic.

  • @Awaarige 4 years ago +3

    Bro, you saved my months. Love from Pakistan

  • @valeriiryzhuk4126 5 years ago +5

    One additional case where sliding logs should be used: limiting the bitrate of a video/audio/internet signal. In such a case you need to store the packet size with a timestamp.

  • @RandomShowerThoughts 1 year ago

    I think you're easily the best YouTuber for system design content

  • @1qwertyuiop1000 2 years ago

    I love your cap.. Looks like a trademark for you.. Thanks for all your videos..

  • @PankajKumar-mv8pd 4 years ago +2

    One of the best explanations, thanks man :)

  • @praveenakarapu 5 years ago +14

    Narendra, very informative video, keep it up.
    For locking in the case of a distributed token bucket you can use the following technique:
    optimistic locking, or conditional put - many NoSQL databases support a conditional put. This is how it works:
    * Read the current value, say 9.
    * Do a conditional put with value 10, only if the current value is still 9.
    * When 2 concurrent requests try to update the value to 10, only one of them will succeed; the other will fail, since the current value for that request is now 10.
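The conditional-put steps above can be sketched as follows. `Counter` is a toy stand-in for a NoSQL record (a real store performs the compare-and-set atomically server-side), and `increment_with_retry` is a hypothetical helper showing the retry loop a rate limiter would run:

```python
class Counter:
    """Toy store supporting conditional put (compare-and-set).
    Real NoSQL stores perform this check-and-write atomically server-side."""
    def __init__(self, value=0):
        self.value = value

    def conditional_put(self, expected, new):
        if self.value == expected:
            self.value = new
            return True
        return False

def increment_with_retry(store, retries=3):
    """Optimistic locking: read, then write back only if the value is unchanged."""
    for _ in range(retries):
        current = store.value                       # read current value, say 9
        if store.conditional_put(current, current + 1):
            return current + 1                      # our put won the race
    raise RuntimeError("too much contention, giving up")

store = Counter(9)
print(increment_with_retry(store))      # 10

# A losing writer: it read 9, but the value moved on to 10 underneath it.
print(store.conditional_put(9, 10))     # False
```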

  • @sbylk99 5 years ago +5

    Great tutorial. The tricky part comes at 25:12 :)

  • @ishansoni8494 3 years ago

    Great work Narendra..! I am currently planning to switch jobs and your videos on system design are amazing...!!

  • @rajeshd7389 3 years ago

    Narendra L !! This is just superb ... keep going.

  • @princenarayana 3 years ago +10

    The sliding window can be optimized by setting the queue size to the max requests allowed, and removing old entries (by comparing timestamps) only when the max size is reached.
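The optimization suggested above can be sketched like this (a minimal sliding-log limiter; the class name and API are illustrative, not from the video):

```python
import collections
import time

class SlidingLogLimiter:
    """Sliding-log limiter with the suggested optimization: the log holds at
    most `max_requests` timestamps, and old entries are evicted only when the
    log is full."""
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.log = collections.deque()

    def allow(self, now=None):
        now = time.time() if now is None else now
        if len(self.log) >= self.max_requests:
            # Only now pay the eviction cost: drop timestamps outside the window.
            while self.log and now - self.log[0] >= self.window:
                self.log.popleft()
        if len(self.log) < self.max_requests:
            self.log.append(now)
            return True
        return False

limiter = SlidingLogLimiter(max_requests=2, window_seconds=10)
print(limiter.allow(now=0.0))   # True
print(limiter.allow(now=1.0))   # True
print(limiter.allow(now=2.0))   # False: log full, nothing old enough to evict
print(limiter.allow(now=11.0))  # True: both old timestamps have left the window
```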

  • @divyeshgaur 5 years ago

    Thank you for sharing the video. Neatly explained.

  • @rangak7502 5 years ago +1

    Awesome work sir.. 👍🏼

  • @vinodcs80 2 years ago

    Very comprehensive video. Great work. Subscribed.

  • @mohammadfarseenmanekhan4820 2 years ago

    Very underrated YouTube channel for system design

  • @mostaza1464 5 years ago +1

    Great video! Thank you!

  • @saip7137 4 years ago

    You have a new subscriber. Thanks for making this video.

  • @ShivamSingh-jw8ey 3 years ago +2

    04:15 Rate Limiting Algorithms
    25:11 Race Conditions in distributed systems

  • @keatmin 3 years ago +4

    Thanks for the great tutorial, but I have a question: how would a rate limiter service obtaining a lock on a record in a separate DB affect another rate limiter service obtaining the count from a different DB within a node?

  • @rationalthinker3223 1 year ago

    Outstanding Explanation

  • @krishankantsharma3655 4 years ago +3

    Sir, for Amazon, is there any particular series of questions you would suggest?

  • @imranhussain8700 3 years ago

    Great content. Thanks for sharing.
    Just one question: should there be only one LB, which sends the request to either A1 or A2?

  • @aeb242 9 months ago

    Great lesson! Thank you!

  • @saurabhchako89 1 year ago

    Great video. Well explained.

  • @amitchaudhary6199 4 years ago

    Great work Narendra👍👍

  • @themynamesb 3 years ago

    Great video.. Thanks for the knowledge.

  • @VirgiliuIonescu 4 years ago +3

    For the last example with concurrency: how about optimistic locking on the counter? The request count has a version. If you try to update from 2 different RLs, one of them will have a version smaller than the current one and will fail. The RL can retry or drop.

  • @nishathussain3672 4 years ago

    I love your videos. Thank you for making such detailed videos which explain the concepts so clearly. :)

  • @cantwaittowatch 4 years ago

    Well explained Narendra

  • @helishah6719 3 years ago +1

    For the local-memory solution you provided, how is it different from the solution you explained just before (where the rate limiter is connected directly to Redis)?

  • @dragonmohammad 4 years ago

    Distributed systems, a necessary evil.. very nicely explained, Narendra !!

  • @madhusogam5823 4 years ago

    very nice tutorial .. great work :)

  • @arun5741 5 years ago +1

    As usual Naren rocks !!!

  • @vishalkohli3953 3 years ago

    What a guy!! bless you bro

  • @rbsrafa 1 year ago

    Great video, congrats!!

  • @md.abdullahal-alamin8059 5 years ago

    Great tutorial :)

  • @adityagoel123able 3 years ago

    Awesome Narendra..

  • @screen189 5 years ago +6

    Hi Narendra - you are doing a good job with your knowledge transfer. I suggest you cover these topics as well: a) job scheduler, b) internals of ZooKeeper, c) distributed-systems concepts like 2PC, 3PC, Paxos, d) DB internals.

    • @TechDummiesNarendraL 5 years ago +6

      Added to TODO, thanks

    • @screen189 5 years ago +1

      Thanks for your response. Looking forward to more videos!! @@TechDummiesNarendraL

  • @SanjayKumar-di5db 3 years ago +10

    You can solve this with the help of the increment or decrement method in Redis, which operates atomically on any key, so there is no chance of data inconsistency and no need for a lock 😊

    • @himanshu111284 3 years ago +2

      2 services firing increments concurrently will still face the same problem, so I think it will not work without locking. Read + write has to be an atomic transaction.

    • @SanjayKumar-di5db 3 years ago +5

      @@himanshu111284 In Redis, the increment and decrement methods on a key are atomic, so no lock is needed.

    • @rajsekharmahapatro 2 years ago

      @@SanjayKumar-di5db First time I am learning something new by going through YouTube comments, bro. Thanks for it, man.

    • @xuanwang7400 2 years ago

      "Compare and set" logic works perfectly without explicit locking in the simple-operation case. But in a complex situation the app server may need a few requests, e.g. read the data first, then do some processing, then write back. Two servers can then do the same thing with the same data at the same time, hence a race condition.

  • @anand2009ish 2 years ago

    Excellent..hats off

  • @NAVEENkumar-vz6qv 4 years ago

    This is really nice.. Do you have anything on how a coupon system works? For example: DoorDash, Uber Eats or rideshare coupons.

  • @knimr3 3 years ago

    Hi Naren, what are the essential differences between the sliding window log and the leaky bucket? They look pretty much the same in functionality.

  • @DharaVisual 3 years ago

    Great work! Would you be able to do a system design for elevators? A parking lot?

  • @ayaskanta100 5 years ago

    Sir, thank you. What is the internal implementation of the token bucket from the data-structure side?

  • @shrimpo6416 2 years ago

    Perfect! I wish I could give you 1,000,000 likes!

  • @poojachauhan1509 3 years ago

    Great work.
    Searching for system design practice like LeetCode or HackerRank...

  • @DenisG631 5 years ago

    A good one. Thanks

  • @ashutoshbang5836 2 years ago

    Great video, keep up the good work :)

  • @codetolive27 5 years ago

    Informative !!

  • @anuraggupta6890 4 years ago +4

    Narendra, where do you get such a great understanding of systems from?

  • @javacoder1986 4 years ago

    Thanks for a great video, very informative; however, the last several minutes are not as clear and crisp as the rest of the video.

  • @feitongyin3291 4 years ago

    A question about using locks. In the example where two requests come in, RL1 gets the lock first and then "locks" the key. How does it lock the key in a way that impacts RL2? I'd imagine it only has access to the DB in its own region (the one on the right in your drawing), so how does it lock the key such that when RL2 accesses the DB in its own region (the one on the left), it knows to wait for the lock to be released? Thanks for the video, as always.

  • @dev-skills 5 years ago +6

    Redis provides INCR and DECR commands, which are atomic increment and decrement operations on its integer data type. Will this not take care of distributed access without any lock?

    • @victoryang7734 4 years ago

      I think his assumption is that the Redis instances are separate.

    • @Priyam_Gupta 3 years ago

      Yes, this will take care of it, as they are atomic.

    • @abcdef-fo1tf 1 year ago

      @@victoryang7734 What does a separate Redis mean? Is distributed Redis not a shared cache?

  • @DeepakMishra117 5 years ago

    Is there any way to use sticky sessions for a user at the seconds level, so that we can use the sliding window counter and the nodes can sync themselves over a minute?

  • @mukulchakravarty8788 5 years ago

    Hi, is it possible to lock a key across replicas? They are essentially copies of each other, right?

  • @lolnikal6851 4 months ago

    20:36 Sliding Window counter
    The rate limit is 10 R/M, while in the explanation he considered 10 R/S, so please don't get confused and think he is wrong.
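Whichever unit is used, the sliding-window-counter estimate works the same way. A minimal sketch of the common formulation (the previous window's count is weighted by how much of it still overlaps the sliding window; the function name is illustrative):

```python
def sliding_window_count(prev_count, curr_count, window_seconds, elapsed_in_current):
    """Weighted estimate used by the sliding-window-counter algorithm:
    blend the previous window's count by the fraction of that window
    still covered by the sliding window."""
    overlap = (window_seconds - elapsed_in_current) / window_seconds
    return prev_count * overlap + curr_count

# Limit of 10 requests/minute: 8 requests in the previous minute,
# 4 so far in this one, 30 s into the current window.
estimate = sliding_window_count(8, 4, 60, 30)
print(estimate)  # 8.0 -> under the limit of 10, so the request is allowed
```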

  • @michael4799 3 years ago

    For the distributed rate-limit situation: even though one user sends two requests at the same time to one server, it doesn't mean the two processing threads will handle them serially, so the inconsistency problem still seems to exist. I think to address this problem we can make the read-and-update operation atomic with Redis + Lua.

    • @prajwal9610 2 years ago

      Redis does this by having a lock, which is already suggested in the video.

    • @rekhakalasare4910 1 year ago

      @@prajwal9610 Yes, but in the case of local memory, suppose a single user's two requests go to 2 regions, and each region's local cache first reads from the DB and then updates the cache and DB. Then there is still inconsistency, as both requests operate in parallel.

  • @biboswanroy6699 4 years ago

    Amazing content

  • @manasranjan4 2 years ago

    Good bro. Awesome

  • @akshaytelang4532 4 years ago +1

    Can't we use ZooKeeper for synchronization to manage requests across multiple regions?

  • @OmarGuntaue 4 years ago

    Is it OK that the LB is before the RL? Shouldn't it be the other way around?

  • @DebasisUntouchable 4 years ago

    thanks for this video

  • @santoshdl 4 years ago

    thanks Narendra

  • @pratyushprateek2503 1 year ago

    In the case of the token bucket algorithm, isn't Redis thread-safe? Or can't we enforce synchronization using locks if requests from multiple application servers are meant to be served concurrently?

  • @srivsrivastava 5 years ago

    Hi Naren.. do you have sample code for the rate limiters?

  • @Sudarshansridhar 4 years ago

    Can we have a sync service + memory between the RL and Redis/Cassandra?
    Then all RLs would go via the sync service to get a quick response.
    The sync service is responsible for writing to Redis/Cassandra.
    If the sync service is not available, the RL makes a direct call to Redis/Cassandra.
    Not sure how optimal this change is.

  • @raghugrinus4779 2 years ago

    Can you please let us know which books you read to prepare for the video?

  • @reaganrosario 4 years ago

    Why can't we use the distributed counter method that you explained in another video? Like having ZooKeeper manage the counters?

  • @ravisoni9610 4 years ago

    great explanation (y)

  • @molugueshwar1 4 years ago

    Hi Narendra,
    In the token bucket scenario above, I would like to add one point: in order to reset the request count back to 5 after one minute, we have to store the time (start time) of the first request so that we can check for a one-minute difference before resetting the count.

    • @nikhilneela 4 years ago

      Yes, I agree. If you simply reset the tokens to 5 when the minute changes, it would allow more than 5 requests/minute. Store the start time and always compare it with the current request time; only if the delta is equal to or more than a minute can we reset the tokens. @Eshwar, is this what you meant?

    • @molugueshwar1 4 years ago

      @@nikhilneela Yes, Nikhil. That's right.

  • @dataguy7013 4 years ago +2

    @Naren, even with local memory you can have inconsistency; it is just a bit faster. Do I have that right?

    • @Priyam_Gupta 3 years ago

      Yes, it won't work. If we are talking about updating it all the time, it's better to rely on the Redis cluster to do the copying than on our application server.

  • @ramlakhan-fp7ct 5 years ago

    Thanks Man

  • @RandomShowerThoughts 1 year ago

    31:00 And you can't even lock across the nodes. If you are sharding then maybe, but as soon as you introduce replication I don't think it'll just work like that.

  • @mritunjayyadav3788 4 years ago +2

    Hi Narendra, great work, I loved your content, but I have one question: why not keep only one Redis DB instance instead of two? In that case we don't have to sync them. Or is there any significance to having different instances of Redis (per LB, RL, app instance)?

    • @anirbanghosh1176 2 years ago +2

      @mritunjay yadav - in a distributed system you cannot have a single point of failure.

    • @namanmishra08 1 year ago

      That's because the entire point of having multiple regions is fault tolerance. For a single region we can have a primary-secondary model with asynchronous replication between them, but in a multi-region setup each component should have a replica. One approach is to use the distributed locks that Redis provides.

  • @abcd12272 3 years ago

    Could there be race conditions with the window method too?

  • @VasQuezadilla 5 years ago +1

    Can you do a system design for Groupon?

  • @andriidanylov9453 1 year ago

    Thank You

  • @akhashramamurthy8774 4 years ago

    Thank you, Narendra. The incredible content archive you are building is invaluable. Thank you.

  • @thejaswiniuttarkar620 3 years ago

    The threshold is calculated per second; for example, AWS API Gateway allows 5000 req/sec. We can just declare an array-backed queue or stack, start pushing elements into it, and keep flushing it every second. Plus or minus 10-20 requests would not matter. If the stack/queue fills up it would throw an error, and that error could be propagated to the user!!

  • @chikudholu 5 years ago +1

    Awesome!

  • @rahulsharma5030 3 years ago

    @31:00 You have confused me here. If we use locks, region 1 will hold a lock in region 1's Redis only; region 2's call can still read old data from region 2's Redis and allow more requests. Shouldn't R1 theoretically take a lock on all regions' DBs, if you say locking is one way to solve consistency?

  • @andresantos-yx3bh 1 year ago

    amazing video

  • @cbest3678 3 years ago

    Don't the token bucket and the fixed window have the same boundary-request problem? Since even with a token bucket you can request more tokens at the end of the first window and more tokens at the start of the second.

  • @bhaskargurram94 4 years ago +4

    Thanks for the nice explanation. One question: what is the difference between the fixed window counter and the token bucket? Are they not doing the same thing?

    • @paraschawla3757 3 years ago

      A token bucket is a number of tokens in a bucket, with a refill() happening every n min/sec. The number of tokens represents the number of requests that can be served; it goes down with every new request, but tokens also keep increasing based on the rate limit. The fixed window counter has user+timestamp as the key and a count as the value for a particular window, then starts again.
      The essence of the two algorithms is very different.

    • @curiousbhartiya8410 3 years ago

      @@paraschawla3757 But the underlying problem of both algorithms is the same, which is what the original comment meant: they both might end up serving twice the desired RPM.

    • @PABJEEGamer 3 years ago

      With the token bucket algorithm we have control over the cost of each operation (we can associate a token cost with each operation), whereas in the fixed window we don't, since we increase the counter by 1 each time.

  • @shrikantkhadilkar4019 2 years ago

    Hello Narendra, the fixed window counter looks the same as the token bucket to me - only the concept is different, but the effect will be the same, right?

  • @singhalvikash 3 years ago

    Nice explanation. Could you please make a video about a Google AdSense analytics collection system?

  • @shreysom2060 5 months ago

    Can't we use a Redis sorted set to avoid the concurrency issue?

  • @153deep 4 years ago

    Consider this scenario for the token bucket: we can only serve 5 requests/5 min. One request (10:05), two requests (10:06), two requests (10:07): we have served all 5 requests, so at 10:07 we have 0 tokens. Now when we get a new request at 10:11 it should be valid, because the requests at 10:05 and 10:06 should have aged out; but per the token bucket it won't be served, because the count was set to 0 at 10:07 and will only be reset at 10:12.

    • @vaidyanathanpk9221 4 years ago

      Not really. Read about the token bucket algorithm.
      Before serving the operation at 10.12, it'll figure out the time elapsed so far (10.12 - 10:07), then figure out the number of tokens to add for that elapsed time (for 5 minutes, we add 5 tokens). These tokens are added before the serving calculation, so the request can be served.
      The key point is maintaining something called lastUpdateTime in the bucket.
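The lazy-refill idea from the reply above can be sketched as follows (a minimal token bucket, with a `now` parameter added so the 10:05-10:11 scenario can be replayed deterministically; the class and parameter names are illustrative):

```python
import time

class TokenBucket:
    """Token bucket with lazy refill: on each request, tokens accrued since
    `last_update` are added before the capacity check."""
    def __init__(self, capacity, refill_per_second, now=None):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = capacity
        self.last_update = time.time() if now is None else now

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Lazy refill, capped at capacity.
        elapsed = now - self.last_update
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_second)
        self.last_update = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5-token bucket refilling at 1 token/minute; t = seconds after 10:05.
bucket = TokenBucket(capacity=5, refill_per_second=1 / 60, now=0)
served = [bucket.allow(now=t) for t in (0, 60, 60, 120, 120)]
print(served)                  # [True, True, True, True, True]
print(bucket.allow(now=360))   # True: by 10:11, 4 tokens have refilled
```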

  • @sethuramanramaiah1132 2 years ago

    Doesn't the fixed window counter also run into the concurrency issue, like the first scenario?