The reason your videos are good is because it doesn’t feel like a normal RUclips tutorial, it feels like a co-worker showing you some cool tricks he came up with or how he solved an issue.
agreed
Seems like rate limiting based on IP can be done before the request even reaches the application server, on reverse proxy or load balancer level. But if you need rate limiting based on business logic (let's say which package the user bought) then it needs to live in the app logic. Layer 4 vs Layer 7 rate limiting
Yeah, I usually rate limit in Cloudflare. Limiting by userId is probably more important
Please make a video on:
1/ implementing background jobs in nextjs (e.g., your app can generate an entire movie script, but it takes 20 minutes, so you want to implement it as a background job -- the customer can leave the site and get an email notification when it's done)
2/ how to address situations where your app depends on external APIs that have very low concurrency limits
Use bullmq
I'm currently doing an internship for my Bachelor's degree in Applied Computer Science, and I happen to be working with NextJS for the first time, which has been a great experience so far. Some of your recent videos align almost perfectly with what I'm working on in the internship. It has helped me out a lot, and it's interesting to see another developer's point of view. Keep making the great videos!
I just cant thank you enough. Excellent content ❤.
Great video as always. I like the considerations and drawbacks you mention when explaining why you make certain decisions. One thing I noticed that could be fixed is that the rate limiting windowing does not slide. What I mean by that is if you configure it to allow two requests every ten seconds, your current logic would allow requests through at t = 0s, 9s, 12s, 14s. The last three requests occur within a span of five seconds, but your logic would reset the count to zero at t = 12s and allow all requests through. Seemingly simple mechanisms like this can often end up being somewhat complex.
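A minimal sketch of the sliding-window idea this comment is describing (names and values are illustrative, not from the video): instead of a counter that resets at fixed boundaries, keep a log of request timestamps per key and drop the ones older than the window on each check. With a limit of two per ten seconds, the t = 14s request is now rejected because two requests (t = 9s, 12s) already fall inside its trailing window.

```typescript
// Sliding-window-log rate limiter: the limit holds over ANY 10-second
// span, not just fixed windows that reset on a timer.
const WINDOW_MS = 10_000;
const LIMIT = 2;

// Per-key log of timestamps (ms) of recently allowed requests.
const logs = new Map<string, number[]>();

function isAllowed(key: string, now: number): boolean {
  // Keep only timestamps still inside the trailing window.
  const recent = (logs.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    logs.set(key, recent);
    return false;
  }
  recent.push(now);
  logs.set(key, recent);
  return true;
}
```

The trade-off versus the fixed-window counter is memory: you store up to `LIMIT` timestamps per key instead of one counter.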
Some things aren't worth implementing for that 1% edge case, but it might be an interesting interview question.
@@WebDevCody agreed. It’s not worth it.
Great video and thanks for answering my previous question here more in-depth.
Hey Cody, suggestion for a video using this one as a segue:
"let's automate testing for the rate limiting using Playwright", where you automate the part of spamming logins with the same account in an e2e test (it could be any library/framework; Playwright is just an example).
This test would obviously give more contractual coverage than just the rate limiting, but that would be the point as well.
Thank you for all the videos!
Going to implement this tonight, love these videos
IP addresses can sometimes be considered personal under strict GDPR rules. Is there any reason to prefer rate limiting via IP over session-based?
If you have their user ID, you should just do that, like I showed near the end. If you need IP rate limiting, you should probably just have CloudFront limit by IP in the first place, or hash the IP so that you only store hashes inside that JS map.
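The "hash the IP" suggestion could look something like this (a sketch, not the video's code; `RATE_LIMIT_SALT` is a hypothetical environment variable): derive the rate-limit key from a salted hash so raw addresses are never held in memory.

```typescript
import { createHash } from "node:crypto";

// Hash IPs before using them as rate-limit keys, so raw addresses are
// never stored. A server-side salt makes brute-forcing the (small)
// IPv4 space back out of the hashes harder.
const SALT = process.env.RATE_LIMIT_SALT ?? "dev-only-salt"; // hypothetical

function rateLimitKey(ip: string): string {
  return createHash("sha256").update(SALT + ip).digest("hex");
}
```

The rest of the limiter is unchanged; it just keys its map on `rateLimitKey(ip)` instead of `ip`.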
thanks for the video! comes at a perfect time cause I've been looking into rate limiting
Thanks for the video man ❤ you have seen my comment and made this gem for me
Thank you, been waiting for this 🤝👏🏼👏🏼
Looks convenient. One disadvantage of fixed-window limiting, though, is that it can sometimes interfere with the workflow of normal (not malicious) users. If a user exceeds the limit in the first half of the window, they have to wait out the remaining half even if they only need one more operation. Is there a simple way to make an adaptive rate limiter that handles this? Not sure how to word it better. A moving window perhaps? A queue?
Yeah, I mean, you can code it however you want. You can add burst capabilities so that users have a baseline of requests per second but are also granted, say, X extra requests every 5 minutes, and that allowance also refills. I guess, though, I'd ask why you want one if you've set your rate limit thresholds up correctly from the start.
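The "baseline rate plus refilling burst allowance" idea is essentially a token bucket; a minimal sketch (capacity and refill rate are illustrative, not from the video):

```typescript
// Token bucket: each key has up to CAPACITY tokens; tokens refill at a
// steady rate. A brief burst spends saved-up tokens instead of locking
// the user out for a whole window.
type Bucket = { tokens: number; last: number };

const CAPACITY = 5;             // max burst size
const REFILL_PER_MS = 1 / 1000; // baseline: 1 token per second

const buckets = new Map<string, Bucket>();

function take(key: string, now: number): boolean {
  const b = buckets.get(key) ?? { tokens: CAPACITY, last: now };
  // Refill proportionally to elapsed time, capped at capacity.
  b.tokens = Math.min(CAPACITY, b.tokens + (now - b.last) * REFILL_PER_MS);
  b.last = now;
  if (b.tokens < 1) {
    buckets.set(key, b);
    return false;
  }
  b.tokens -= 1;
  buckets.set(key, b);
  return true;
}
```

A user who needs "just one more operation" only waits until a single token refills (1 second here), rather than the remainder of a fixed window.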
I can suggest using a Map instead of an Object. Map is optimized for setting and getting keys. A little hint from me 😉
What about multiple devices behind a NAT network? They all exit with the same IP.
Think I'd rather host a Redis instance for this to avoid issues with multiple servers and memory pressure on the app server (less important, as you mentioned).
Introducing Redis early gives the app nice caching tooling too.
Works fine, but that means you have yet another thing to manage
what vscode theme do you use?
First!!! Good job as always love ❤
thanks babe!
In all of my projects I make a "rate limit action" model and use that to rate limit, then a cron job cleans them up after two days. Seems like a simpler approach to me.
Where does the data get saved? I'm confused, you just make an empty object and put a value into it every time the function is called. Is it saved on the server?
This was an in-memory rate limiter. If your system needs to scale to multiple VPS instances, you'll end up needing to use Redis and store the rate limit keys and counts there.
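To make "in memory" concrete, a minimal sketch of the pattern (illustrative names, not the video's exact code): the tracker object lives at module scope, so it persists in the Node process's RAM between requests and disappears on restart; nothing is written to disk or a database.

```typescript
// Fixed-window, in-memory rate limiter. `trackers` is a module-scope
// variable: it survives across requests for as long as this Node
// process runs, and is lost when the process restarts.
type Tracker = { count: number; expiresAt: number };

const trackers: Record<string, Tracker> = {};

function rateLimit(key: string, limit: number, windowMs: number, now: number): boolean {
  const tracker = trackers[key];
  // No tracker yet, or the previous window expired: start a new window.
  if (!tracker || now > tracker.expiresAt) {
    trackers[key] = { count: 1, expiresAt: now + windowMs };
    return true;
  }
  tracker.count += 1;
  return tracker.count <= limit;
}
```

This is also why it breaks on serverless or multi-server setups: each process has its own `trackers`, so counts aren't shared.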
Thank you so much
Why not arcjet
Seems pretty easy, really appreciate that.
We just may need to clear the trackers object, since it could get DDoSed as you said.
Thanks again, now at least we've got an idea of how it works 🤝
Just adding a rate limiter in nginx, how does that sound?
Sure, if you can rate limit each individual API endpoint separately
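Per-endpoint limiting is doable in nginx with separate `limit_req_zone`/`limit_req` directives per location; a hypothetical config sketch (paths, rates, and the `app` upstream name are illustrative, not from the video):

```nginx
# Separate zones let different endpoints get different limits,
# keyed here by client IP.
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api:10m   rate=10r/s;

server {
    location /api/login {
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://app;
    }
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://app;
    }
}
```

The limitation the thread points out still applies: nginx only sees the IP (or headers), so per-user limits tied to business logic still have to live in the app.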
Do you know if there is any data privacy law concerns with regards to storing ips in transient memory?
Probably, you could always just hash them maybe?
Lol
A couple of improvements: consider using an actual Map; they are much faster than plain objects when you need to insert and remove keys constantly. You should also consider removing old keys once they have been expired for some time.
Yeah, some kind of interval that removes expired keys would be useful
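The cleanup interval could be as simple as this sketch (assumes the limiter stores a Map of trackers with an `expiresAt` timestamp; names are illustrative):

```typescript
// Periodic sweep: delete trackers whose window has expired, so the map
// can't grow without bound when many unique keys/IPs are seen.
type Tracker = { count: number; expiresAt: number };

const trackers = new Map<string, Tracker>();

function sweep(now: number): void {
  // Deleting from a Map while iterating it with for...of is safe in JS.
  for (const [key, tracker] of trackers) {
    if (now > tracker.expiresAt) trackers.delete(key);
  }
}

// In a real app, run it on a timer, e.g.:
// setInterval(() => sweep(Date.now()), 60_000);
```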
made THE SAME EXACT THING just a week ago for my side project 🤯
Thanks again ❤
Reading you being unhinged on X and then coming here to see a very useful video of yours almost immediately is so bizarre 😂😂😂😂😂😂
Thanks for the knowledge
😆 RUclips is much closer to my own personality. On X I just post things for fun to ruffle some feathers
@WebDevCody hey, I ain't complaining, I've been a subscriber for 2 or so years, I can't remember, and I enjoyed every iteration.
Even discord community is cool
@WebDevCody sorry to jump on this some more but damn i just saw your subscriber count and congrats mate
@@omomer3506 thanks man, it's been a hustle
You will run out of memory with a huge number of users by not clearing old expirations
I’ll add an interval to clear it out
Rate limiting on IP can be easily bypassed with proxies. And for the other one, I feel like they could hit the API with a random string as the user ID, since it doesn't check anywhere whether the user ID is real.
The user id came from the authenticated session. You must make an account to get a user id
What's even the real use of this?
The request still reaches the server, and the server now has to handle load it shouldn't have had in the first place.
Or am I missing something?
I agree, shouldn't this be a job for a firewall?
Ofc, a firewall can't know your user IDs, but filtering by ip should be doable in a firewall, right?
I thought I said the use case. If a user decides to make an account and flood your system with created resources, you'd at the very least want to limit how fast they can do that. Then you can ban their account once you find they're abusing your system. Additionally, I have an invite system which sends out emails. If a user abuses that, I'll be charged for all the emails sent out. I want to limit their ability to send out tons of emails and cost me a lot of money.
Redundancy is not handled, sir. There are similar groups
Not sure what you mean
@@WebDevCody there are repetitive entries. The same groups can be created multiple times?
What's the difference between a utils and lib folder
This doesn’t scale
Right, I talked about this in the video. Did you watch or just comment?
@@WebDevCody I did watch the video; you partially mentioned the issue related to using it on a serverless platform or AGW. I would have preferred you to also mention that this wouldn't work for users that are on the same network.
@runners4tme, he basically explained how he did it. You could easily expand it to use it however you want. Like in the video, he showcased using user.id as a key instead of the IP address. I mean, you got the idea behind it, and that's what matters after all.
@@runners4tme that’s a good point about people being on the same network. Maybe a better approach, instead of IP, would be to set a UUID when they first load the app, which acts as a unique identifier for that public user.
@@WebDevCody what would be stopping someone from clearing cookies?
thank you