Nginx Rate Limiting for Incoming Requests

In this article we will discuss how to configure Nginx rate limiting for incoming requests.

Nginx’s “rate-limiting” feature allows you to limit the number of HTTP requests a client/IP can make in a given period of time. This is helpful when you want to protect your server from brute-force or DDoS attacks. We use this feature to rate limit requests on our API server. Let us see how to configure Nginx rate limiting.

Rate limiting is configured with two directives, “limit_req_zone” and “limit_req”.

See the following example:

limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=apilimit;
        proxy_pass http://127.0.0.1:8080;
    }
}

In the above example, we created a new memory zone called “apilimit” with a size of 10 MB. This zone stores the state of each client IP and how often it has accessed a rate-limited resource. With 10 MB you can store roughly 160,000 client states, so if you have a busy website you might need to increase the zone size. We also set a limit of 10 requests per second on this zone, which works out to one request every 100 ms. If more than one request arrives within a 100 ms window, Nginx rejects the extra requests with an error (HTTP 503 by default). This is probably not what we want, because web applications tend to be bursty in nature. We can handle this with the “burst” parameter. Let us see how to use it.

limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=apilimit burst=10;
        proxy_pass http://127.0.0.1:8080;
    }
}

You can see the new parameter “burst=10” in the configuration block. With this, if multiple requests reach the server within a 100 ms span, Nginx serves the first request immediately and puts the rest into a queue. It then processes the queued requests at a rate of one every 100 ms, and returns a 503 to the client only if an incoming request would push the number of queued requests over 10. For example, if 12 requests arrive at the same instant, the first is forwarded immediately, the next 10 are queued, and the last one is rejected.

The problem with the above method is that it introduces delay. To tackle this, Nginx provides another parameter, “nodelay”. With it, requests are processed immediately (forwarded to the proxy) until the burst limit is reached, after which further requests are rejected.

limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=apilimit burst=10 nodelay;
        proxy_pass http://127.0.0.1:8080;
    }
}
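By default, Nginx rejects over-limit requests with a 503 status and logs them at the “error” level. If you would rather return 429 (Too Many Requests) or make the logs quieter, the “limit_req_status” and “limit_req_log_level” directives can be set alongside “limit_req”. A minimal sketch extending the example above:

server {
    location /login/ {
        limit_req zone=apilimit burst=10 nodelay;
        # Return 429 instead of the default 503 for rejected requests
        limit_req_status 429;
        # Log rejections at "warn" instead of the default "error"
        limit_req_log_level warn;
        proxy_pass http://127.0.0.1:8080;
    }
}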

How to apply rate limiting to a single IP?

If you want to apply rate limiting only to the IP address 192.168.1.131, you can use the following configuration. Note that “limit_req_zone” (and the “geo” block) must be defined in the “http” block, while the “location” that applies the limit lives inside “server”.

### Inside the "http" block

geo $limit {
    192.168.1.131/32 $binary_remote_addr;
    default          "";
}

limit_req_zone $limit zone=apilimit:20m rate=5r/s;

### Inside the "server" block

location / {
    limit_req zone=apilimit burst=20 nodelay;
}
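Because requests whose key evaluates to an empty string are not counted against the zone, only 192.168.1.131 is rate limited here. The same trick can be flipped to exempt a trusted address from an otherwise global limit. A minimal sketch, using a hypothetical trusted IP 192.168.1.200:

### Inside the "http" block

# Every client is keyed by its address except the trusted IP,
# which maps to "" and is therefore never counted or limited.
geo $limit {
    default          $binary_remote_addr;
    192.168.1.200/32 "";
}

limit_req_zone $limit zone=apilimit:20m rate=5r/s;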

As always, feel free to drop us a note if you have any queries or feedback using the comment form below. Always happy to help!

 
