Nginx by examples: DoS protection

If your web server is directly exposed to internet traffic, it's always a good idea to have some sort of Denial of Service (DoS) protection enabled.

Nginx alone cannot protect against more complex, distributed DoS attacks (that would require a CDN), but this is no reason not to have some basic protection in place, which is also very easy to set up.

 Connection Limiting

It is a sensible precaution to avoid too many connections from a single IP, and it's the first line of defence against trivial DoS attacks (i.e. a simple script flooding our backend from one server with one IP).

limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 1000;

This simple snippet enforces a maximum of 1000 simultaneous connections per IP at any time.

10 MB (10m) gives us enough space to track connection state for roughly 160k client IPs (the zone stores one small state entry per IP, not a request history).

The 1000 limit can be tweaked and lowered if necessary, always keeping in mind that our clients may be behind proxies, so many connections could hit our server from very few IPs.
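For reference, here is a sketch of how these directives might be laid out in a full configuration (the zone name, limits, status code and backend address are just examples to adapt):

# defined once in the http block: one small state entry per client IP
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    listen 80;

    location / {
        # at most 1000 simultaneous connections per client IP
        limit_conn addr 1000;
        # optionally answer 429 instead of the default 503 when the limit is hit
        limit_conn_status 429;
        proxy_pass http://127.0.0.1:8080;
    }
}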

 Rate Limiting

Rate limiting works very similarly to connection limiting, but from the perspective of how many requests per second are accepted from a single IP address.

limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
limit_req zone=one burst=10;
limit_req_status 503;  # default value anyway

In this example each IP is allowed 5 requests per second on average; up to 10 excess requests are queued and delayed so the backend still sees at most 5 req/sec, and anything beyond that is rejected with a 503.

Alternatively, we can add the nodelay option

limit_req zone=one burst=10 nodelay;

so that requests exceeding 5 req/sec are served immediately (rather than being delayed) as long as they fit within the burst, making burst (10 req/sec) a hard limit after which requests get refused.
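Putting it together, a rate-limited API location could look roughly like this (the /api/ path, 429 status and backend address are assumptions, not part of the original setup):

# in the http block
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;

server {
    location /api/ {
        # allow short bursts of up to 10 extra requests, served immediately,
        # and reject anything beyond that
        limit_req zone=one burst=10 nodelay;
        # 429 Too Many Requests is often clearer than the default 503
        limit_req_status 429;
        proxy_pass http://127.0.0.1:8080;
    }
}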

 Bandwidth Limiting

Bandwidth limiting is particularly useful in bandwidth-restricted environments like Amazon EC2, where bandwidth exhaustion causes all traffic to virtually grind to a halt (i.e. your backend effectively goes offline).

limit_rate 50k;
limit_rate_after 500k;

With Nginx we can limit the bandwidth of each connection to 50 KB/s, but only after the first 500 KB have been served, so that the web server selectively slows down requests for big payloads while serving other requests at full speed.
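As a sketch, the throttle can also be scoped to heavy endpoints only (the /downloads/ path is purely illustrative):

location /downloads/ {
    # first 500 KB at full speed...
    limit_rate_after 500k;
    # ...then throttle each connection to 50 KB/s
    limit_rate 50k;
}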

 Caching

Last but not least, caching is the single most effective way to add resiliency to your app: Nginx's highly optimized caching layer can easily serve thousands of req/sec for small payloads.

If you haven't set up caching yet, you should.
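As a starting point, a minimal proxy cache might look something like this (the cache path, sizes and validity times are assumptions to tune for your app):

# in the http block: where to store cached responses and how much to keep
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_cache app_cache;
        # cache successful responses for 10 minutes
        proxy_cache_valid 200 10m;
        # keep serving stale content if the backend errors out or times out
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://127.0.0.1:8080;
    }
}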

 