Gentle backups with mysqldump

More often than not we are required to make backups of MySQL databases running in production.

If the db is small there are relatively few problems. Once the database starts growing in size, the risk is that the backup steals resources away from your app and therefore impacts the customer experience.

1. Out of the box #

Mysqldump, out of the box, can be invoked as follows:

mysqldump -h something.com -u user -pPassword dbName > /backup/backup.sql
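If you run this from cron, chances are you want a dated file rather than overwriting the same one every time. A minimal variant (host, user and dbName are the same placeholders used above):

mysqldump -h something.com -u user -pPassword dbName > /backup/backup-$(date +%F).sql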

2. Switching to repeatable-reads #

If your underlying storage engine is InnoDB and your table structure is not going to change during the backup, we can safely add the --single-transaction option: this way the export runs in a single transaction with REPEATABLE READ isolation level, giving you a consistent backup.

mysqldump -h something.com -u user -pPassword --single-transaction dbName > /backup/backup.sql
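Before relying on --single-transaction it is worth double checking that the tables involved actually use InnoDB. A quick sketch of such a check, reusing the same placeholders as above:

mysql -h something.com -u user -pPassword -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'dbName';"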

PS: the --quick option is enabled by default since MySQL 5.1, so there is no need to explicitly specify it.

3. Slowing mysqldump down #

Even with --single-transaction, mysqldump still tries to read as much data as possible from MySQL in the shortest amount of time. If you are exporting big databases this effectively means hammering MySQL while your backup is running.

Luckily enough, the small cstream tool comes to our rescue: if you can’t limit the speed of mysqldump itself, you can limit its network speed, effectively slowing down the number of queries per second on the db.

mysqldump -h something.com -u user -pPassword --single-transaction dbName tableName | cstream -t 1000000  > /backup/backup.sql

Adding cstream -t 1000000 we limit the maximum speed of the mysqldump tool to roughly 1 MB/sec. This number can be tweaked (lower or higher) to achieve gentler (and slower) backups or more aggressive and quicker ones.

Thanks to Brandon for suggesting this on Stackoverflow
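If cstream is not available on your system, pv is a common alternative (not part of the original suggestion): its -L flag caps the throughput in bytes per second and -q suppresses the progress bar.

mysqldump -h something.com -u user -pPassword --single-transaction dbName tableName | pv -q -L 1000000 > /backup/backup.sql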

4. Saving space #

If you are backing up a lot of data, chances are that you want to gzip the output of mysqldump

mysqldump -h something.com -u user -pPassword --single-transaction dbName tableName | cstream -t 1000000  | gzip > /backup/backup.sql.gz

Out of the box, gzip uses a compression level of 6 on a 1 to 9 scale (where 1 is the lowest/fastest compression and 9 the highest/slowest one).

If you play around with the compression levels long enough you will soon realize that, as the compression level grows, the return in terms of final file size diminishes whereas the CPU usage ramps up more than linearly.

In most cases a compression level between 1 and 4 works better than the default value (6) and is significantly gentler on the host running the gzip command.
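A quick way to find the sweet spot for your own data is to recompress an existing uncompressed dump at a few different levels and compare timings and sizes. A rough sketch, assuming /backup/backup.sql is already on disk:

for level in 1 4 6 9; do
  time gzip -$level -c /backup/backup.sql > /backup/backup.$level.sql.gz
done
ls -lh /backup/backup.*.sql.gz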

In our case we are happy with 1 (or --fast):

mysqldump -h something.com -u user -pPassword --single-transaction dbName tableName | cstream -t 1000000  | gzip --fast > /backup/backup.sql.gz
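When you need to restore, the compressed dump can be streamed straight back into MySQL without writing an intermediate file (same placeholders as above):

gunzip < /backup/backup.sql.gz | mysql -h something.com -u user -pPassword dbName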

Results #

Trying the above command against an m1.small Amazon RDS instance and a 1 GB database, I achieved the following results.

 
