In this article, I will show you how to set up incremental backups of a MySQL database to Amazon S3 using Percona XtraBackup. But first, let's look at the MySQL backup approaches that exist today in addition to XtraBackup.

Types of backups

1) Mysqldump Command

This is the most popular MySQL backup method. It is perfect when you upgrade the MySQL server version, need to fix something in the SQL dump, or, for example, need to move a database from one hosting provider to another.

Mysqldump essentially executes queries like "SELECT * FROM table_name" and writes the results to a text file. You then send this dump to your other MySQL server for execution and restoration. This approach has three main drawbacks:
– restoration can take a very long time. If your database is large, replaying all the queries takes a very long time to complete.
– MySQL locks MyISAM tables against writes. You can work around this for InnoDB tables using the --single-transaction option. This flag guarantees that changes made to InnoDB tables during the dump will not be included in it, so the dump reflects a consistent snapshot. But this does not work for MyISAM tables.
– on a production server, query execution speed drops significantly, because the dump queries churn through the InnoDB buffer pool, and your active queries are pushed to the disk, which slows down their execution.
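For reference, a minimal mysqldump invocation might look like the sketch below. The database name, user, and output path are placeholder assumptions, and the command is only printed here rather than executed:

```shell
# Hypothetical sketch: build a mysqldump command for a database "mydb".
# --single-transaction takes a consistent InnoDB snapshot without locking;
# --quick streams rows instead of buffering whole tables in memory.
DB="mydb"
DUMP_CMD="mysqldump --single-transaction --quick -u backup_user -p $DB"
# On a real server you would run: $DUMP_CMD > /backups/mydb.sql
echo "$DUMP_CMD"
```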

2) Percona Xtrabackup tool

XtraBackup is an open-source MySQL backup utility from Percona. XtraBackup does not lock your database during the backup process, and this is its main advantage. During the backup, XtraBackup physically copies the MySQL data directory and the files under it, without going through the SQL layer, so it does not put query load on the MySQL server. For large databases (100+ GB), it is much faster than mysqldump.

Percona XtraBackup records the log sequence number (LSN) at the start and runs a background process that tracks changes in the transaction log, then reflects those changes in the backup. This goes on the whole time XtraBackup is running, until the backup is complete.
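As a sketch of how this looks in practice (directory paths are assumptions, and the commands are printed here rather than executed):

```shell
# A full backup followed by an incremental one, which only copies pages
# whose LSN is newer than the base backup's. Paths are placeholders.
BASE_DIR=/backups/base
INC_DIR=/backups/inc1
echo "xtrabackup --backup --target-dir=$BASE_DIR"
echo "xtrabackup --backup --target-dir=$INC_DIR --incremental-basedir=$BASE_DIR"
```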

3) MySQL Enterprise Backup

MySQL Enterprise Backup delivers hot, online, non-blocking backups. It is not an open-source backup tool, but it offers a lot of useful features. The standard license costs $5000.
MySQL Enterprise Backup is essentially an analog of Percona XtraBackup and works in much the same way.

4) Binary Logging

You can enable binary logging in MySQL, and MySQL will then log all statements that update data. The binary log is usually used in master-slave configurations, so that the master can send SQL statements to all of its slave servers; the slaves execute those statements to make the same data changes that were made on the master MySQL server.
Binary logs are also used for incremental backups: after a backup file has been restored, the events in the binary log that were recorded after the backup was made are re-executed, bringing the databases up to date from the point of the backup.
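For illustration, enabling the binary log usually comes down to a few lines in the MySQL configuration. The file location, log path, and retention below are assumptions; adjust them for your setup:

```ini
# /etc/mysql/my.cnf — [mysqld] section (paths and values are examples)
[mysqld]
server-id        = 1
log_bin          = /var/log/mysql/mysql-bin.log
expire_logs_days = 7
```

After restoring a base backup, the events recorded since it was taken can then be replayed by feeding the relevant binary log files to mysqlbinlog and piping the output into mysql.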

Read more about binary logs in the MySQL reference manual.

5) Master Slave Replication

You can set up a master-slave configuration, and then you will have a complete copy of your data on the slave servers at any time.

Setting Up MySQL Backup to Amazon S3 using Percona Xtrabackup

This tutorial is written with Debian or Ubuntu in mind. If you use another Linux distribution, it will require small changes. Try adapting it yourself, and if you have any questions, feel free to ask them in the comments section below.

1) Select the required version of Percona Xtrabackup

First, you need to decide which version of Percona XtraBackup is right for you. If you run MySQL 8.0, you need Percona XtraBackup 8.0. If you run MySQL 5.1, 5.5, 5.6, or 5.7, you need Percona XtraBackup 2.4.

2) Install Percona Xtrabackup

In this article, I will cover version 2.4. If you need version 8.0, write in the comments and I will describe it, because there are some significant differences in the bash script that performs the backup.

Run these commands:

wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
percona-release setup pxb24
apt-get install percona-xtrabackup-24

3) Install s3cmd tools

S3cmd allows you to work with Amazon S3 from the command line. We will install it from source.

– open the s3cmd downloads page in a browser
– select the folder with the latest version (2.1.0 at the time of this writing), go into it, and copy the link to the file with the tar.gz extension
– download that file to the server


– install setuptools:

apt-get install python3-setuptools

or, if you still have an old version of Python (before version 3):

apt-get install python-setuptools
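Putting the download and install steps together, a typical sequence looks like the sketch below. The commands are printed rather than executed, and <copied-link> stands for the URL you copied from the download page:

```shell
# Sketch of installing s3cmd 2.1.0 from source; the archive URL is elided.
VERSION="2.1.0"
ARCHIVE="s3cmd-${VERSION}.tar.gz"
echo "wget <copied-link>/${ARCHIVE}"                    # fetch the source archive
echo "tar xzf ${ARCHIVE}"                               # unpack it
echo "cd s3cmd-${VERSION} && python3 setup.py install"  # install system-wide
```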

4) Create S3 bucket

Now we need to create an Amazon S3 bucket where we will store our MySQL backups.
The AWS documentation has instructions on how to create an Amazon S3 bucket.

5) Create a user with write access to your Amazon S3 bucket

We go to the IAM service and create a user named, for example, backup-user:
– select Programmatic access
– select "Attach existing policies directly", create a new policy using the template below, and click Create Policy.
Here is the text of the policy. Replace your-bucket-name with the name of your S3 bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::your-bucket-name"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-bucket-name/*"
            ]
        }
    ]
}

– then we apply this policy to the user
– NB: save the Access key ID and Secret access key after creating the user; you will need them later

6) Configuring s3cmd tools


s3cmd --configure

Answer all the questions, providing the Access Key ID and Secret Access Key you saved earlier. At the end, s3cmd will try to access your Amazon S3 account, but it will not be able to get the list of your S3 buckets, because the user does not have that permission; it can only work with one specific S3 bucket. We don't need to list buckets anyway, since we save our MySQL backups to a single specific Amazon S3 bucket.

IMPORTANT: Despite the 403 access error, save the settings anyway.
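Once the settings are saved, you can sanity-check access by uploading a test file. The bucket name is a placeholder, and the commands are printed here rather than executed:

```shell
BUCKET="your-bucket-name"   # placeholder: the bucket you created earlier
echo "s3cmd put /tmp/test.txt s3://${BUCKET}/test.txt"
echo "s3cmd ls s3://${BUCKET}/"
```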

7) Configuring the bash script

– download the script somewhere on your server, for example to /backups/

chmod 755 /backups/

– edit the variables inside the script that are used for accessing your MySQL server and your Amazon S3 bucket:
# Path for full backup and differential backup. Must end with /
BACKUP_PATH=/backups/
# Name of the backup dir, within BACKUP_PATH

You also need to add the path to the s3cmd tool to the PATH variable so that it can be executed from cron.
– First, find out where s3cmd is located: which s3cmd. For instance, it could be /usr/local/bin/.
– Then insert the path near the top of the script: PATH=$PATH:/usr/local/bin/

You also need to change a piece of code:
– change these two lines:
week_days=(day_1 day_2 day_3 day_4 day_5 day_6 day_7)
for i in "${week_days[@]}"
to one line of code:
for i in day_1 day_2 day_3 day_4 day_5 day_6 day_7
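The resulting loop, with its body, would look like this (the echo stands in for whatever the script does with each day's folder):

```shell
# Iterate over the seven daily backup folder names directly.
for i in day_1 day_2 day_3 day_4 day_5 day_6 day_7; do
  echo "$i"   # placeholder for the script's per-day processing
done
```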

8) Test your backups manually

Now we’re all set. Let’s test our configuration in manual mode.

# create a new monthly backup, stored as a "month_n" folder in your S3 bucket, where n is the month number within the year

/backups/ month

# create a new weekly backup, stored as a "week_n" folder in your S3 bucket, where n is the week number within the year

/backups/ week

# create a new daily differential backup, stored as a "day_n" folder in your S3 bucket, where n is the day number of the week (1-7)

/backups/ day
Then go to your Amazon S3 in the browser and look at your bucket. If the folders and files are there, everything is working. Congratulations!
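As a guess at how those folder numbers can be derived with GNU date (this is an assumption about the script's internals, shown for orientation only):

```shell
# Month, ISO week, and weekday numbers used in the backup folder names.
MONTH_DIR="month_$(date +%-m)"   # month number within the year, e.g. month_7
WEEK_DIR="week_$(date +%-V)"     # week number within the year
DAY_DIR="day_$(date +%u)"        # day of week, 1 (Monday) to 7 (Sunday)
echo "$MONTH_DIR $WEEK_DIR $DAY_DIR"
```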

9) Automate it

Now it's time to make it run automatically, without our involvement.
– run

crontab -e

to edit your cron
– and insert this line: 0 3 * * * sh /backups/ auto

In this case, your MySQL backup will run every day at 3:00 AM.

I also recommend periodically checking that it is still working by opening Amazon S3 in the browser and looking at the latest backups. If they are fresh, everything is OK; if not, go and fix it ASAP.