How to Set Up Automated Backups
Schedule automated backups of files and databases using rsync, rclone, and mysqldump. Send backups offsite to S3 or Backblaze B2.
Backups that depend on someone remembering to click a button don't happen; the only backups that work are the ones that run themselves. This guide, written for GoZen VPS and dedicated server users, sets up automated file and database backups with offsite storage, so when something breaks at 3 AM you have a clean copy to restore from.
What to Back Up
| Data | Where It Lives | How Often |
|---|---|---|
| Website files | /var/www/ or /home/*/public_html/ | Daily |
| Databases | MySQL/MariaDB, PostgreSQL | Daily (or more often for busy sites) |
| Server config | /etc/nginx/, /etc/apache2/, /etc/php/ | Weekly or after changes |
| SSL certificates | /etc/letsencrypt/ | Weekly |
| Cron jobs | crontab -l output | Weekly |
| Mail data | /var/mail/ or /home/*/Maildir/ | Daily (if hosting email) |
Backups on the same server aren’t real backups. If the disk dies, your backups die with it. Always copy backups offsite - another server, S3, Backblaze B2, or at minimum a different disk.
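For the cron jobs row, no special tooling is needed - crontab -l prints plain text, so a dated one-liner captures it (this assumes the /backup/config directory used later in this guide already exists):
crontab -l > /backup/config/crontab_$(date +%Y%m%d).txt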
Database Backups with mysqldump
Single Database
mysqldump -u root -p database_name > /backup/database_name_$(date +%Y%m%d).sql
All Databases
mysqldump -u root -p --all-databases > /backup/all_databases_$(date +%Y%m%d).sql
Skip the Password Prompt
Create a credentials file so scripts can run without interaction:
sudo nano /root/.my.cnf
[mysqldump]
user=root
password=your_mysql_root_password
sudo chmod 600 /root/.my.cnf
Now mysqldump reads credentials automatically:
mysqldump --all-databases > /backup/all_databases_$(date +%Y%m%d).sql
Compress on the Fly
Database dumps compress extremely well - 90%+ reduction is common:
mysqldump --all-databases | gzip > /backup/all_databases_$(date +%Y%m%d).sql.gz
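The table above also lists PostgreSQL. If you run it, the rough equivalent of the command above is pg_dumpall - a sketch, assuming the default postgres superuser with peer authentication:
# Dump every PostgreSQL database, compressed
sudo -u postgres pg_dumpall | gzip > /backup/all_pg_$(date +%Y%m%d).sql.gz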
File Backups with rsync
rsync only copies files that changed since the last run, which makes incremental backups fast.
Local Backup
# Back up websites to a local backup directory
rsync -avz --delete /var/www/ /backup/www/
# Back up server configs
rsync -avz /etc/nginx/ /backup/nginx-config/
rsync -avz /etc/letsencrypt/ /backup/letsencrypt/
| Flag | What It Does |
|---|---|
| -a | Archive mode - preserves permissions, timestamps, symlinks |
| -v | Verbose - shows what's being copied |
| -z | Compress during transfer |
| --delete | Remove files in destination that no longer exist in source |
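Two more flags are worth knowing before pointing rsync at a large site: --exclude skips directories you can regenerate, and --dry-run previews what would change without copying anything. A hypothetical example:
# Preview a backup that skips a regenerable cache directory
rsync -avz --delete --dry-run --exclude='wp-content/cache/' /var/www/ /backup/www/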
Remote Backup (to Another Server)
rsync -avz -e "ssh -p 22" /var/www/ user@backup-server:/backup/www/
Set up SSH key authentication between servers so this runs without a password prompt (see Connecting via SSH).
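If key authentication isn't set up yet, a minimal sketch (the user and hostname are placeholders):
# Generate a key with no passphrase for unattended use
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Install the public key on the backup server, then verify the login
ssh-copy-id -p 22 user@backup-server
ssh -p 22 user@backup-server 'echo key login works'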
Offsite Backups with rclone
rclone syncs files to cloud storage - S3, Backblaze B2, Google Drive, Wasabi, and 40+ others. It’s like rsync for cloud storage.
Install rclone
sudo apt install rclone -y # Ubuntu/Debian
sudo dnf install rclone -y # Rocky/AlmaLinux
# Or install the latest version directly
curl https://rclone.org/install.sh | sudo bash
Configure a Remote
rclone config
This walks you through an interactive setup. For Backblaze B2 (cheap, reliable, great for backups):
- Choose n for a new remote
- Name it b2
- Select Backblaze B2 from the list
- Enter your Application Key ID and Application Key
- Leave the other settings at their defaults
For Amazon S3:
- Choose n for a new remote
- Name it s3
- Select Amazon S3 from the list
- Enter your Access Key ID and Secret Access Key
- Choose your region
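The wizard is the easiest path, but remotes can also be created non-interactively with rclone config create, which is useful in provisioning scripts (the credentials shown are placeholders):
rclone config create b2 b2 account YOUR_KEY_ID key YOUR_APP_KEY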
Test the Connection
# List buckets
rclone lsd b2:
# List files in a bucket
rclone ls b2:my-backup-bucket
Sync Files to Cloud Storage
# Push local backup directory to B2
rclone sync /backup/ b2:my-backup-bucket/servername/ --progress
# Sync only website files
rclone sync /var/www/ b2:my-backup-bucket/www/ --progress
# Sync with bandwidth limit (don't saturate your connection)
rclone sync /backup/ b2:my-backup-bucket/ --bwlimit 50M
rclone sync makes the destination match the source. Files deleted locally get deleted remotely too. If you want to keep old versions, use rclone copy instead or enable versioning on your bucket.
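For example, to upload new and changed files without ever deleting anything on the remote side:
# copy adds and updates files but never deletes on the remote
rclone copy /backup/ b2:my-backup-bucket/servername/ --progress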
The Complete Backup Script
Here’s a production-ready script that backs up files and databases, compresses everything, and syncs to cloud storage:
sudo nano /usr/local/bin/backup.sh
#!/bin/bash
set -euo pipefail
# --- Configuration ---
BACKUP_DIR="/backup"
RETENTION_DAYS=14
DATE=$(date +%Y%m%d-%H%M%S)
RCLONE_REMOTE="b2:my-backup-bucket/$(hostname)"
LOG="/var/log/backup.log"
echo "=== Backup started at $(date) ===" >> "$LOG"
# --- Create backup directories ---
mkdir -p "$BACKUP_DIR/databases"
mkdir -p "$BACKUP_DIR/files"
mkdir -p "$BACKUP_DIR/config"
# --- Database backup ---
echo "Backing up databases..." >> "$LOG"
mysqldump --all-databases --single-transaction | gzip > "$BACKUP_DIR/databases/all_db_$DATE.sql.gz"
echo "Database backup: $(du -sh "$BACKUP_DIR/databases/all_db_$DATE.sql.gz" | cut -f1)" >> "$LOG"
# --- File backup ---
echo "Backing up website files..." >> "$LOG"
tar -czf "$BACKUP_DIR/files/www_$DATE.tar.gz" /var/www/ 2>/dev/null
echo "File backup: $(du -sh "$BACKUP_DIR/files/www_$DATE.tar.gz" | cut -f1)" >> "$LOG"
# --- Config backup ---
echo "Backing up server config..." >> "$LOG"
tar -czf "$BACKUP_DIR/config/config_$DATE.tar.gz" \
/etc/nginx/ \
/etc/letsencrypt/ \
/etc/php/ \
/etc/systemd/system/*.service \
2>/dev/null
echo "Config backup: $(du -sh "$BACKUP_DIR/config/config_$DATE.tar.gz" | cut -f1)" >> "$LOG"
# --- Clean old local backups ---
echo "Cleaning backups older than $RETENTION_DAYS days..." >> "$LOG"
find "$BACKUP_DIR" -type f -name "*.gz" -mtime +$RETENTION_DAYS -delete
# --- Sync to cloud ---
echo "Syncing to cloud storage..." >> "$LOG"
rclone copy "$BACKUP_DIR" "$RCLONE_REMOTE" --log-file="$LOG" --log-level INFO
echo "=== Backup completed at $(date) ===" >> "$LOG"
echo "" >> "$LOG"
sudo chmod +x /usr/local/bin/backup.sh
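Run the script once by hand before scheduling it, and check the log for errors:
sudo /usr/local/bin/backup.sh
tail -20 /var/log/backup.log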
Key Details
- --single-transaction dumps InnoDB tables without locking them, so your sites stay online during the backup
- set -euo pipefail stops the script on any error instead of silently continuing
- Old local backups get cleaned up after 14 days (RETENTION_DAYS)
- rclone copy (not sync) keeps old offsite backups even if they're deleted locally
Schedule It with Cron
sudo crontab -e
# Run full backup daily at 3:00 AM
0 3 * * * /usr/local/bin/backup.sh
# Backup databases every 6 hours (for busy sites)
0 */6 * * * mysqldump --all-databases --single-transaction | gzip > /backup/databases/db_$(date +\%Y\%m\%d-\%H\%M).sql.gz
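Cron mails a job's output to the local root account by default. The script sends its normal output to the log file, so adding a MAILTO line at the top of the crontab means you only get mail when something actually fails (the address is a placeholder):
MAILTO=admin@example.com
0 3 * * * /usr/local/bin/backup.sh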
Verify Backups Are Running
# Check the log
tail -50 /var/log/backup.log
# Check when backup last ran
ls -la /backup/databases/ | tail -5
# Check cloud storage
rclone ls b2:my-backup-bucket/$(hostname)/databases/ | tail -5
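Manual checks only help if someone remembers to run them. A sketch of a freshness check you could schedule alongside the backup - it assumes a working mail command and uses the paths from the script above:
#!/bin/bash
# Warn if no database dump has appeared in the last 25 hours
recent=$(find /backup/databases -name '*.sql.gz' -mmin -1500 | head -n 1)
if [ -z "$recent" ]; then
    echo "No database backup in the last 25 hours on $(hostname)" \
        | mail -s "Backup check FAILED" admin@example.com
fi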
Test Your Backups
A backup you’ve never tested is a backup that might not work. Test a restore at least once:
# Test a database restore (single-database dump into a temporary database)
mysql -u root -e "CREATE DATABASE test_restore_db;"
mysql -u root test_restore_db < /backup/database_name_20260413.sql
# Caution: an --all-databases dump contains CREATE DATABASE and USE
# statements and restores into the original database names, so test a
# full dump like all_db_20260413-030000.sql.gz on a scratch server only.
# Test file restore
mkdir /tmp/restore-test
tar -xzf /backup/files/www_20260413-030000.tar.gz -C /tmp/restore-test
ls /tmp/restore-test/var/www/
# Clean up
rm -rf /tmp/restore-test
mysql -u root -e "DROP DATABASE test_restore_db;"
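Even without a full restore, you can verify that the archives themselves aren't corrupt - gzip and tar can both read a file end to end without extracting it:
gzip -t /backup/databases/all_db_20260413-030000.sql.gz && echo "dump OK"
tar -tzf /backup/files/www_20260413-030000.tar.gz > /dev/null && echo "files OK"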
Backup Costs
Offsite storage is cheap. For a typical VPS with 20 GB of sites and databases:
| Provider | Monthly Cost (20 GB) | Notes |
|---|---|---|
| Backblaze B2 | ~$0.10 | $0.005/GB storage. Free egress to Cloudflare. |
| Wasabi | ~$6.99 | $0.0069/GB, but billed at a 1 TB minimum, so small backups still pay the full ~$6.99/month. No egress fees. |
| AWS S3 (Glacier) | ~$0.08 | Cheapest for long-term archival. Slow retrieval. |
| Another VPS | Varies | Good if you already have a second server. |
Troubleshooting
| Problem | Fix |
|---|---|
| mysqldump: “Access denied” | Check /root/.my.cnf credentials. Or specify user: mysqldump -u root -p. |
| rclone: “Failed to create file” | Check bucket permissions. Your API key needs read/write access. |
| Backup script runs but produces empty files | Check disk space: df -h. The backup directory might be full. |
| Cron job doesn’t run | Check cron syntax. Use crontab -l to verify. Check /var/log/syslog for cron errors. |
| Backups are too large | Exclude cache directories: add --exclude='wp-content/cache' to tar or rsync. |
| rclone is too slow | Add --transfers 4 for parallel uploads. Use --bwlimit to avoid saturating bandwidth during peak hours. |
Last updated 21 Apr 2026, 08:08 +0300.