Introduction

Backups are your last line of defense against data loss. While Hostxpeed provides snapshots, offsite backups add extra protection. This guide shows how to automate backups using rsync (fast, incremental) and cron (scheduling) to a second VPS or cloud storage.

Why Rsync for Backups?

Rsync transfers only the changed parts of files, making it efficient for frequent backups. It supports compression (--compress), encryption in transit via SSH, and preserves permissions, ownership, and timestamps. Compared to scp or FTP, subsequent rsync runs are often 10-100x faster because only deltas move over the network. It can also delete remote files that no longer exist locally (--delete), keeping the backup clean.

Prerequisites

Two VPSes (or a VPS plus external storage): a source VPS (your production server) and a destination VPS (the backup server). The destination should sit in a different data center for disaster recovery. Both must run SSH with key authentication. Ensure the destination has enough storage (at least source size × number of backups kept; hard-linked incrementals usually need far less). Install rsync on both: sudo apt install rsync.

Step 1: Set Up SSH Key for Passwordless Access

On the source VPS (as the backup user): ssh-keygen -t ed25519 -f /home/backupuser/.ssh/id_ed25519 -N "". Then ssh-copy-id backupuser@destination_ip. Test with ssh backupuser@destination_ip; it should log in without a password. For security, restrict the backup user's key on the destination: edit ~/.ssh/authorized_keys and prefix the key with a forced command (command="...") so that only rsync can be run with it. This is optional but worthwhile for an unattended, passphrase-less key.
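A minimal sketch of the key setup, run locally for illustration; the key path, the /backups restriction path, and the rrsync location are assumptions (rrsync ships with the rsync package, but its path varies by distro; older Debian releases keep it gzipped under /usr/share/doc/rsync/scripts/):

```shell
# Generate a dedicated, passphrase-less backup key (paths are placeholders).
KEYDIR=$(mktemp -d)
ssh-keygen -t ed25519 -f "$KEYDIR/id_ed25519" -N "" -q -C "backup@$(hostname)"

# Build the restricted authorized_keys entry for the destination. The
# forced command (rrsync here, an assumed path) limits this key to rsync
# transfers under /backups only.
printf 'command="/usr/share/rsync/scripts/rrsync /backups",no-pty,no-agent-forwarding,no-port-forwarding %s\n' \
  "$(cat "$KEYDIR/id_ed25519.pub")" > "$KEYDIR/authorized_keys.example"
```

Append the generated line to ~/.ssh/authorized_keys on the destination instead of the unrestricted key that ssh-copy-id installs.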

Step 2: Create Backup Script

Create /usr/local/bin/backup.sh on the source VPS. Set #!/bin/bash, SOURCE_DIRS="/etc /home /var/www /var/lib/mysql", DEST_USER="backupuser", DEST_HOST="backup.example.com", and DEST_PATH="/backups/$(hostname)/$(date +%Y%m%d_%H%M%S)". Run rsync -avz --delete --link-dest=/backups/$(hostname)/latest $SOURCE_DIRS $DEST_USER@$DEST_HOST:$DEST_PATH, then update the "latest" symlink: ssh $DEST_USER@$DEST_HOST "ln -snf $DEST_PATH /backups/$(hostname)/latest". --link-dest hard-links unchanged files against the previous backup, so each run is incremental in space but appears as a full snapshot. Note: copying /var/lib/mysql while the database is running can yield an inconsistent copy; prefer a dump (Step 6).
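Assembled into one file, the script described above might look like this sketch (the host, user, and paths are placeholders; it is written to /tmp here for illustration, whereas the guide installs it at /usr/local/bin/backup.sh):

```shell
# Write the backup script to disk (adjust variables before real use).
cat > /tmp/backup.sh <<'EOF'
#!/bin/bash
set -euo pipefail

SOURCE_DIRS="/etc /home /var/www"           # add dump files from Step 6
DEST_USER="backupuser"                      # placeholder
DEST_HOST="backup.example.com"              # placeholder
BASE="/backups/$(hostname)"
STAMP="$(date +%Y%m%d_%H%M%S)"
DEST_PATH="$BASE/$STAMP"

# --link-dest must be an absolute path on the destination; unchanged
# files are hard-linked against the previous snapshot, so each run is
# incremental in space but browsable as a full backup.
rsync -az --delete \
  --link-dest="$BASE/latest" \
  $SOURCE_DIRS "$DEST_USER@$DEST_HOST:$DEST_PATH/"

# Repoint "latest" at the snapshot just written.
ssh "$DEST_USER@$DEST_HOST" "ln -snf '$DEST_PATH' '$BASE/latest'"
EOF
chmod +x /tmp/backup.sh
```

$SOURCE_DIRS is deliberately unquoted so the shell splits it into separate rsync arguments.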

Step 3: Exclude Unnecessary Directories

As root, you will likely want to exclude pseudo-filesystems and volatile data: --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp --exclude=/var/cache --exclude=/var/log (or rotate logs separately). Also exclude the backup directory itself if you back up from within it. Example: rsync --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*","/var/cache/*","/var/log/*"} -avz / $DEST_USER@$DEST_HOST:$DEST_PATH. For databases, dump first (mysqldump --all-databases > /tmp/all-db.sql) and let rsync pick up the dump file rather than copying live database files.
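The single --exclude={...} argument shown above relies on bash brace expansion, which turns it into one --exclude flag per pattern before rsync ever sees it. A quick way to inspect the expansion:

```shell
# Brace expansion happens in bash, not in rsync; the quotes around each
# pattern stop the shell from also glob-expanding the '*' characters.
EXCLUDES=(--exclude={"/proc/*","/sys/*","/dev/*","/tmp/*","/var/cache/*","/var/log/*"})

# Print one expanded --exclude flag per line.
printf '%s\n' "${EXCLUDES[@]}"
```

Under a plain POSIX sh (e.g., dash), the braces pass through literally, so keep such one-liners inside bash scripts.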

Step 4: Encrypt Backup (Optional)

If the destination is not trusted, encrypt before transfer: tar -czf - /etc /home | gpg --symmetric --cipher-algo AES256 --passphrase-file /root/backup.passphrase | ssh backupuser@dest "dd of=backup.tar.gz.gpg". Note that rsync over SSH already encrypts data in transit; GPG protects the data at rest on the destination. Alternatively, use LUKS or eCryptfs on the destination, or cloud storage with server-side encryption (e.g., AWS S3 with KMS).
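The pipeline above can be wrapped as a function and sourced from backup.sh. This is a sketch: the host, passphrase file, and remote filename are placeholders, and the passphrase file should be root-owned with mode 600.

```shell
# Encrypt-then-push sketch; defined as a function, not executed here.
encrypted_push() {
  local passfile=/root/backup.passphrase            # placeholder, chmod 600
  local dest=backupuser@backup.example.com          # placeholder host

  # tar streams to gpg, gpg streams ciphertext over ssh; nothing
  # unencrypted ever touches the destination disk.
  tar -czf - /etc /home \
    | gpg --symmetric --cipher-algo AES256 --batch \
          --passphrase-file "$passfile" \
    | ssh "$dest" "dd of=backup-$(date +%Y%m%d).tar.gz.gpg"
}
```

Restoring reverses the pipe: ssh cat the file, gpg --decrypt, then tar -xzf -.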

Step 5: Automate with Cron

Schedule a daily backup: sudo crontab -e. Add: 0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1 (2am daily, with logging). Monitor the log for errors. For weekly full + daily incremental, modify the script accordingly. Also use cron on the destination to rotate backups (delete snapshots older than 30 days): find /backups/$(hostname)/ -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +. Since crontab treats % specially and a bare ; needs escaping, keep the rotation command in a small script invoked by cron rather than inline.
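A rotation sketch, demonstrated here on a throwaway directory tree so it is safe to run (in production the path would be /backups/$(hostname)/ on the destination):

```shell
# Suggested crontab entries (source VPS, then destination VPS):
#   0 2 * * *  /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
#   30 3 * * * /usr/local/bin/rotate-backups.sh

# Demo tree: one snapshot aged 40 days, one fresh.
BASE=$(mktemp -d)
mkdir -p "$BASE/20240101_020000" "$BASE/20240601_020000"
touch -d '40 days ago' "$BASE/20240101_020000"

# -maxdepth 1 stops find descending into snapshots it is deleting, and
# "-exec ... +" sidesteps the ';' escaping pitfall entirely.
find "$BASE" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
```

After running, only the fresh snapshot remains; the 40-day-old one is gone.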

Step 6: Backup Database Consistency

For MySQL, use --single-transaction (a consistent InnoDB snapshot without locking) and --skip-lock-tables during the dump. Create a script: mysqldump -u root -p"$MYSQL_PASSWORD" --all-databases --single-transaction > /tmp/mysql_dump.sql. Include /tmp/mysql_dump.sql in the backup and remove the temp file afterwards. For PostgreSQL: pg_dumpall > /tmp/pg_dump.sql. For MongoDB: mongodump --out /tmp/mongodump. Ensure the dump finishes before rsync starts, and lock tables briefly only if you still use non-transactional engines.
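The dumps can be collected as small helper functions, defined once and called from backup.sh before the rsync step. A sketch; credentials, the sudo user, and output paths are placeholders:

```shell
# Consistent-dump helpers (defined, not executed here).
dump_mysql() {
  # --single-transaction: consistent InnoDB snapshot without locking;
  # --skip-lock-tables: also avoid locking MyISAM tables.
  mysqldump -u root -p"$MYSQL_PASSWORD" --all-databases \
    --single-transaction --skip-lock-tables > /tmp/mysql_dump.sql
}

dump_postgres() {
  sudo -u postgres pg_dumpall > /tmp/pg_dump.sql
}

dump_mongo() {
  mongodump --out /tmp/mongodump
}
```

Call the relevant function at the top of backup.sh, rsync the dump files, then delete them.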

Step 7: Backup Restoration Procedure

Test restoration regularly. To restore a full system: rsync -avz backupuser@dest:/backups/hostname/latest/ /. For a database: mysql -u root -p < /path/mysql_dump.sql. For individual files, locate them in the backup directory and copy them back. Document the procedure. Consider restoring to a new VPS (Hostxpeed offers instant deployment from backup). Always do a dry run first: rsync -avz --dry-run ... (-n is the short form) to see what would change.
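The restore steps, sketched as functions (host and paths are placeholders); the dry-run form is deliberately the default so the file list gets reviewed before anything is overwritten:

```shell
# Restore sketch (defined, not executed here).
restore_full() {
  # Preview what would change; -n is the short form of --dry-run.
  rsync -avz --dry-run \
    "backupuser@backup.example.com:/backups/$(hostname)/latest/" /
  # Happy with the preview? Re-run the same command without --dry-run.
}

restore_mysql() {
  mysql -u root -p < /tmp/mysql_dump.sql   # path to the dump from Step 6
}
```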

Step 8: Offsite to Cloud Storage (S3, B2, etc.)

Instead of a second VPS, use s3cmd or rclone to push backups to cloud storage. Install rclone and configure a remote (e.g., s3:mybucket). Modify the script: tar -czf /tmp/backup.tar.gz /etc /home && rclone copy /tmp/backup.tar.gz s3:mybucket/$(hostname)/$(date +%Y%m%d). You pay per GB stored, but there is no second server to maintain. Hostxpeed also provides built-in backups (included), but offsite copies add extra redundancy.
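Wrapped as a function for backup.sh; "s3" is an assumed rclone remote name (created beforehand with rclone config) and the bucket is a placeholder:

```shell
# Cloud push sketch (defined, not executed here).
cloud_backup() {
  local archive=/tmp/backup.tar.gz

  # Bundle, push one dated object per day, then clean up locally.
  tar -czf "$archive" /etc /home
  rclone copy "$archive" "s3:mybucket/$(hostname)/$(date +%Y%m%d)/"
  rm -f "$archive"
}
```

rclone's own retention tooling (e.g., rclone delete with --min-age) can handle rotation on the bucket side.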

Step 9: Monitoring Backup Success

Add an email alert immediately after the rsync command: if [ $? -ne 0 ]; then echo "Backup failed" | mail -s "Backup Error" admin@example.com; fi (note $? holds the exit status of the most recent command, so nothing may run in between). Also integrate with Netdata (a custom alarm can check the backup file's age), or use healthchecks.io cron monitoring to alert when a backup didn't run at all. The Hostxpeed dashboard shows snapshot status but not custom backups.
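A reusable failure-notification wrapper, demonstrated here against a command that is forced to fail so the logic is visible; the mail address and healthchecks URL are placeholders:

```shell
# Run a command; log and alert on failure (mail only if a mailer exists).
LOG=$(mktemp)
notify_on_failure() {
  if "$@" >>"$LOG" 2>&1; then
    # Success: optionally ping a dead-man's switch, e.g.:
    #   curl -fsS -m 10 https://hc-ping.com/<your-uuid>
    return 0
  fi
  echo "$(date -Is) backup failed: $*" >> "$LOG"
  command -v mail >/dev/null && \
    echo "Backup failed on $(hostname)" | mail -s "Backup Error" admin@example.com
  return 1
}

# Simulate a failing backup run; real use: notify_on_failure /usr/local/bin/backup.sh
notify_on_failure false || true
```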

Step 10: Backup Speed Optimization

Use rsync's -z compression if the network is slow (the -avz flags above already include it). Limit bandwidth with --bwlimit=5000; note that rsync's --bwlimit is in KiB/s by default, so 5000 is roughly 5 MB/s (about 40 Mbps), not 5 Mbps. Use --partial so interrupted transfers resume. For very large data sets, consider borgbackup (deduplication, compression, encryption) instead of raw rsync; Borg creates mountable archives and is very space-efficient. Install borgbackup, initialize a repository, then borg create --stats --progress ::{now} /etc /home (with BORG_REPO pointing at the initialized repo).
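A Borg sketch as a function; the repository path is a placeholder and a one-time borg init --encryption=repokey on that path is assumed:

```shell
# Borg backup sketch (defined, not executed here).
borg_backup() {
  # {hostname} and {now} are Borg's own archive-name placeholders.
  export BORG_REPO="backupuser@backup.example.com:/backups/borg"
  borg create --stats --progress ::'{hostname}-{now}' /etc /home

  # Built-in retention replaces the find-based rotation from Step 5.
  borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
}
```

borg mount ::ARCHIVE /mnt lets you browse any archive as a filesystem for selective restores.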

Real-World Examples

WordPress site: backup /var/www/html + /etc/nginx + /etc/php + MySQL dump. 10GB site, daily changes 200MB. Rsync incremental takes 2 minutes. Retention 30 days. E-commerce: add /var/log and payment gateway configs. Database backup before rsync ensures consistency. Game server: backup world files only (exclude logs). Use cron every 4 hours. Media site: backup metadata only, media stored on separate object storage (backup separately).

Troubleshooting Common Backup Issues

Incremental not working: check the --link-dest path; it must be an absolute path on the destination. Permission denied: ensure the backup user has read access to the source files (or run as root). rsync error 23: partial transfer, usually files that changed or vanished during the backup; quiesce or dump volatile data first, or accept it with --ignore-errors. Destination full: implement rotation (Step 5). SSH connection refused: check the firewall and SSH config. rsync handles large files (over 2GB) fine, but make sure the destination filesystem supports them.

Advanced: Encrypted Incremental Backups with Duplicity

Duplicity (built on librsync and GPG) provides encrypted incremental backups. Install: sudo apt install duplicity. Example: duplicity --encrypt-key KEYID --sign-key KEYID /etc scp://user@dest//backup. It handles increments and cleanup automatically. Downside: it slows down noticeably with large file counts (50k+). For most setups, rsync with hard links plus separate encryption is simpler.

Conclusion

Automated backups with rsync and cron provide reliable, efficient data protection. Test restoration quarterly. Keep at least 30 days of backups. Store offsite in different data center. Hostxpeed snapshots are fast but complement (not replace) custom backups. Start with daily backups of critical directories and database dumps. Add monitoring to catch failures early.