# Do you back up your VPS?



## cbrace (Oct 26, 2012)

The title says it all. Do you make backups of your VPS? If so, how?

I used tarsnap for a while. Great concept, great support, the price is right... but the implementation is so gawd-awful slow that it is basically unusable.

Your solutions?


----------



## SirDice (Oct 26, 2012)

I mainly back up just the data in the MySQL database. Everything else can simply be restored by reinstalling. I don't care if it takes me a while, but then again, my site isn't that important.


----------



## xtaz (Oct 26, 2012)

I don't have a VPS as such, it's a proper dedicated server. But I use duplicity along with a few front-end shell scripts I wrote to back up my stuff to Amazon S3 cloud storage. I basically have a filelist.txt file which duplicity reads to find out what to back up. This file covers a dump of my MySQL databases, my kernel config file, /boot/loader.conf, copies of my crontabs, a list of which ports are installed, my home directory (which contains my web docroot), /usr/local/etc, /etc, /var/db/ports, and a couple of other config files dotted around the place like /usr/local/www/phpMyAdmin/config.inc.php.

The basic idea is that I would rather install a new server from scratch, compile the custom kernel, install the ports from scratch allowing them to drag in all the dependencies again, and then copy back all the config files and my home directory, and restore the MySQL databases. I've actually done this recently after replacing my server hardware and it took about four hours in total.

I perform a full backup every 30 days and incremental backups every day. And every two months I delete the oldest backup chain, so that I always have two full+incremental chains I can use.

My Amazon S3 storage costs are approx 13 US cents per month because I am only backing up the important things and not just every binary on the system and other useless junk.
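For anyone wanting to reproduce this kind of scheme, a rough crontab sketch might look as follows. The bucket name, file list path and GnuPG key ID are all placeholders, not taken from xtaz's actual setup:

```shell
# Hypothetical crontab fragment. Daily run: duplicity makes an incremental
# backup, or a fresh full backup once the last full one is 30 days old.
# --exclude '**' makes the filelist authoritative about what gets saved.
0 4 * * * duplicity --full-if-older-than 30D --include-filelist /root/filelist.txt --exclude '**' --encrypt-key DEADBEEF / s3+http://my-backup-bucket

# Monthly prune: keep only the two most recent full+incremental chains.
0 5 1 * * duplicity remove-all-but-n-full 2 --force s3+http://my-backup-bucket
```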


----------



## olav (Oct 26, 2012)

Yes, I use rsync and take hourly ZFS snapshots on the backup server.
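A minimal sketch of that arrangement as a crontab entry on the backup server, with the host, pool and dataset names invented for illustration:

```shell
# Hypothetical hourly job: pull the VPS's filesystem with rsync, then
# freeze the result in a ZFS snapshot named after the current hour.
# Note that % must be escaped as \% inside a crontab entry.
0 * * * * rsync -az --delete root@vps.example.com:/ /tank/backups/vps/ && zfs snapshot tank/backups/vps@`date +\%Y\%m\%d-\%H`
```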


----------



## swa (Oct 27, 2012)

A couple of weeks ago I started using Amanda for daily offline backups. Amanda is configured to use dump for /, /usr and /var. Directories I want to exclude (like /usr/ports) have the nodump flag set. For jails, Amanda is configured to use gnutar with an exclude list.
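The nodump trick is independent of Amanda and worth knowing on its own. A hedged sketch (the paths and dump arguments are illustrative, not swa's actual configuration):

```shell
# Hypothetical sketch: set the nodump flag recursively so dump(8) skips the tree.
chflags -R nodump /usr/ports

# By default dump only honours nodump on incremental dumps (honour level 1);
# pass -h 0 to make a level 0 dump respect the flag as well.
dump -0uanL -h 0 -f /backup/usr.dump /usr
```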


----------



## cbrace (Oct 27, 2012)

Thanks for the helpful info.





			
xtaz said:

> This file contains a dump of my MySQL databases


I try not to ask questions here which might reveal that I am a complete imbecile, but what is the difference between a MySQL dump and the contents of /var/db/mysql? Previously I thought I was safe simply backing up that directory. My interaction with MySQL is mostly limited to using databases/phpMyAdmin to set up DBs for apps like www/joomla25 and email/roundcube.


----------



## AlexJ (Oct 28, 2012)

cbrace said:

> but what is the difference between a MySQL dump and the contents of /var/db/mysql?


There could be changes to the database while you are copying MySQL's database files (MySQL makes extensive use of RAM to cache data and synchronizes it to the files slowly). It is dangerous to just copy the files while MySQL is running, but if you stop it before copying, then that is the fastest way to do a backup. Use *mysqldump* if you don't want to temporarily shut down MySQL while doing the backup; it locks the database tables and guarantees an atomic operation.
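To make that concrete, a sketch of the two usual mysqldump invocations (the output paths are made up; which flag is appropriate depends on your storage engine):

```shell
# Locks all tables for the duration of the dump -- the safe choice for MyISAM.
mysqldump --all-databases --lock-all-tables > /backup/all-databases.sql

# Consistent snapshot without locking, but only reliable for InnoDB tables.
mysqldump --all-databases --single-transaction > /backup/all-databases.sql
```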

As for VPS backups, take a look at /usr/ports/sysutils/rsnapshot. It is /usr/ports/net/rsync wrapped in a Perl script that allows automated backups.


----------



## gkontos (Oct 28, 2012)

cbrace said:

> Thanks for the helpful info. I try not to ask questions here which might reveal that I am a complete imbecile, but what is the difference between a MySQL dump and the contents of /var/db/mysql? Previously I thought I was safe simply backing up that directory. My interaction with MySQL is mostly limited to using databases/phpMyAdmin to set up DBs for apps like www/joomla25 and email/roundcube.



It really depends on which engine your databases are running. With MyISAM you can simply back up the whole directory. InnoDB works differently and keeps a buffer pool in memory.

On large databases even mysqldump is not safe without locking the tables first. 

AutoMySQLBackup is a very nice utility that lets you set up a database backup plan.


----------



## _martin (Oct 28, 2012)

I'm not running a DB on my FreeBSD boxes. I'm using tar for backing up system settings, application settings and some of the private data (not audio/video). I've created a small script for it .. nothing fancy, but it serves its purpose:

create_backup.sh

```
#!/bin/sh
#
# $FreeBSD$
#
CONTENT_FILE="/hq/sysbk/archive_content"
EXCLUDE="/hq/sysbk/exclude_content"
CUR_FILENAME="$(hostname -s)-$(date +%Y%m%d).tar"

# tar is called with the append option (-r); make sure a .tar with this name does not already exist
if [ -f "${CUR_FILENAME}" ]; then
        printf "removing old backup - %s\n" "${CUR_FILENAME}"
        rm "${CUR_FILENAME}"
fi

# back up every non-empty, non-comment line listed in CONTENT_FILE
for item in $(grep -vE '^$|^#' "${CONTENT_FILE}"); do
        tar -vpr --check-links -X "${EXCLUDE}" -f "${CUR_FILENAME}" "${item}"
done
```

Where CONTENT_FILE dictates what to save, EXCLUDE what to exclude (if anything).

As an example: 

CONTENT_FILE

```
/root/
/usr/home/
```

EXCLUDE

```
/usr/home/martin/portal/
```


----------



## NewGuy (Oct 28, 2012)

Yes, I do weekly backups. The databases are archived and downloaded using curl. The rest of the data on the server is backed up using a combination of curlftpfs and rsync. The whole backup process is handled by a script run via cron so I don't have to think about it. I just check every few weeks to make sure everything continues to run on schedule.

Once a backup has been downloaded/rsynced, I take a snapshot of the ZFS volume in case I want to roll back at any point to a previous week's backup. Again, the snapshot is handled by a cron job.

Just as important as the backup itself, make sure you test the restore process. Every so often it is a good idea to take your archive, set up a server (maybe in a VM) and try to get your server up and running. See how long it takes, what pieces might be missing, can you do it all quickly or would the process benefit from documentation? Doing one of these restores and pretending you've lost your server is a good way to find out what works and what doesn't.
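On a ZFS backup host like the one described above, one cheap way to rehearse a restore without touching the live data is to clone an old snapshot. The dataset and snapshot names here are invented:

```shell
# Hypothetical: expose a previous week's backup read-write for a test restore.
zfs clone tank/backups/web@2012-10-21 tank/restore-test

# ...point a VM at /tank/restore-test and walk through the restore...

# The clone costs almost nothing and can be thrown away afterwards.
zfs destroy tank/restore-test
```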


----------



## joesmoe (Oct 28, 2012)

Irrespective of the actual hardware involved, I religiously back up any important file I have anywhere.

Tarsnap isn't bad, but IMHO it's a bit overpriced for the storage.

I take a mysqldump of all the databases on my primary machines four times daily (they don't have large databases and do have fast hard drives, so this is only a one-minute process).

Then I use duplicity to encrypt and upload all my files (including the mysqldumps) to both S3 and a backup machine that I have sitting on a mediocre connection (the only reason I use S3 is speed of recovery of large files).

I used rsnapshot for a long time, with its infamous zillion symlinks, and while it does get the job done, I find that with more than a server or two to back up it just becomes a PITA.

The other nice thing about duplicity is that it supports snapshots, compression, and backups over many protocols (rsync, FTP, S3, ...), and if only a few things have changed, only a few things are transferred.

The only issue I have right now is that S3 storage is way too expensive, while my backup server is only on a cable connection and thus way too slow for restore purposes (some of my servers are pushing 100GB+ of important data).


----------



## jb_fvwm2 (Oct 28, 2012)

FWIW, another stumbling block to a successful restore is sometimes preparing the target medium (a new disk?) properly so that an rsync or whatever has the same filesystem scheme to restore onto. That could be problematic with recent changes to GEOM, drivers, etc... especially if for some reason your backed-up v8 system needs restoring to a new v9 disk or something...


----------



## cbrace (Oct 28, 2012)

xtaz said:

> This file contains a dump of my MySQL databases, my kernel config file, /boot/loader.conf, copies of my crontabs, *a list of which ports are installed*


What is the best way to back up a listing of installed ports? Ideally in the form of a file one can use to reinstall all ports in one go. I already make a backup of /var/db/pkg, so I don't have to go through all the settings again.

In the wiki, I found this:


> TODO
> Add portsbackup and portsrestore utilites. They will save ports data (/etc/make.conf, /var/db/pkg, /var/db/ports, /usr/local/etc) and restore all ports on another computer by rebuilding them from the ports tree.


Sounds like a great idea, but it appears to still be in the realm of wishful thinking. FWIW, there is a very simple utility included in Linux Mint, which I have installed on my Asus netbook, that allows you to save a list of all the packages you have installed. After doing a clean reinstall you can then load the file and install all the .deb packages again. Is there anything comparable for FreeBSD?


----------



## joesmoe (Oct 28, 2012)

Maybe pkg_info > some-file to catalog all your currently installed stuff. Not sure how you'd save all the config options though.


----------



## wblock@ (Oct 28, 2012)

joesmoe said:

> Maybe pkg_info > some-file to catalog all your currently installed stuff. Not sure how you'd save all the config options though.



Make a backup copy of /usr/local/etc/.  Also /etc/, mostly for rc.conf.

The ports-mgmt/portmaster man page (portmaster(8)) has a complete procedure for deleting and reinstalling all ports. It can be used to install ports on a new machine; just copy installed-port-list to that machine from the old one.


----------



## xtaz (Oct 28, 2012)

cbrace said:

> What is the best way to backup a listing of ports installed? Ideally in the form of a file one can use to reinstall all ports in at one go. I already make a back up of /var/db/pkg, so I don't have to go through all the settings again.




```
portmaster --list-origins > ports-list
portmaster `cat ports-list`
```


----------



## cbrace (Oct 28, 2012)

Hmmm.... "portmaster --list-origins" doesn't seem to catch everything. The resulting "ports-list" contains only 43 ports, but I have more than that installed.

For example:

```
$ cat ports-list | grep php53        
ftp/php53-curl
graphics/php53-exif
sysutils/php53-fileinfo
```

But:

```
$ ls -d1 /var/db/pkg/php53*        
/var/db/pkg/php53-5.3.17
/var/db/pkg/php53-bz2-5.3.17
/var/db/pkg/php53-ctype-5.3.17
/var/db/pkg/php53-curl-5.3.17
/var/db/pkg/php53-dom-5.3.17
/var/db/pkg/php53-exif-5.3.17
/var/db/pkg/php53-fileinfo-5.3.17
/var/db/pkg/php53-filter-5.3.17
/var/db/pkg/php53-gd-5.3.17
/var/db/pkg/php53-gettext-5.3.17
/var/db/pkg/php53-hash-5.3.17
/var/db/pkg/php53-iconv-5.3.17
/var/db/pkg/php53-json-5.3.17
/var/db/pkg/php53-ldap-5.3.17
/var/db/pkg/php53-mbstring-5.3.17
/var/db/pkg/php53-mcrypt-5.3.17
/var/db/pkg/php53-mysql-5.3.17
/var/db/pkg/php53-mysqli-5.3.17
/var/db/pkg/php53-openssl-5.3.17
/var/db/pkg/php53-session-5.3.17
/var/db/pkg/php53-simplexml-5.3.17
/var/db/pkg/php53-xml-5.3.17
/var/db/pkg/php53-zip-5.3.17
/var/db/pkg/php53-zlib-5.3.17
```
Seems kinda random or what?


----------



## wblock@ (Oct 28, 2012)

Those are "leaf" ports.  When installed, dependencies they require will also be installed.  At the end, everything is back the way it was.
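One way to convince yourself of this is to compare the leaf listing with the full package list (the file names are arbitrary):

```shell
# Every installed port, dependencies included -- one origin per line.
pkg_info -qoa > all-ports

# Only "leaf" ports; their dependencies get pulled back in automatically
# when the leaves are rebuilt.
portmaster --list-origins > leaf-ports
```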


----------



## SirDice (Oct 30, 2012)

cbrace said:

> I already make a back up of /var/db/pkg, so I don't have to go through all the settings again.


Packages don't have settings. The only thing you backed up is a list of installed ports/packages. If you built from ports, the settings you're looking for are in /var/db/ports/.


----------

