# Best installation for back-up/recovery purpose



## al mello (Dec 1, 2018)

Guys,

I've been running my FreeBSD server for a year now, actually two, and although I trust ZFS a lot - the main reason I moved to FreeBSD - I'm from a time when we kept three reels for each system... you guessed it, on a mainframe.

I do keep a good journal of all my configuration, so I'd be able to backtrack and re-install in case of a system failure, but that would be a painful and long recovery process.

I'm not sure if getting my root on a ZFS mirror, like https://wiki.freebsd.org/RootOnZFS/ZFSBootPartition, would reduce my risk, but I think that would require a fresh install (?).

Also looked at https://www.freebsd.org/cgi/man.cgi?dump(8) as an option to get me back faster and https://clonezilla.org/, but the latter didn't like my FreeBSD partitions and I couldn't make it work yet.

Any recommendation on a good back-up/recovery process? Is root on ZFS an option I should consider?

Thanks in advance for your thoughts!


----------



## obsigna (Dec 1, 2018)

There is a Howto about cloning ZFS disks on these forums: https://forums.freebsd.org/threads/clone-a-zfs-disk.57970/

Note, I have merely touched ZFS and cannot tell you anything definite about it. My take is that for backup purposes you want to use zfs send instead of zfs clone, because the latter produces the clone in the same ZFS hierarchy, and for obvious reasons you cannot simply take the backup volume away in order to store it in a safe place. I miss a synchronize option in zfs send; perhaps after doing the initial backup with that tool you want to use net/rsync to keep the backup up to date (see rsync(1): `rsync -axAHX --fileflags`).
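That said, as far as I understand it, incremental snapshot sends can stand in for the missing synchronize option; a minimal sketch, with made-up pool and dataset names:

```shell
# Take a point-in-time snapshot of the dataset to back up
# (pool/dataset names "tank/data" and "backup" are only examples)
zfs snapshot tank/data@monday

# Full send of the snapshot into a dataset on the backup pool
zfs send tank/data@monday | zfs receive backup/data

# Later: take a new snapshot and send only the delta since the last one
zfs snapshot tank/data@sunday
zfs send -i tank/data@monday tank/data@sunday | zfs receive backup/data
```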

I run a FreeBSD home server with 7 TB on 3 separate UFS disks. The initial file system cloning of 2.7 TB of data from the 3 TB disk took about 9 hours. I wrote my own file system cloning tool, clone(1), and synchronizing a backup clone with the live file system using it usually takes only 15 minutes. The tool is well tested on UFS; it might work with ZFS, however I guess it would be better to stay with rsync in your case.

Note, dump(8)/restore(8) and dd(1) are not well suited for cloning live file systems, while zfs send, rsync and clone are agnostic to whether the file system is busy or not, since these tools copy files atomically.

My backup strategy is as follows:

- I want to maintain 2 exact clones of each of the operating internal drives of my server. The clone disks are the same models as the internal ones; the point is that I would only need to swap the internal disk for a clone and the system would be up and running again in almost no time.
- I wrote a shell script, invoked as a nightly cron job for each of the file systems to be backed up, which checks whether a known backup drive is attached to the server (in my case to a USB port); if so, it mounts the respective backup disk at the mount point /back and starts my cloning tool clone in synchronization mode.
- The script unmounts the disk, and the next day I simply pull the USB plug from the server and put the cloned disk in a safe place.
- I tell the cron jobs to write backup diagnostics to /var/log/backup.log.

On my home server my backup frequency is weekly. When I was responsible for the servers of a company, the frequency was nightly, and the nice thing about USB backup disks which are automatically mounted and unmounted by the backup system is that plugging in disk A - unplugging disk A - plugging in disk B - storing away disk A can be done by a secretary.

The backup script for UFS disks follows (you would need to replace the clone command with the respective rsync one):

```
#!/bin/sh
#
#  Script for the automatic backup
#
#  Usage: backup.sh LAP MPA LBP MPB
#
#    LAP ($1)  GPT label of the working partition   -- example: "server"
#    MPA ($2)  mount point of the working partition -- example: "/"
#    LBP ($3)  GPT label of the backup partition    -- example: "BACK"
#    MPB ($4)  mount point of the backup partition  -- example: "/back"
#

if [ "$1" != "" ] && [ "$2" != "" ] && [ "$3" != "" ] && [ "$4" != "" ] ; then

   date "+%Y-%m-%d %H:%M:%S %Z: Backup start..."

   # 1. Check whether the working partition and its mount point are correct.
   LAP="$1"
   MPA=`echo "$2" | sed s:/:\\\\\\\/:g`
   atyp=`mount | sed -n "/\/dev\/gpt\/$LAP on $MPA (/{s///;s/, .*//;p;}"`

   if [ "$atyp" = "" ] ; then
      date "+%Y-%m-%d %H:%M:%S %Z: Abort - the working partition and/or its mount point are incorrect.%n"
      exit 1
   fi

   # 2. Check whether the GPT label of the backup partition exists.
   LBP="$3"
   if ! [ -c "/dev/gpt/$LBP" ] ; then
      date "+%Y-%m-%d %H:%M:%S %Z: Abort - the GPT label of the backup partition does not exist.%n"
      exit 2
   fi

   # 3. Check whether the backup partition is already mounted somewhere.
   btyp=`mount | sed -n "/\/dev\/gpt\/$LBP on .* (/{s///;s/, .*//;p;}"`
   if [ "$btyp" != "" ] ; then
      date "+%Y-%m-%d %H:%M:%S %Z: Abort - the backup partition was already mounted on a mount point.%n"
      exit 3
   fi

   # 4. Mount the backup partition on the given mount point ...
   mount -o noatime "/dev/gpt/$LBP" "$4"
   rc=$?
   MPB=`echo "$4" | sed s:/:\\\\\\\/:g`
   btyp=`mount | sed -n "/\/dev\/gpt\/$LBP on $MPB (/{s///;s/, .*//;p;}"`
   # ... and check whether mounting succeeded.
   if [ $rc -ne 0 ] || [ "$btyp" = "" ] ; then
      date "+%Y-%m-%d %H:%M:%S %Z: Abort - the backup partition could not be mounted on its mount point.%n"
      exit 4
   fi

   /usr/local/bin/clone -c rwoff -s -v0 -x .snap:.sujournal "$2" "$4"
   rc=$?
   if [ $rc -ne 0 ] ; then
      date "+%Y-%m-%d %H:%M:%S %Z: Backup finished - $rc errors occurred.%n"
   else
      date "+%Y-%m-%d %H:%M:%S %Z: Backup finished without errors.%n"
   fi
   umount "$4"

   exit $rc

else

   echo "Usage: $0 LAP MPA LBP MPB"
   exit 255

fi
```
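The script might be hooked into cron along these lines (the script path and the labels are only examples; substitute your own GPT labels and mount points):

```shell
# /etc/crontab entry -- nightly run at 02:30, diagnostics appended
# to /var/log/backup.log ("/root/bin" and the labels are placeholders)
30 2 * * * root /root/bin/backup.sh server / BACK /back >> /var/log/backup.log 2>&1
```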


----------



## ShelLuser (Dec 1, 2018)

First, keep well in mind that 'best' is highly subjective. Still, in the end it all boils down to reducing the risk of losing important data and, in the event of a calamity, getting things back up & running ASAP. It would have helped if you had told us more about your current setup. Am I right to assume that you're using UFS for your OS and combined that with a (mirrored) ZFS pool?

To be honest I'm not too sure I see much added value in putting root onto ZFS, that is: not within the context of your setup. You might be able to pull this off without re-installing, but that would still make your current root partition obsolete, so you'd be looking at resizing the other partition(s), which can be quite a drag. "_If it ain't broke..._"

`zfs send` and dump(8) should be quite suitable for maintaining solid backups.
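For the UFS side, a dump/restore cycle could look roughly like this (the device paths and mount points are only examples; `-L` tells dump(8) to work from a snapshot of the live file system):

```shell
# Level-0 dump of a live UFS root; -L snapshots the file system first,
# -a auto-sizes, -u records the dump in /etc/dumpdates
# (/backup/root.dump is a placeholder path)
dump -0Lauf /backup/root.dump /

# Restore onto a freshly newfs'ed disk mounted at /mnt
cd /mnt && restore -rf /backup/root.dump
```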


----------



## al mello (Dec 1, 2018)

Thanks guys for your replies.

As requested, the specs of the server:

- X9DRi-LN4F+
- Xeon(R) CPU E5-2650 v2, 32 cores
- 172 GB RAM
- Chelsio T320
- FreeBSD 11.2-RELEASE-p4 on amd64

My main server is configured with one boot ssd:



```
root@mellonas:~ # gpart show
=>        63  1875384945  ada0  MBR  (894G)
          63           1        - free -  (512B)
          64  1874853888     1  freebsd  [active]  (894G)
...
```



and a ZFS raidz2 volume for the data:



```
root@mellonas:~ # zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
raid  50.5T  25.8T  24.7T        -         -     4%    51%  1.00x  ONLINE  -
```



The data is rsync'ed to the back-up server, which has the same configuration; as for the bhyve VMs, a daily VM snapshot and a weekly full copy keep them safe.



obsigna said:


> I run a FreeBSD home server having 7 TB on 3 separate UFS disks. The initial file system cloning of 2.7 TB of data from the 3 TB disk took about 9 hours.



I assume you are referring to backing up your data disks? If so, my strategy was to implement a second server and rsync all data to it. In my case I'm trying to find a way to back up/clone the boot disk, so it can be restored to another one to get the server back up and running. To ShelLuser's point, I didn't provide much information on my setup.



obsigna said:


> that for backup purposes you want to use zfs send



Again, I'm at the initial stage as a FreeBSD user, so I assume that could be used to _back up_ a ZFS volume. I'm using several rsync tasks to maximize the bandwidth utilization of my 10Gb network.
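For what it's worth, the splitting I do looks roughly like this (the host name and directory names here are placeholders, not my real layout):

```shell
# One rsync per top-level directory, run in parallel, then wait for all
# ("backup" host and the /raid subdirectories are only examples)
for d in vm media home; do
    rsync -aHAX --delete "/raid/$d/" "backup:/raid/$d/" &
done
wait
```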



obsigna said:


> clone



Are you referring to the one from cyclaero? The only clone I've found in FreeBSD ports was a UPS driver.



ShelLuser said:


> First keep well in mind that 'best' is highly subjective.



You got a point.



ShelLuser said:


> It might have helped if you would have told us more about your current setup.



Hopefully the information at the beginning of this reply will provide a better idea.



ShelLuser said:


> To be honest I'm not too sure I see much added value to putting root onto ZFS, that is: not within the context of your setup.



Agreed, besides the fact that it would be a lot of work to get the current one into a mirrored ZFS setup, so cloning it would be my goal. I understand that would require ~30 minutes of downtime if done with Clonezilla, but for a home setup that wouldn't be a problem (if it only worked... the question has been hanging on their forum for a couple of weeks already).



			
ShelLuser said:


> `zfs send` and dump should be quite suitable to maintain solid backups.



If I understood correctly, *zfs send* would create a copy of one ZFS volume on another, and zfs receive would be used to restore it back to the original volume. I'm not sure it would work with UFS and for a boot device, so maybe *dump* would be the preferred method here? The only caveat is that it would require a separate back-up of some directories when used from the root directory, as per Backup Basics, but I might set up a test server to test that.

Thanks again for your thoughts and apologies for the n00b questions.

Edit: obsigna, just noticed your alias is obsigna, so clone is yours  Small world.


----------



## obsigna (Dec 1, 2018)

al mello said:


> Are you referring to the one from cyclaero? The only clone I've found on FreeBSD was an UPS driver.



Yes, the repository on GitHub is the upstream for the FreeBSD port sysutils/clone (indirectly, by way of a snapshot for now; the next release will come directly from upstream). By its principle of operation clone(8) is file system agnostic, only I didn't test it with ZFS, and there might be some obstacles with ACLs and extended attributes. It works well with HFS+ (macOS), UFS (FreeBSD) and NFS (both).


----------



## Phishfry (Dec 1, 2018)

Seeing as you come from the tape world, I doubt this will interest you, but I am enjoying net/rclone.
In addition to online storage it can use local storage as well, and it has a syntax similar to rsync's.
https://rclone.org/

I have been using FreeBSD for 3 years now, and just the other day I finally used `dd` to back up a USB stick install to an image file and burn it out to another USB stick. I couldn't believe how easy it was, and now I have a backup image file.

```
dd if=/dev/da0 of=bsd.img bs=1M conv=sync   # stick -> image file
dd if=bsd.img of=/dev/da0 bs=1M conv=sync   # image file -> stick
```
The only caveat is that you need to restore onto a disk of equal or larger size, as dd(1) copies the whole disk including empty space.
So on a sparse disk it is probably a waste of cycles.
I found diskinfo(8) good for determining the sector count of a disk:
`diskinfo -v /dev/da0`
The reason I needed this is that the receiving 16GB stick was slightly smaller than the original 16GB stick, causing a boot failure.
dd is not very verbose about these types of failures.
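A quick size check before writing the image back would catch that; here is a sketch using file-backed "devices" so it can be tried without real sticks (the file names are made up; use diskinfo(8) to get the size of a real disk):

```shell
# dd will silently truncate if the target is smaller than the image,
# so compare sizes first. The two dd calls below fabricate a 64 KiB
# "stick" image and a smaller 32 KiB target for demonstration.
dd if=/dev/zero of=source.img bs=1k count=64 2>/dev/null
dd if=/dev/zero of=target.img bs=1k count=32 2>/dev/null

img=$(wc -c < source.img)   # size of the backup image in bytes
dst=$(wc -c < target.img)   # size of the restore target in bytes

if [ "$img" -gt "$dst" ]; then
    echo "target too small: need $img bytes, have $dst"
else
    dd if=source.img of=target.img bs=1k conv=sync
fi
```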


----------

