# Cloning a live FreeBSD system for easy disaster recovery



## carmik (Apr 19, 2019)

I'm using a FreeBSD system as the main authoritative DNS/firewall/UTM system for a 100+ user (local) network. I do not have a backup of this critical system, which makes me nervous. A lot!

At some point I want to virtualize this installation, but I lack the time to do so. Please also note that even though this system has been running with absolutely minimal care for around 15 years now, apart from the usual updates, I would not call myself BSD- (or Linux-, for that matter) savvy.

To make a long story short: got hold of a POS WD Mycloud EX4100 NAS unit. It must definitely be some sort of curse upon sysadmins anywhere, but one has to play the hand he was dealt, so...

My FreeBSD installation is not a huge one:

```
# gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 976773134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,2b90efa1-21a7-11e6-8af2-6805ca3f4651,0x28,0x400)
   rawuuid: 2b90efa1-21a7-11e6-8af2-6805ca3f4651
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: ada0p2
   Mediasize: 21475360768 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(2,GPT,809c0a1a-21a8-11e6-8af2-6805ca3f4651,0x428,0x2800400)
   rawuuid: 809c0a1a-21a8-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwrootfs
   length: 21475360768
   offset: 544768
   type: freebsd-ufs
   index: 2
   end: 41945127
   start: 1064
3. Name: ada0p3
   Mediasize: 4294967296 (4.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e0
   efimedia: HD(3,GPT,a7a54cde-21a8-11e6-8af2-6805ca3f4651,0x2800828,0x800000)
   rawuuid: a7a54cde-21a8-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: gwswap
   length: 4294967296
   offset: 21475905536
   type: freebsd-swap
   index: 3
   end: 50333735
   start: 41945128
4. Name: ada0p4
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(4,GPT,0b93fdb9-21a9-11e6-8af2-6805ca3f4651,0x3000828,0x2800000)
   rawuuid: 0b93fdb9-21a9-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwvarfs
   length: 21474836480
   offset: 25770872832
   type: freebsd-ufs
   index: 4
   end: 92276775
   start: 50333736
5. Name: ada0p5
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(5,GPT,2a801338-21a9-11e6-8af2-6805ca3f4651,0x5800828,0x2800000)
   rawuuid: 2a801338-21a9-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwtmpfs
   length: 21474836480
   offset: 47245709312
   type: freebsd-ufs
   index: 5
   end: 134219815
   start: 92276776
6. Name: ada0p6
   Mediasize: 32212254720 (30G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(6,GPT,4f1bb0d4-21a9-11e6-8af2-6805ca3f4651,0x8000828,0x3c00000)
   rawuuid: 4f1bb0d4-21a9-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwusrfs
   length: 32212254720
   offset: 68720545792
   type: freebsd-ufs
   index: 6
   end: 197134375
   start: 134219816
7. Name: ada0p7
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(7,GPT,ab1e562f-21a9-11e6-8af2-6805ca3f4651,0xbc00828,0x2800000)
   rawuuid: ab1e562f-21a9-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwcachefs
   length: 21474836480
   offset: 100932800512
   type: freebsd-ufs
   index: 7
   end: 239077415
   start: 197134376
8. Name: ada0p8
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(8,GPT,e26bd9fe-21a9-11e6-8af2-6805ca3f4651,0xe400828,0x2800000)
   rawuuid: e26bd9fe-21a9-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwrootfsbackup
   length: 21474836480
   offset: 122407636992
   type: freebsd-ufs
   index: 8
   end: 281020455
   start: 239077416
9. Name: ada0p9
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(9,GPT,fc74c097-21a9-11e6-8af2-6805ca3f4651,0x10c00828,0x2800000)
   rawuuid: fc74c097-21a9-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwvarfsbackup
   length: 21474836480
   offset: 143882473472
   type: freebsd-ufs
   index: 9
   end: 322963495
   start: 281020456
10. Name: ada0p10
   Mediasize: 32212254720 (30G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(10,GPT,0c7624f0-21aa-11e6-8af2-6805ca3f4651,0x13400828,0x3c00000)
   rawuuid: 0c7624f0-21aa-11e6-8af2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwusrfsbackup
   length: 32212254720
   offset: 165357309952
   type: freebsd-ufs
   index: 10
   end: 385878055
   start: 322963496
11. Name: ada0p11
   Mediasize: 257698037760 (240G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(11,GPT,970d2531-2ba8-11e6-95e2-6805ca3f4651,0x17000828,0x1e000000)
   rawuuid: 970d2531-2ba8-11e6-95e2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 257698037760
   offset: 197569564672
   type: freebsd-ufs
   index: 11
   end: 889194535
   start: 385878056
12. Name: ada0p12
   Mediasize: 42949672960 (40G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(12,GPT,0f128e61-2632-11e6-ba45-6805ca3f4651,0x35000828,0x5000000)
   rawuuid: 0f128e61-2632-11e6-ba45-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwsquidcachefs
   length: 42949672960
   offset: 455267602432
   type: freebsd-ufs
   index: 12
   end: 973080615
   start: 889194536
13. Name: ada0p13
   Mediasize: 1890566144 (1.8G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(13,GPT,bb0137d4-2ba8-11e6-95e2-6805ca3f4651,0x3a000828,0x3857e0)
   rawuuid: bb0137d4-2ba8-11e6-95e2-6805ca3f4651
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gwsquidlogsfs
   length: 1890566144
   offset: 498217275392
   type: freebsd-ufs
   index: 13
   end: 976773127
   start: 973080616
Consumers:
1. Name: ada0
   Mediasize: 500107862016 (466G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r10w10e19
```

(Note: the "..backup" slices were created thinking that I would take backups of the partitions on the same disk, silly idea...)

Is there some sort of package that can:
(1) snapshot my live filesystem and copy it to an NFS/iSCSI share (the NAS could serve that purpose)? The point is to have an off-system/off-site backup of my system, in order to be able to
(2) *easily* re-create a FreeBSD boot disk from the backup in (1)

Please excuse my ignorance; I'm trying to maximize what gets done with the tiny slices of time I have while juggling everything else...


----------



## SirDice (Apr 19, 2019)

Use dump(8)/restore(8). Personally, though, I don't back up the OS or the applications: restoring usually takes more time than a clean install (as in: a few hours vs. 10 minutes), so I only back up configuration files and data. Everything else is easily (re)installed from scratch; the data is what's important.
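For the OP's case, a dump-to-NAS pass could be sketched like this. Everything here (the NFS mount point, the filesystem list, even running level 0 every time) is an assumption to adapt; the function only builds the command strings, so nothing is executed by accident:

```shell
#!/bin/sh
# Sketch: full (level 0) dump(8) of each UFS filesystem to an NFS-mounted NAS.
# BACKUP_DIR and the filesystem list are assumptions -- adjust to your layout.
BACKUP_DIR=${BACKUP_DIR:-/mnt/nas/dumps}

# Build the dump command for one filesystem:
#   -0 full dump, -u record in /etc/dumpdates, -a ignore tape length,
#   -L snapshot the live UFS filesystem first, -f write to the given file.
dump_cmd() {
    echo "dump -0uaLf ${BACKUP_DIR}/$2.dump $1"
}

# Print one dump command per filesystem; pipe to sh (as root) to execute.
for pair in "/:root" "/var:var" "/usr:usr"; do
    dump_cmd "${pair%%:*}" "${pair##*:}"
done
```

Note the `-L` caveat discussed further down this thread: with soft updates journaling enabled, the snapshot step will fail.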

VMs are treated a little differently. There it's worthwhile to invest in a backup solution that works at the hypervisor level (Veeam is a popular choice). Those often work independently of the OS in the VM and can quickly create and restore snapshot-type backups.



carmik said:


> I'm using a FreeBSD system as the main authoritative DNS/firewall/UTM system for a 100+ user (local) network. I do not have a backup of this critical system, which makes me nervous. A lot!


Also think about hot or cold standby systems. Even if you have good backups, it's going to take some time to rebuild/restore, and in the meantime you'd be offline. For critical systems, remember the mantra: one equals none. So make sure you have something that can take over at a moment's notice.


----------



## carmik (Apr 22, 2019)

SirDice said:


> Use dump(8)/restore(8). Personally, though, I don't back up the OS or the applications: restoring usually takes more time than a clean install (as in: a few hours vs. 10 minutes), so I only back up configuration files and data. Everything else is easily (re)installed from scratch; the data is what's important.


I really don't know what I should back up, and even if I did, there's always the problem of having the right version ISO at hand, remembering which packages to load, etc. I made this installation when I was smarter, faster and leaner. Looking at it now, it feels as though someone else who knew what to do made it...

How would one go about using dump(8)/restore(8), considering a NFS system is available for this purpose?

Alternatively, if one should back up only the configurations, which folders would be needed? /etc, /usr/local/etc, others?



> VMs are treated a little differently. There it's worthwhile to invest in a backup solution that works at the hypervisor level (Veeam is a popular choice). Those often work independently of the OS in the VM and can quickly create and restore snapshot-type backups.


Budget was always a constraint around these parts, plus I have always preferred open source whenever I felt it was rock solid for production use. At the moment I use Proxmox, with full VM backups plus LVM-based snapshots for whatever VMs I have. This FreeBSD system is the one left to include. I have not done so for one reason only (don't laugh): I don't feel comfortable VM'ing my FreeBSD box with its 3 network cards... Talk about pathetic.



> Also think about hot or cold standby systems. Even if you have good backups, it's going to take some time to rebuild/restore, and in the meantime you'd be offline. For critical systems, remember the mantra: one equals none. So make sure you have something that can take over at a moment's notice.


I do have a cold standby system. The problem is that its backup was taken some years ago, which just relocates a no-backup problem to an obsolete-backup one.

And one more thing: over these 10+ years I've been dropping into these forums once in a while and getting awesome support from the community and from you personally. Just a big thanks here for making an excellent system. Superb!


----------



## 11e9b60a (Apr 22, 2019)

rsync is not mentioned yet, but perhaps it should be. `-a` should work; ~~just watch out for a few immutable files that need to be handled manually: `find / -flags +schg`~~

On the new drive:

```
# sysutils/gdisk is handy if gpart is being buggy and won't let you make a new
# partition table (gdisk hints at "sysctl kern.geom.debugflags=16" to do what you want)
gdisk adaNewDrive
newfs -U -j -L label /dev/gpt/drive
# PMBR as per gptboot(8); NOTE: booting won't work if the rootfs is not the
# first UFS partition the loader encounters (stupid loader)
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 adaX
# freebsd-boot partition: just dd it
```

EDIT, new:

```
gjournal label ...
newfs -J -L rootfs /dev/diskid/...p[first-encountered-ufs-partition].journal
```


----------



## blackhaz (Apr 22, 2019)

I use rsync. Here's my backup script, which is run by cron every hour. It first checks whether another rsync process is already running; if one is, it aborts. The result is written to a log file so I can verify the backup was made.


```
#!/bin/sh
# This script is launched by cron every hour to backup the entire file system to a Raspberry Pi.
# /etc/ssh/sshd_config "PermitRootLogin yes" must be set on the remote box.

if ! pgrep -x "rsync" > /dev/null
then
    echo "`date` Initiating backup..." >> /home/USERNAME/scripts/last-raspberry-backup
    rm /home/USERNAME/scripts/last-raspberry-backup.log
    sshpass -p "PASSWORD" rsync --log-file=/home/USERNAME/scripts/last-raspberry-backup.log \
    --archive --hard-links --delete --sparse --xattrs --numeric-ids --acls --progress \
    --exclude=/usr/home/USERNAME/.cache --exclude=/dev --exclude=/tmp --exclude=/media \
    --exclude=/mnt --exclude=/proc --exclude=/var/cache --exclude=/compat/linux/proc \
    --exclude=/usr/home/USERNAME/.gvfs \
    --exclude=/usr/home/USERNAME/share \
    --exclude=/usr/home/USERNAME/.local/share/Trash \
    --exclude=/var/db/entropy \
    --bwlimit=1000 \
    / root@192.168.0.22:/mnt/freebsd-backup/
    echo "`date` rsync code $?" >> /home/USERNAME/scripts/last-raspberry-backup
else
    echo "`date` rsync running..." >> /home/USERNAME/scripts/last-raspberry-backup
fi
```
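One caveat with the `pgrep -x rsync` guard above: it matches *any* rsync on the box, including ones this script did not start. A lock directory is a tighter mutual-exclusion sketch (lock path hypothetical):

```shell
#!/bin/sh
# Sketch: mkdir-based lock so only one instance of the backup job runs.
# mkdir is atomic: of two concurrent runs, exactly one creates the directory.
LOCKDIR=${LOCKDIR:-/tmp/rsync-backup.lock}

acquire_lock() { mkdir "$1" 2>/dev/null; }
release_lock() { rmdir "$1"; }

if acquire_lock "$LOCKDIR"; then
    trap 'release_lock "$LOCKDIR"' EXIT   # always release on exit
    echo "lock acquired, running backup"
    # ... the rsync invocation from the script above goes here ...
else
    echo "another backup is still running, skipping"
fi
```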

I display the dates of successful attempts from the log file every time my terminal starts, in .zshrc:


```
cat /home/USERNAME/scripts/last-raspberry-backup | grep "code 0" | tail
```

So, every time I open a terminal I see something like this:


```
Sat Apr 13 01:01:06 BST 2019 rsync code 0
Sun Apr 21 21:40:19 BST 2019 rsync code 0
Sun Apr 21 23:05:49 BST 2019 rsync code 0
Mon Apr 22 10:08:59 BST 2019 rsync code 0
%
```

This lets me keep track of when the last backups were made.

Restore is done by detaching the drive from the RPi box, attaching it to the main machine and doing something like this after a clean install:


```
rsync -av --hard-links --xattrs --acls --sparse /media/da0p1/freebsd-backup/ /
```


----------



## carmik (Apr 22, 2019)

I see. Assuming that the worst has happened, does one need to download the needed extra packages on the freshly installed system, or does the rsync command above take care of that?


----------



## VladiBG (Apr 22, 2019)

Keep in mind that you can't safely back up a live database using rsync. First make a backup of the database using its internal tools, like pg_dump or mysqldump, and only then copy that backup.
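That ordering, dump first, copy second, could be sketched as below. The database name, paths and backup host are hypothetical, and the functions only build command strings (mysqldump shown; pg_dump is analogous):

```shell
#!/bin/sh
# Sketch: take a consistent database snapshot *before* the file-level copy.
DB=${DB:-mydb}
DUMPFILE=${DUMPFILE:-/var/backups/${DB}.sql}

db_dump_cmd() {
    # --single-transaction gives a consistent InnoDB snapshot without locking
    echo "mysqldump --single-transaction $1 > $2"
}

copy_cmd() {
    echo "rsync -a $1 backuphost:/backups/"
}

db_dump_cmd "$DB" "$DUMPFILE"   # step 1: consistent SQL dump
copy_cmd "$DUMPFILE"            # step 2: copy the dump, not the live DB files
```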

You may find my post in this thread about dump/restore useful:








> **UFS - trying to merge to a raid system** (forums.freebsd.org)
>
> Hi, I have a server running current version of freebsd 11.2. I have bought 8 10 tb hard drives and had one 920 gb drive. the 920 gb drive is an old one that I installed when I built my first server. I now want to set the system up as a raid, so for every 2 each 2 drives will mirror each...


----------



## blackhaz (Apr 22, 2019)

carmik, you just need to install the rsync port or package. Nothing else needed.


----------



## carmik (Apr 24, 2019)

I made some modifications, but it seems the `--acls` option is not supported. Without that option it seems to run OK, but would omitting this parameter render the backup unusable?


----------



## carmik (Apr 24, 2019)

I'll look into that as well, thank you.


----------



## gpw928 (Apr 26, 2019)

carmik said:


> I'm using a FreeBSD system as the main authoritative DNS/firewall/UTM system for a 100+ user (local) network. I do not have a backup of this critical system, which makes me nervous. A lot!



It would certainly do you no harm to get a copy of your system.  I would quiesce any applications that write data to disk, use dump(8) on each file system, and save the files produced.

You can write them to a network file system mount point, or pipe them into a network command, e.g. for the root:

`dump u0f - / | ssh user@host dd of=/some/writable/path/root.dump`

In your situation, I would also look for some spare hardware to test the restore(8).  Make a plan, and do it.  That would give you confidence about recovery.
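On the test hardware, the matching restore could look something like the sketch below (device, mount point and dump path are assumptions; the function only builds the command string):

```shell
#!/bin/sh
# Sketch: pull a saved dump back over ssh and unpack it with restore(8).
# Run from the recovery system as root, with the target filesystem newfs'ed
# and mounted, e.g.: newfs -U /dev/ada0p2 && mount /dev/ada0p2 /mnt && cd /mnt
restore_cmd() {
    echo "ssh $1 cat $2 | restore -rf -"
}

restore_cmd user@host /some/writable/path/root.dump
```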

You may then want to do the dump regularly, or look at other options like rsync.

CYA 101 complete.

Once you have the backups sorted, you can then look at ongoing management.  Virtualisation adds an extra layer of complexity.  But it's also very handy for making test beds.

If you don't really know what you have, then I would first choose to try to replicate it on another piece of hardware, document as I went, and test at appropriate times.  The idea is you want a reference system, as well as a full retreat path readily available.

You need to create a list of what's installed and how it's configured.

Start by getting a list of installed packages:

`pkg info`

Build a new host and install them on that new host.  On the old host, look for configuration files:

`find /etc /usr/local/etc -name '*.conf*' -print`

Examine every ".conf" file.

Then compare the full /usr/local tree on your hosts for clues about what might have been installed as a port (as opposed to a package).
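The inventory steps above can be folded into one script; the output directory is hypothetical, and pkg(8) is guarded so the sketch degrades gracefully on a non-FreeBSD box:

```shell
#!/bin/sh
# Sketch: capture what is installed and how it is configured, for rebuilds.
OUT=${OUT:-/tmp/host-inventory}
mkdir -p "$OUT"

# 1. Installed packages (pkg(8) exists only on FreeBSD, hence the guard).
if command -v pkg >/dev/null 2>&1; then
    pkg info > "$OUT/packages.txt"
else
    : > "$OUT/packages.txt"
fi

# 2. Configuration files worth examining one by one.
conf_list() {
    find "$@" -name '*.conf*' -type f 2>/dev/null
}
conf_list /etc /usr/local/etc > "$OUT/conf-files.txt"

echo "inventory written to $OUT"
```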

Here is a good FreeBSD setup guide.  Use it as a checklist to determine what's been done on your existing host.

But get your backup first, and come back for more guidance as you need it.

Cheers,


----------



## gpw928 (Apr 27, 2019)

Wozzeck.Live said:


> Forget dump (very slow, but interesting for making incremental data backups). Moreover, if you run UFS with UFS journaling, "dump" will not be available. Dump requires a UFS snapshot, which is not possible with UFS journaling, unless you first deactivate UFS journaling or unless you switch to gjournal (the best solution for UFS, and what every UFS user should set up)


Would you please clarify what you mean by "UFS journaling"?  I am only aware of gjournal ("newfs -J") and journaled soft-updates ("newfs -j").  I was not aware that either are incompatible with dump(8).


----------



## VladiBG (Apr 27, 2019)

You can't run dump -L on a live file system while soft updates journaling (SUJ) is enabled.


----------



## gpw928 (Apr 27, 2019)

VladiBG said:


> You can't make dump -L on the live file system while the soft updates journaling are enabled.


Thank you.  We all did very well without "-L" for quite a few decades before it got invented...


----------



## Phishfry (Apr 27, 2019)

I agree to an extent. We made out fine without filesystem journaling for a long time. Linux has it too: any filesystem after ext2. Hence any interop tools have been broken since; they work, but they just trounce the FS journal.
I truly enjoy the features of journaling. It is fun to muse about the old days.

My general FreeBSD server philosophy is to use a 32/64 GB DOM or half-mini SATA disk for the OS and all applications, then stick all the data on RAID arrays.
To me it makes things easier to maintain. I do cron filesystem backups of settings and configs into a dated tarfile.
Secondly, I do a dd of the OS disk and turn it into a disk image. That way I can mount it to extract files or burn it to a new disk.

My problem with the OP's setup is that I dislike all those partitions (13 of them!) on a corporate firewall/UTM/DNS box.
I believe in an isolated firewall device with no extraneous applications that could be compromised: the KISS approach. This is where the pfSense philosophy has been embedded in my brain. They have the right mindset; whether you need a web application is the only question. I have since moved to OPNsense, and as I gain more experience I will ditch that too.
I was kind of shocked when I was investigating my new OPNsense NanoBSD install: they have the filesystem read-write by default. What the heck; that is why I use the NanoBSD install in the first place. So you really can't trust anybody else with your firewall. You really need to know what's going on.


----------



## gpw928 (Apr 28, 2019)

Phishfry said:


> I agree to an extent.


I guess I was being somewhat facetious.  Snap is nice to have.  I'm not really a luddite...

I agree on KISS for Internet firewalls. Mine is on a Raspberry Pi: DNS, time, byte counting (so the ISP can't screw me for $10/GB on exceeding quota), Internet link monitoring (it's PPP over 3G cellular mobile, and has to be watched), and packet filtering. That's all. 16 GB SD card, 5 GB actually used. Access is by ssh key on the LAN.

To get the box count down I virtualised both OPNsense and pfSense, to test them.  Fact is that the Raspberry Pi does everything I want, and I understand every single line of the firewall code...  So I have no reason to change yet, but the vast feature list of OPNsense/pfSense may compel me one day.

The original poster has a single system, apparently unrecoverable, with 100 people that may stop work if it fails. To that end, I think (s)he needs to prioritise:

1. managing upwards to share "ownership" of the problem;
2. getting the existing firewall recoverable; and
3. looking at improving the setup.

I would dump(8) the existing file systems to the NAS. Then boot the standby system from a FreeBSD CD/USB, make the required file systems, and restore(8) the backups from the NAS. Change the IP address, install the boot block, and boot it up. I would do it that way because it's tried and true. Warren Block has some good guidance for dump/restore here. I just found some guidance on getting the standby system pre-configured here. cpdup(1), mentioned above, looks like it may be a good alternative to dump/restore, but the standby system would still need file systems and boot blocks prepared. It's a bit daunting for the inexperienced, so if it were me, I would make a written plan and post it here for critique...
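The rebuild-the-standby sequence above can be sketched end to end. This is a dry run by construction (each command is only printed), and the disk name, sizes, labels and NAS path are all assumptions:

```shell
#!/bin/sh
# Dry-run sketch: recreate partitions, boot blocks and the root filesystem
# on the standby disk, then restore the level-0 dump from the NAS.
DISK=${DISK:-ada0}
run() { echo "$@"; }    # change to run() { "$@"; } to execute for real

run gpart create -s gpt "$DISK"
run gpart add -t freebsd-boot -s 512k "$DISK"
run gpart add -t freebsd-ufs -s 20g -l rootfs "$DISK"
run gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 "$DISK"
run newfs -U "/dev/${DISK}p2"
run mount "/dev/${DISK}p2" /mnt
# restore(8) must run from inside the freshly mounted filesystem
run cd /mnt
run restore -rf /mnt/nas/dumps/root.dump
```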

It would seem to me that pfSense (maybe with subscription support) might be a good choice for 3 above.


----------



## carmik (Apr 30, 2019)

As I described in the OP, the concept of having a multitude of partitions was naive. The reasoning at the time was that I could safeguard against file deletions from, say, /var by restoring files from /varbackup. A hardware failure would affect all slices anyway.

Regarding pfSense: that came much, much later than my FreeBSD build; I started using the latter at version 4 (IIRC). My problem is that my setup is now too complex, and my fear is that I couldn't fine-tune pfSense (be it squid, squidGuard, my custom DNS zones or other things) the way my box is currently built. Or that pfSense might turn fully commercial, leaving me with a dead-ended product. Never gonna happen with FreeBSD...

Finally, a Pi is good for handling traffic, but HTTP caching needs fast and reliable storage, and even URL filtering (via squidGuard in my case) needs RAM. My box (an i5-4570) strains when users start Google Maps (squid seems to go nuts with only a couple of users doing Google Earth zooms and pans).

In any case, it seems the fastest way is to get some dumps and then try to load them into a VM to see if it boots alright.

Thanks to all for some quality responses, glad to be here


----------



## mfaridi (May 6, 2019)

I think you can use something like Clonezilla to make a whole-system image or snapshot of your current running system and restore it when something bad happens.
I moved my FreeBSD servers to Dell servers with Clonezilla: I modified some config files and now use those great servers on new hardware. Our old servers had hardware problems, and I moved them to new hardware with Clonezilla; all the configs remained in place and kept working.
I do not know whether we have something like Clonezilla in FreeBSD.


----------



## recluce (May 6, 2019)

mfaridi said:


> I think you can use something like Clonezilla to make a whole-system image or snapshot of your current running system and restore it when something bad happens.
> I moved my FreeBSD servers to Dell servers with Clonezilla: I modified some config files and now use those great servers on new hardware. Our old servers had hardware problems, and I moved them to new hardware with Clonezilla; all the configs remained in place and kept working.
> I do not know whether we have something like Clonezilla in FreeBSD.



I am not sure that Clonezilla is the best approach, but it should work with any OS, including FreeBSD, if you simply choose the sector-by-sector backup approach. Drawback: since Clonezilla is working blindly here, it will back up blank space and be much less efficient than its filesystem-based backup modes.


----------



## rickyzhang (Jun 3, 2020)

I tried Clonezilla (clonezilla-live-2.6.6-15-amd64) to back up my pfSense home server.

It did locate my destination disk and its ext4 partition, but it failed to find the FreeBSD UFS partition disk as a source disk.

The FreeBSD UFS partitions did show up when it asked for a destination disk.


----------



## Phishfry (Jun 4, 2020)

I have been experimenting with sysutils/ufs_copy, with good results.
It is especially useful for shuffling disks around. It works at the partition level and can clone a mounted partition via a UFS snapshot.


----------



## Dave Lister (Jun 4, 2020)

carmik said:


> I'm using a FreeBSD system as the main authoritative DNS/firewall/UTM system for a 100+ user (local) network. I do not have a backup of this critical system, which makes me nervous. A lot!
> 
> At some point I want to virtualize this installation. But I lack the time to do so. Please also note that even though this system runs with absolutely minimal care for around 15 years now, apart from the usual updates I would not call myself BSD- (or Linux for that matter) savvy.
> 
> ...


As a FreeBSD noob myself, I asked a similar question here:








> **Simple backup protocol for home user.** (forums.freebsd.org)
>
> I'm new to FreeBSD and have just installed a 12.1 release 32-bit version with Mate desktop, OpenOffice, Thunderbird and Firefox are all installed, and even the printer is working. So before I inevitably mess things up, I am looking for some pointers on how best to go about backing up the...
				




I had problems using rsync, seemingly because I was writing to a USB drive that couldn't handle so many small files; I think it was overheating. However, Borg backup was suggested and seems to have worked well.
I used:

```
# pkg search borg
py37-borgbackup-1.1.10         Deduplicating backup program
py37-borgmatic-1.3.26_1        Wrapper script for Borg backup software
```
then installed both:

```
# pkg install py37-borgbackup-1.1.10
# pkg install py37-borgmatic-1.3.26_1
```

then followed the instructions here:

https://borgbackup.readthedocs.io/en/stable/quickstart.html

Before a backup can be made, a repository has to be initialized:

```
# borg init --encryption=repokey /path/to/repo
```

Back up the ~/src and ~/Documents directories into an archive called _Monday_:

```
# borg create /path/to/repo::Monday ~/src ~/Documents
```

Restore the _Monday_ archive by extracting the files relative to the current directory:

```
# borg extract /path/to/repo::Monday
```
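The quickstart also covers retention; a pruning sketch (repo path and keep policy are assumptions, and the function only builds the command string):

```shell
#!/bin/sh
# Sketch: thin out old borg archives -- keep 7 daily, 4 weekly, 6 monthly.
REPO=${REPO:-/path/to/repo}

prune_cmd() {
    echo "borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 $1"
}

prune_cmd "$REPO"   # run (e.g. from cron) after each 'borg create'
```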


----------



## Argentum (Jun 4, 2020)

carmik said:


> Is there some sort of package that can:
> (1) snapshot my live filesystem and copy/backup it to an NFS/iSCSI share (could use the NAS for that purpose)? Point is, I would like to have an off-system/off-site backup of my system, in order to be able to
> (2) *easily* re-create a freebsd boot disk from the backup in (1)



My advice is just for the future - with ZFS your task would be easy. In your position I would strongly advise to build a new system with ZFS.


----------



## Dave Lister (Jun 4, 2020)

Argentum said:


> My advice is just for the future - with ZFS your task would be easy. In your position I would strongly advise to build a new system with ZFS.


Is there any potential long term risk with Oracle reverting ZFS to closed source and OpenZFS developing independently?


----------



## Argentum (Jun 4, 2020)

Dave Lister said:


> Is there any potential long term risk with Oracle reverting ZFS to closed source and OpenZFS developing independently?



That is a good question, but personally I am not afraid. I am using ZFS on all my installations from laptop to BHYVE quests and I am happy with that.


----------



## tingo (Jun 4, 2020)

Dave Lister said:


> Is there any potential long term risk with Oracle reverting ZFS to closed source and OpenZFS developing independently?


As with any open source project, there is always the long term risk that the project in question might be abandoned some time in the future, due to lack of resources or interest.
My view (FWIW): OpenZFS has less chance of getting abandoned than FreeBSD, because it is used by more projects.
If you are planning for the long term, it is wise to think about / have a plan for what you would switch to if your favorite open source project dies.


----------



## DavidMarec (Jun 7, 2020)

Dave Lister said:


> Is there any potential long term risk with Oracle reverting ZFS to closed source and OpenZFS developing independently?



Oracle ZFS has already turned closed source. OpenZFS now lives independently, so the two versions are no longer compatible.

Like others, I may point out that running FreeBSD on top of ZFS makes administration tasks, e.g. regular backups, easier.


----------



## carmik (Aug 12, 2020)

blackhaz said:


> I use rsync. Here's my backup script that is run by cron every hour. It first checks if another rsync process is running. If not, it aborts. The result is written to a log file so I could make sure the backup is made.
> 
> 
> ```
> ...



Re-opening this thread to say a big thank you. This saved my life after I accidentally overwrote a couple of GB at the start of the boot drive. I had separate /usr, /var and /tmp partitions, so the damage was solely to the root filesystem, which could not boot anymore.

Just in case some careless fella bumps into the same issue, what I did was:

1. Booted with a FreeBSD live CD (same version, 11.3 in my case).
2. The system would not "see" my partitions. I fired up `gparted /dev/ada0`, which threw an error that the partitions were possibly corrupted and asked me to choose between setting an MBR partition table and a GPT one. I selected GPT first, *without committing changes in gparted!* I elected to just display partitions, and there they were: all the partitions I had. So I figured I was good and committed this info.
3. Repaired the overwritten boot loader with the instructions in https://forums.freebsd.org/threads/how-to-restore-boot-loader.62390/post-360340
4. The system would still (obviously) not boot, so I figured I had to recreate partition ada0p2 (the first working partition, that is). To do so, I formatted the partition with `newfs -U /dev/ada0p2`.
5. Now, here's where it got tricky: I could not run the rsync command to recover the files that were backed up with the script above. So I made a new install on a fresh disk; the old one, the one I was trying to recover, became /dev/ada1.
6. After this "minimal" install, I ran `pkg install rsync`.
7. Mounted /dev/ada1p2 as /tmp/old.
8. Used rsync to restore whatever was there.
9. Removed the disk I made the fresh install on, to try to boot from my recovered disk.
10. The system would boot up to a point, then throw an error:

```
mountroot: unable to remount devfs under /dev (error 2)
mountroot: unable to unlink /dev/dev (error 2)
```

Not being an expert on FreeBSD, and with time pressing, I booted the live CD again, mounted ada0p2 and copied the entire /dev directory from the live CD to the disk. I rebooted, and this time got past that error. There were a couple of errors about non-existent mountpoints (/usr, /var and /tmp), for which I just made the specific directories and rebooted.

And it booted like a charm. Praize FreeBSD!

blackhaz mate thanks a zillion again!


----------



## carmik (Aug 12, 2020)

Argentum said:


> My advice is just for the future - with ZFS your task would be easy. In your position I would strongly advise to build a new system with ZFS.


Any howtos describing this approach? Would ZFS protect me from disk failures on plain hardware with single disks? Or are you proposing FreeBSD running on top of a ZFS RAID?


----------



## Argentum (Aug 17, 2020)

carmik said:


> Any howtos describing this approach? Would ZFS protect me from disk failures on plain hardware with single disks? Or are you proposing FreeBSD running on top of a ZFS RAID?



Hello!

With ZFS I just cloned my FreeBSD laptop with ease (I am now writing this message on that very machine):

1. Took a fresh (and bigger) hard drive and connected it to the laptop via a USB adapter; the system recognized this drive as /dev/da0 (the boot device was /dev/ada0).
2. Manually partitioned the new USB drive with gpart(8):
   `gpart add -t efi -s 200M -a 4K da0`
   `gpart add -t freebsd-swap -s <my_new_swap_size>G -a 4K da0`
3. Created a bigger freebsd-zfs partition with `gpart add -t freebsd-zfs -s <my_new_zpool_size>G -a 4K da0`.
4. Installed the boot code with `gpart bootcode -p /boot/boot1.efifat -i 1 da0`.
5. Added the new ZFS partition to the existing pool with `zpool attach zroot /dev/ada0p3 /dev/da0p3`.
6. Allowed the system to resilver the mirror, checking progress with `zpool status`.
7. Shut down, removed the old disk, and installed the new disk that had been connected via USB during the resilver.
8. Booted up from the new disk and removed the old device from the mirror with `zpool detach zroot <old_drive_id>`.

I am now writing this message on a new system with bigger capacity and a new drive, with the old bootable system sitting in my drawer.


----------



## carmik (Aug 18, 2020)

Thanks, Argentum. Now, if I understand correctly, one has to make the initial install on ZFS before being able to clone a disk using your instructions, correct?


----------



## Argentum (Aug 18, 2020)

carmik said:


> Thanks, Argentum. Now, if I understand correctly, one has to make the initial install on ZFS before being able to clone a disk using your instructions, correct?



It is my belief that today it is a good idea to do most FreeBSD installs on ZFS. My personal experience has been good: with ZFS it is easy to clone, snapshot and maintain the system. My previous post is just one example of how I replaced the hard drive in my FreeBSD laptop.


----------



## Mjölnir (Aug 18, 2020)

carmik said:


> Any howtos describing this approach? Would ZFS protect me from disk failures on plain hardware with single disks? Or are you proposing FreeBSD running on top of a ZFS RAID?


Since you wrote that you have a cold backup machine, you can very easily back up to that system with `zfs send/receive`; that's a reasonable solution even for single-disk systems. ZFS offers an additional level of data protection via checksums (it detects corruption where traditional filesystems don't).

When you reinstall, please also strongly consider reworking your firewall setup. A firewall is _not_ a single machine, but a _network setup_. A firewall where the packet filter is not on its _own separate physical_ box (ideally managed through console access only, or, less securely, via a physically separate management LAN) does not deserve its name. Consider OPNsense or pfSense and have a look at their hardware recommendations (refurbished if costs matter). Additionally, ZFS makes it easy to run vulnerable services jailed (in a DMZ host).

On the fear of ZFS development/maintenance being stalled: there is so much demand from major players throughout the industry that this will not happen. It's more likely that Oracle drops SPARC hardware & Solaris before OpenZFS dies.
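A send/receive round to the standby box could be sketched like this; pool, snapshot and host names are assumptions, and the functions only build the command strings:

```shell
#!/bin/sh
# Sketch: replicate a ZFS pool to a cold-standby machine over ssh.
snapshot_cmd() { echo "zfs snapshot -r $1@$2"; }
send_cmd()     { echo "zfs send -R $1@$2 | ssh $3 zfs receive -Fdu backup"; }

snapshot_cmd zroot 2019-04-26           # recursive snapshot of the whole pool
send_cmd zroot 2019-04-26 root@standby  # full stream into the 'backup' pool
# Later runs can send incrementals: zfs send -R -i @old zroot@new | ssh ...
```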


----------

