# UFS Backup



## graudeejs (Nov 16, 2008)

OK, this is definitely worth writing up... especially for new users.
Here I will cover how to back up and restore FreeBSD (to a file) using the native utilities *dump* and *restore*.

note: dump and restore work only on UFS (aka FFS)

*Backing up the system*
To back up the system, use the dump utility.

backup:

```
$ dump -0Lauf /path/to/backups/ad0s1d.dump /dev/ad0s1d
```
where /dev/ad0s1d is any of your partitions, slices or disks formatted with UFS.

To back up and compress on the fly:

```
$ dump -0Lauf - /dev/ad0s1d | bzip2 > /path/to/backups/ad0s1d.dump.bz2
```

*-0* - do a full (level 0) dump of the entire filesystem
*-f name* - output to a file/device, or to stdout if you use *-*
*-a* - needed when you output to a file (tells dump not to worry about backup media volume sizes)
*-u* - update the /etc/dumpdates file after a successful dump
*-L* - needed when dumping a live (mounted) filesystem; dump takes a snapshot and dumps that instead


*Restoring the system*
To restore the system, reboot into single-user mode,
then format the filesystem that you want to restore.
In the backup example we backed up /dev/ad0s1d, so let's format it now:

```
$ newfs -U /dev/ad0s1d
```

Now you need to mount it:

```
$ mkdir /mnt/target
$ mount /dev/ad0s1d /mnt/target
```
Let's imagine you backed up the files to a USB stick (da0, in its root directory);
we need to mount it too:

```
$ mount -t msdosfs /dev/da0 /mnt/usb
```


Important note: restore needs scratch space in /tmp.
If you run out of space in /tmp, mount some filesystem with enough room and
create a symbolic link from /tmp to that mount point.
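Alternatively, restore(8) honors the TMPDIR environment variable, so you can point its scratch space at any filesystem with enough room. A sketch (the scratch path is just an example, not required by restore):

```shell
# give restore scratch space somewhere roomier than /tmp
mkdir -p /mnt/target/restoretmp
TMPDIR=/mnt/target/restoretmp restore -rf /mnt/usb/ad0s1d.dump
```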


Now, to restore from the backup, cd to the directory where you mounted the partition that you want to restore:

```
$ cd /mnt/target
```

To restore from an uncompressed backup:

```
$ restore -rf /mnt/usb/ad0s1d.dump
```

To restore from a compressed backup:

```
$ bzcat /mnt/usb/ad0s1d.dump.bz2 | restore -rf -
```


And that is it.
Now you can delete the restoresymtable file that restore leaves behind in the target directory (in our case /mnt/target).

Now unmount the filesystems and reboot.

*Some notes*
You can do incremental backups - do a full (level 0) backup once, then back up only the files that have changed since the last lower-level dump. See the manual for more info.
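As a sketch (device and paths are examples), a full dump plus a later incremental could look like:

```shell
# level 0: full dump of everything; -u records the date in /etc/dumpdates
dump -0Lauf /backups/ad0s1d.0.dump /dev/ad0s1d
# level 1 (run some days later): dumps only files changed since the last
# lower-level dump recorded in /etc/dumpdates
dump -1Lauf /backups/ad0s1d.1.dump /dev/ad0s1d
# when restoring, extract the level 0 dump first, then each higher
# level in order, all into the same target directory
```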

You can use dump/restore to clone your system to other PCs;
you will probably need to copy the Master Boot Record (MBR) as well.

To back up the MBR:

```
$ dd if=/dev/ad0 of=/path/to/mbr.img bs=512 count=1
```

To restore the MBR:

```
$ dd if=/path/to/mbr.img of=/dev/ad0 bs=512 count=1
```

*Tips*
* I prefer to compress backups; you can guess why

* if you back up /usr, you may delete the contents of the ports directory first.
This will speed up the backup process and reduce the size of the backup...
It's a good idea, because by the time you restore /usr from backups
/usr/ports will be outdated and you will need to update it anyway.
And portsnap works very well (fast) for fetching ports
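Instead of deleting the ports tree, one option (a sketch; the device name is an example) is to mark it with the nodump file flag and tell dump to honor that flag even for a full dump:

```shell
# mark the ports tree so dump can skip it
chflags -R nodump /usr/ports
# by default dump honors the nodump flag only at level 1 and above;
# -h 0 makes it honor the flag for a level 0 (full) dump too
dump -0Lau -h 0 -f /backups/usr.dump /dev/ad0s1e
```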

* I prefer to do full backups; that way you can be 100% sure there won't
be any confusing situations

* if you want to make backups while using the filesystem, make sure you haven't
deleted the .snap directory on the partition that you want to back up

* if you have backed up an encrypted drive, you need to encrypt the backups somehow,
because if someone gets these files, he can restore them to his PC and read your files at will. (I used this method in the FreeBSD + geli guide to encrypt the drive, but the process can be reversed)
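One way to do that (a sketch, not from the original post; the cipher choice and paths are assumptions) is to pipe the compressed dump stream through openssl enc, which will prompt for a passphrase:

```shell
# back up, compress, and encrypt in one pipeline
dump -0Lauf - /dev/ad0s1d | gzip | \
    openssl enc -aes-256-cbc -salt -out /path/to/backups/ad0s1d.dump.gz.enc

# to restore: decrypt, decompress, and feed restore from stdin
openssl enc -d -aes-256-cbc -in /path/to/backups/ad0s1d.dump.gz.enc | \
    zcat | restore -rf -
```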


*Resources*
dump(8)
restore(8)


*Update 1*
*Moving the system*
You can move the system from disk to disk on the fly with:

```
$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a  | restore -rf -
```

You can do the same using sudo (the leading *sudo echo* just caches your credentials, so the second sudo in the pipeline doesn't stop to prompt for a password):

```
$ sudo echo
$ sudo dump -0Lauf - /dev/ad1s1a  | sudo restore -rf -
```

*Update 2*
As OpenBSD suggests, using gzip instead of bzip2 will speed up compression at the cost of slightly larger archives.

So now I suggest using gzip to compress and zcat to decompress on the fly.

I've tested it, and I was amazed.
No more bzip2 for me
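For example (same device and paths as the examples above):

```shell
# dump and gzip on the fly
dump -0Lauf - /dev/ad0s1d | gzip > /path/to/backups/ad0s1d.dump.gz
# and to restore, uncompress on the fly with zcat
zcat /path/to/backups/ad0s1d.dump.gz | restore -rf -
```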

*Update 3*
Just wanted to remind you that you don't need the *-u* flag if you're running dump from the fixit environment...

*Update 4*
You can dump a UFS filesystem and restore it onto any other FS. Dump is FS-specific, while restore isn't.
Note that you probably can't use Linux restore to restore a dumpfile created on FreeBSD, and vice versa, because the dumpfile formats are probably different.

*Update 5*
To increase the compression ratio you can use xz(1) (which I personally prefer lately) instead of gzip(1).


----------



## dave (Nov 17, 2008)

Super helpful!


----------



## abarmot (Nov 17, 2008)

yeah, thanks a lot for how-to!!


----------



## thortos (Nov 17, 2008)

This strategy will probably fail for every server being used more than marginally. Especially dumping databases that are in use (such as Postgres or mySQL data directories) will yield inconsistent results and most likely result in non-working databases after recovery.

While I am aware that important databases are to be replicated live onto backup servers, I want to illustrate that this dump-while-in-use strategy is best used for desktops or low-profile servers, not for heavily-used systems.

How do you people handle the backups of your servers? I'm running a set of customized backup scripts per server that tar important directories and scp them to the backup server, starting and stopping daemons as needed, but obviously that's not for anyone with uptime requirements. I also have many servers running in VMware and use that to snapshot the VMs regularly and scp them to the backup server.


----------



## graudeejs (Nov 17, 2008)

thortos said:

> This strategy will probably fail for every server being used more than marginally. Especially dumping databases that are in use (such as Postgres or mySQL data directories) will yield inconsistent results and most likely result in non-working databases after recovery.
> 
> While I am aware that important databases are to be replicated live onto backup servers, I want to illustrate that this dump-while-in-use strategy is best used for desktops or low-profile servers, not for heavily-used systems.
> 
> How do you people handle the backups of your servers? I'm running a set of customized backup scripts per server that tar important directories and scp them to the backup server, starting and stopping daemons as needed, but obviously that's not for anyone with uptime requirements. I also have many servers running in VMware and use that to snapshot the VMs regularly and scp them to the backup server.



Thanks for your reply.
I use FreeBSD as a desktop, so this is more of a desktop-oriented guide.
You made some very good points...


----------



## zszalbot (Nov 18, 2008)

thortos said:

> How do you people handle the backups of your servers? I'm running a set of customized backup scripts per server that tar important directories and scp them to the backup server, starting and stopping daemons as needed, but obviously that's not for anyone with uptime requirements. I also have many servers running in VMware and use that to snapshot the VMs regularly and scp them to the backup server.


I use a script called automysqlbackup. It works quite well and it suits my needs. 

http://sourceforge.net/projects/automysqlbackup/

Yours,

Zbigniew Szalbot


----------



## soko1 (Nov 18, 2008)

Poor /sbin/dump that does not support uzip (geom_uzip.ko) = (


----------



## graudeejs (Nov 18, 2008)

Read my first post again:

```
$ bzcat /mnt/usb/ad0s1d.dump.bz2 | restore -rf -
```


----------



## Mel_Flynn (Nov 18, 2008)

The attached script runs a weekly full backup, and incrementals 1-6 on the other days. It can compress locally (when the machine being backed up has a faster CPU than the backup machine) or remotely.

All this from the daily periodic run. Primarily useful for desktops that are on during the night, or where the owner has chosen a different time for daily to run.

The full backup can take a very long time, naturally depending on the amount of data, CPU speed for compression and network transfer speed.


----------



## graudeejs (Nov 18, 2008)

Mel_Flynn:


> # dd is necessary, because bzip2 cannot "compress STDIN to
> #named file"



If I understand you right, here's my answer:
you can compress stdin to a file (simplified):

```
dump -0Lauf - /dev/da0s1a | bzip2 > /path/to/backup.img.bz2
```


----------



## fxp (Nov 19, 2008)

Mysql backup:

```
Stop mysql
make snapshot
Start mysql
do dump
```


----------



## abarmot (Nov 19, 2008)

fxp, you don't need to stop MySQL;
mysqldump locks tables while dumping...


----------



## Mel_Flynn (Nov 19, 2008)

killasmurf86 said:

> Mel_Flynn#
> 
> 
> if i understand you right, there's what i say about it:
> ...



Yes, but this doesn't really work well with all shells. At least I had problems with it a few years back when I wrote the script. Things may have improved since then, but I kept the dd step to see the difference in transfer speed that dump and dd report:

```
DUMP: finished in 68 seconds, throughput 1796 KBytes/sec
  DUMP: level 3 dump on Wed Nov 19 03:22:39 2008
  DUMP: DUMP IS DONE
53795+1 records in
53795+1 records out
27543432 bytes transferred in 269.804762 secs (102087 bytes/sec)
```


----------



## graudeejs (Nov 19, 2008)

Well, you used
#!/bin/sh
in your script, which means it MUST work the same everywhere, unless someone has replaced sh with something else.

And it doesn't matter which shell you launch the script from, because it'll be run by sh.


----------



## gelraen (Nov 20, 2008)

killasmurf86 said:

> well, you used
> #!/bin/sh
> in your script, which means it MUST work everywhere the same, unless someone have replaced sh with something else.
> 
> and it doesn't matter under which shell you launch script, because it'll be run in SH


Only if you launch it as an executable. When you run it like "source ./myscript", it will be parsed by the current shell.


----------



## graudeejs (Nov 20, 2008)

gelraen said:

> Only if launch it as binary. When you run it like "source ./myscript" it will be parsed by current shell



Now, why would you want to do that?


----------



## fender0107401 (Nov 21, 2008)

Thank you for the post; I happen to need a system backup solution.

I think backup is an important part of system administration, even though you may never need the backup data.


----------



## graudeejs (Nov 21, 2008)

fender0107401 said:

> Thank you for the post, I just need a system backup solution.
> 
> I think backup is an important part of the system administration, though you may never need the backup data.



As a FreeBSD desktop user, I experiment a lot, and backups save me tons of time.


----------



## fender0107401 (Nov 21, 2008)

killasmurf86 said:

> as a FreeBSD desktop user, i experiment a lot. And backups saves my tons of time.



I am a desktop user too, and I've never had anything terrible happen while experimenting (except several kernel panics from my mp3 player; other USB devices work normally).

Maybe the reason is that I've only used it for a short time (since June), and I prefer the security_release branch. :e


----------



## blackjack (Nov 21, 2008)

This is my script for dumping filesystems. It runs every day at 4:00 AM.

```
cat /root/dumpfs.sh 
#!/bin/sh
fl=`date "+%d-%m-%Y"`
path="/backup/dumpfs"

#root file system
/sbin/dump  -0 -L -f - /dev/ad4s1a > $path/rootfs/dump_ad4s1a_${fl}.img
tar cfz $path/rootfs/dump_ad4s1a_${fl}.tar.gz $path/rootfs/dump_ad4s1a_${fl}.img
rm -f $path/rootfs/dump_ad4s1a_${fl}.img
chmod 400 $path/rootfs/dump_ad4s1a_${fl}.tar.gz

#home
/sbin/dump  -0 -L -f - /dev/ad4s1f > $path/home/dump_ad4s1f_${fl}.img
tar cfz $path/home/dump_ad4s1f_${fl}.tar.gz $path/home/dump_ad4s1f_${fl}.img
rm -f $path/home/dump_ad4s1f_${fl}.img
chmod 400 $path/home/dump_ad4s1f_${fl}.tar.gz

#usr
/sbin/dump  -0 -L -f - /dev/ad4s1e > $path/usr/dump_ad4s1e_${fl}.img
tar cfz $path/usr/dump_ad4s1e_${fl}.tar.gz $path/usr/dump_ad4s1e_${fl}.img
rm -f $path/usr/dump_ad4s1e_${fl}.img
chmod 400 $path/usr/dump_ad4s1e_${fl}.tar.gz

#var
/sbin/dump  -0 -L -f - /dev/ad4s1d > $path/var/dump_ad4s1d_${fl}.img
tar cfz $path/var/dump_ad4s1d_${fl}.tar.gz $path/var/dump_ad4s1d_${fl}.img
rm -f $path/var/dump_ad4s1d_${fl}.img
chmod 400 $path/var/dump_ad4s1d_${fl}.tar.gz
```
And this is my script for backing up MySQL databases.

```
cat /root/backup_db.sh 
#!/bin/sh
passwd_root_mysql='password'
fl=`date "+%d-%m-%Y"`
#billing database
/usr/local/bin/mysqldump -Q --add-locks -u root --default-character-set=cp1251 --password=${passwd_root_mysql} bill > /backup/db/bill/bill_${fl}.sql
tar cfz /backup/db/bill/bill_${fl}.tar.gz /backup/db/bill/bill_${fl}.sql
rm -f /backup/db/bill/bill_${fl}.sql
chmod 400 /backup/db/bill/bill_${fl}.tar.gz
#all databases
/usr/local/bin/mysqldump --set-charset --all-databases -u root --password=${passwd_root_mysql} > /backup/db/all/all_databases_${fl}.sql
tar cfz /backup/db/all/all_databases_${fl}.tar.gz /backup/db/all/all_databases_${fl}.sql
rm -f /backup/db/all/all_databases_${fl}.sql
chmod 400 /backup/db/all/all_databases_${fl}.tar.gz
#old_base
/usr/local/bin/mysqldump -Q --add-locks -u root --default-character-set=cp1251 --password=${passwd_root_mysql} old_base > /backup/db/old_base/old_base_${fl}.sql
tar cfz /backup/db/old_base/old_base_${fl}.tar.gz /backup/db/old_base/old_base_${fl}.sql
rm -f /backup/db/old_base/old_base_${fl}.sql
chmod 400 /backup/db/old_base/old_base_${fl}.tar.gz
```


----------



## Mel_Flynn (Nov 21, 2008)

killasmurf86 said:

> well, you used
> #!/bin/sh
> in your script, which means it MUST work everywhere the same, unless someone have replaced sh with something else.
> 
> and it doesn't matter under which shell you launch script, because it'll be run in SH



No. The shell redirect is on the target machine and passed on from ssh's command line parsing. All I remember is that it wouldn't work to a BSDi 4.1 host, nor an AIX host, but I can't for the life of me remember the error message.

echo foo|ssh host "cat - >/tmp/out"

works now, didn't work then.
Come to think of it, it's possible it was caused by a shell wrapper.


----------



## graudeejs (Dec 12, 2008)

*UPDATE 2*
As OpenBSD suggests, using gzip instead of bzip2 will speed up compression at the cost of slightly larger archives.

So now I suggest using gzip to compress and zcat to decompress on the fly.

I've tested it, and I was amazed.
No more bzip2 for me



P.S. can an admin/moderator integrate this into the original post (#1)?


----------



## nakal (Dec 14, 2008)

I would not back up MBRs as you suggested, unless you expect to restore things onto the same drive again. The MBR stores the drive geometry and partitioning information.

When you restore onto a fresh drive, after a drive failure for example, it is a better idea to use fdisk, bsdlabel and possibly boot0cfg, in case you want a boot manager.

It is also possible to use gpart now. This is my preferred way to partition drives at the moment. For more information on how to use GPT partitions on i386 and amd64 and boot from them, read the article on my website: http://m8d.de/news/freebsd-on-gpt.php. It's a bit tricky, but you should understand what I do there rather than repeat the steps line by line.


----------



## graudeejs (Dec 15, 2008)

nakal said:

> I would not backup MBRs like you suggested, except you expect to restore things on the same drive again. MBR stores the drive geometry and partitioning information.
> 
> When you restore to a fresh drive, after a drive failure for example, it is a better idea to use fdisk, bsdlabel and eventually boot0cfg, in case you want a boot manager.
> 
> It is also possible to use gpart now. These is my preferred way to partition drives at the moment. For more information, how to use GPT partitions on i386 and amd64 and boot from them, read the article on my website: http://m8d.de/news/freebsd-on-gpt.php. It's a bit tricky, but you rather have to understand what I do there, not repeat the steps line by line.



Yeah, thank you for reminding me... (I really forgot about this.)
BTW, I don't back up my MBR; if anything I use sysinstall to rebuild the partitions on the drive and then press "w" in the fdisk editor.
It will write the partition table to disk and ask which loader to install; pick the MBR or the FreeBSD loader, and exit sysinstall.
Then I just use bsdlabel to rebuild the labels, and that is it.
After that, newfs and restore.


----------



## sim (Dec 20, 2008)

thortos said:

> How do you people handle the backups of your servers? I'm running a set of customized backup scripts per server that tar important directories and scp them to the backup server, starting and stopping daemons as needed, but obviously that's not for anyone with uptime requirements. I also have many servers running in VMware and use that to snapshot the VMs regularly and scp them to the backup server.



I backup my servers using rsnapshot from my archive server:

On each client server, a nightly cron makes a snapshot of each filesystem and mounts them on /snapped_fs (/snapped_fs/, /snapped_fs/usr/, /snapped_fs/var/ etc).  So I always have yesterday's complete filetree, mounted and frozen in time.  When my archive server connects for the nightly rsnapshot, it syncs the frozen tree, not the live tree. Filesystem snapshots are supposed to be consistent.

Just to be sure, another nightly cron also runs pg_dumpall. PostgreSQL dumps are point-in-time, consistent dumps which don't require the server to stop or lock any tables. I keep the last 15 dumps, and these are of course part of the filesystem snapshot so they get copied with rsnapshot.

It's getting late,  I wonder if that makes sense lol!

/sim


----------



## jadawin@ (Jan 26, 2009)

Why not use something like BackupPC?


----------



## varda (Apr 19, 2009)

Besides these methods, I frequently use plain tar for backup purposes:

```
# to file
cd /usr
tar --one-file-system -cjvf /path/to/usr-backup.tbz .
```


```
# mirroring entire partition
cd /usr
mount /dev/ad2s1f /mnt
tar --one-file-system -cf - . | tar -xvf - -C /mnt
```
Sometimes I use pax to clone a disk, after mounting the new partitions accordingly:

```
cd /; pax -p e -X -rw . /mnt; \
cd /var; pax -p e -X -rw . /mnt/var; \
cd /usr; pax -p e -X -rw . /mnt/usr; \
cd /tmp; pax -p e -X -rw . /mnt/tmp
```
Sometimes I use the cpdup utility to synchronise partitions:

```
mount /dev/ad2s1f /mnt
cpdup -vvv -x -i0 /usr /mnt
```
These methods seem faster to me. I may be wrong.

If I want to backup/mirror over the Internet I use rsync or FUSE sshfs. On the local network I mount storage from the remote machine using ggated/ggatec, and then everything proceeds as usual.


----------



## varda (Apr 19, 2009)

Forgot to mention disk partitioning. I keep it easy and simple:

```
# clear disk
dd if=/dev/zero of=/dev/ad2 bs=1m count=1
# initialize disk and create single slice
fdisk -BI /dev/ad2
# initialize slice and create single partition
disklabel -Bw /dev/ad2s1
# relabel slice to required partitioning
disklabel -R /dev/ad2s1 file
newfs /dev/ad2s1a
...
```
I manually create a simple partition description file and use it in the snippet above:

```
#	size	offset	fstype	[fsize bsize bps/cpg]
a:	1G	16	4.2BSD		# /
b:	2G 	*	swap		# swap
c:	*	*	unused
d:	4G	*	4.2BSD		# /tmp
e:	8G	*	4.2BSD		# /var
f:	*	*	4.2BSD		# /usr
```
Sometimes I enable journaling:

```
geom journal load
geom journal label /dev/da2s1f
newfs -J /dev/da2s1f.journal
mount -o async /dev/da2s1f.journal /mnt
# perform backup
```
And maybe add it to fstab:

```
# Device		Mountpoint	FStype	Options		Dump	Pass#
/dev/da2s1f.journal	/mnt		ufs	rw,async	2	2
```

These are just examples.


----------



## monkeyboy (Apr 22, 2009)

the bigger question for me is just *what* to make backups onto... tape technology doesn't seem to have kept pace with disk tech... but OTOH us ol'timers never did consider making "backups" onto disk to be "true" backups (they must be archivable, kept "forever", offsite, etc., etc.)


----------



## monkeyboy (Apr 22, 2009)

monkeyboy said:

> the bigger question for me is just *what* to make backups onto...


To pursue this a bit further... to make a "real backup" of a 1TB disk, which is available for about $100 these days, one needs a media form that can store that 1TB of data for no more than, say, $10-20, such that you can stow away a copy "forever" every week or few weeks (at least 10-20 times a year). Is there such a storage technology?


----------



## graudeejs (Apr 22, 2009)

I would rather make a mirror (RAID 1);
that way if one disk goes down, the other will keep working.

And don't forget to buy a good UPS.


----------



## monkeyboy (Apr 23, 2009)

mirror, RAID 1, is good... for what it is... but completely different from a backup or snapshot/archive. rm * anyone?

furthermore, if some software error corrupts or totally smashes that filesystem, guess what? you have TWO perfectly smashed copies of that filesystem...


----------



## graudeejs (Apr 23, 2009)

monkeyboy said:

> mirror, RAID1, is good... for what it is... but completely different than a backup or snapshot/archive. rm * anyone?
> 
> furthermore if some software error corrupts or totally smashes that filesystem, guess what? you have TWO perfectly smashed copies of that filesystem...



well, then you're back to plain backup/restore, but probably with backup levels.
You don't want to save a shitload of the same data every time... and then once in a while do a level 0 dump


----------



## fronclynne (Apr 24, 2009)

monkeyboy said:

> To pursue this a bit further... to make a "real backup" of a 1TB disk, which are available for about $100 these days, one needs a media form that can storage that 1TB of data costing no more than, say $10-20, such that you can stow away a copy "forever" every week or few weeks (at least 10-20 times a year). Is there such a storage technology?



Heh.  Good one.  Last time I even looked at a tape, 80G was close to $90, and even assuming larger & cheaper, a weekly backup of 1T would take about two weeks.

Looks like a 15pack of 25G blah-rays is >$60.  Man, it just gets worse all the time.

Have you looked at 3.5" floppies?  They should be pretty cheap.


----------



## monkeyboy (Apr 24, 2009)

killasmurf86 said:

> well then you're back to use of simple backup/restore, but probably with backup levels.
> You don't want to save shitload of same data every time... and then once in a while do level 0 dump


That's right... or at least one approach...

fronclynne said:

> Heh.  Good one.  Last time I even looked at a tape, 80G was close to $90, and even assuming larger & cheaper, a weekly backup of 1T would take about two weeks.
> 
> Looks like a 15pack of 25G blah-rays is >$60.  Man, it just gets worse all the time.


Well, I'm currently using DLT IV 40/80GB tapes, which you can get on eBay for about $3-4 each. That's still 10X more expensive than I'd want (on a per-TB basis), but worse, the unit size is too small. I'd settle for some kind of 200GB media at $100/TB (200GB real for $20).

I think this is a huge problem that, as you say, keeps getting worse. Namely, disk technology is far outstripping the development of suitable backup media. People buy these ultra-cheap 1TB disks thinking that they have 1TB of storage for cheap... but as any sysadmin knows, your storage is only as good as your backup strategy.

Looking at the "big boys" (e.g. NetApp, HP, "enterprise solutions"), they seem to be converging on "virtual tape libraries", which spread the architecture between disk and tape for backup. Mucho $$$ -- I haven't seen an OSS-type solution in the same vein (our university/health center spends millions of $$$ on such things just to store perhaps a PB -- in other words they spend perhaps $10K to store 1TB, NOT $100).

Then there's the ol' "can you expect to read it in 10-20 years"... I have a LOT of 20-year-old data that is still very valuable, but the only offline data that I expect to be able to read is on CDROM. I have other forms of archival media (8mm tape, MO), but the likelihood that I can read those things is pretty low, and 5-10 years from now, prolly close to zero.

I'd like to see some well-thought-out, AFFORDABLE, solution from the OSS community that addresses these issues. I think this is one of the hidden HUGE disservices that Microsoft has brought on to the computing world. MS has never considered proper backup/recovery to be an important part of computing. It is shameful that Windows never had a credible solution -- except perhaps those offered by 3rd parties. How many countless Windows users have faced a total data loss with "reinstall the OS" as their "recovery" solution -- pitiful.


----------



## graudeejs (Apr 24, 2009)

So what is so good about tape that you still use it?
[I have never ever used/seen tape, I'm just a desktop user]

I don't see a problem with keeping backups on HDDs, as long as you properly maintain them.

BTW, what do you store on 1TB?

[lol, I know it's not much once you have it. My old PC had 8GB and I thought it was a lot. Now I have ~400GB, and I can't believe I could ever store my data on an 8GB disk]


----------



## monkeyboy (Apr 24, 2009)

killasmurf86 said:

> So what is so good about tape, that you still use it?
> [I have never ever used/seen tape, I'm just a desktop user]
> 
> I don't seep problem keeping backups on HDD's, as long as you properly maintain them.
> ...


Pls understand that my/our strategies are "in transition" and may not make total sense now vs when they may have started  5-10 years ago, when things were different -- hence these questions in this thread...

I store many GBs of medical research imaging data, as well as the usual documents, papers, presentations, etc.

I don't believe I have ever totally lost a single file in 30 years. And the times I have lost the most recent copy of a file I can count on one hand.

So, tape... it USED to be that tape was 10-1000X cheaper than disk on a per-byte basis. It's not that I particularly care for tape over disk backup, but I care about PROPER backups. Here is a casual list of some of the requirements for backup:

- ability to restore the state of a filesystem (or single file) across the whole lifetime of that filesystem (or file), sampled (snapshot) at certain intervals. I used to keep "monthly"'s FOREVER. I think keeping at least yearlies forever would be a minimum, with exponential-like frequency for more recent snapshots. For example, every year, plus every month for the past 2 years, plus every week for the past 2 months...

- ability to restore the system from scratch with only modest effort (not reading in 100 tapes/floppies/whatever).

- ability to read backups at least for 20 years.

- offsite/multisite storage, so that a fire/theft of a single site doesn't wipe you out.

then some niceties: minimal operator intervention, minimal time required, minimal down time of system, online backups possible, etc.

What's wrong with disk? maybe nothing... it depends on what you mean by "disk backup"... does it meet all the criterion listed above?

What's wrong with simply copying one disk to another?

- you only get one copy (or N copies, where N is very small), which means only N snapshots in time.
- there is appreciable chance that you will not be able to read current disks 10-15 years from now,
- not that easy to store at a separate site (stacks of hard drives on a shelf in another building?),
- and somewhat fragile to transport around...

My main points are:
- when one buys a 1TB disk for $100, that isn't the same as 1TB of PROPERLY backed up storage, which may cost more like $1000...
- people don't have a real appreciation for proper backups -- it used to be at any well-run data center, if you lost a file from a few days ago, they would be able to restore it for you without any question or issue...
- now that tape is no longer so cheap, relatively, there seems to be few, well-thought-out tools, strategies and software for affordable but solid backups...

Unix dump/restore was great... but what tools exist now to address the changing landscape in media (disk cheaper than tape), while retaining the same priorities and requirements of true backups?


----------



## graudeejs (Apr 24, 2009)

heh heh...
I only started backing up my system about half a year ago....
Before that, I never made a single backup.

Probably it's because I still don't have too much valuable information.

I do backups mostly to save myself from recompiling FreeBSD over and over and over (not that I don't do that.... lol)


----------



## fronclynne (Apr 25, 2009)

Well, I just read today that the Blu-ray gang are supposedly going to have 400G media on the market this year or next, and 1T is supposed to be on the horizon.  So, the ol' WORM jukeboxes reborn?  In any case, so long as prices come down, it may be workable.

In the long term, I think archiving will be done in that area they're calling the "cloud" these days.  I have medium confidence in what google is doing with fault-tolerance and redundancy.  If it became possible to rent space from them (or amazon, or some other bunch of untrustworthy goons) reasonably and store data there no-questions-asked (your format, your encryption, their platters) it may well be as good as anything else that obviously won't matter when we're chained by the neck to some psycho on a motorbike.


----------



## graudeejs (Apr 25, 2009)

Blu-ray..... one scratch and .......

Cloud computing....
Check out the "network security monitoring with FreeBSD" video clip on the FreeBSD community page.


----------



## monkeyboy (May 15, 2009)

killasmurf86 said:

> So what is so good about tape, that you still use it?
> [I have never ever used/seen tape, I'm just a desktop user]
> 
> I don't seep problem keeping backups on HDD's, as long as you properly maintain them.


Just in case anyone still thinks that making "backups" on a 2nd hard disk constitutes a real backup strategy... this article from Slashdot reminds us why it isn't... and the subsequent discussion reiterates the points I was trying to make -- it ain't a "backup" unless it is 1) full, 2) on removable media, 3) offline, 4) offsite, 5) tested for actual restore...

These guys lost 13 YEARS of data cuz their "backups" weren't really backups...
===
Hacker Destroys Avsim.com, Along With Its Backups on Friday May 15, @01:17AM

"Flight Simulator community website Avsim has experienced a total data loss after both of their online servers were hacked. The site's founder, Tom Allensworth, explained why 13 years of community developed terrains, skins, and mods will not be restored from backups: 'Some have asked whether or not we had back ups. Yes, we dutifully backed up our servers every day. Unfortunately, we backed up the servers between our two servers. The hacker took out both servers, destroying our ability to use one or the other back up to remedy the situation.'"


----------



## graudeejs (May 15, 2009)

heh heh heh, that's why I bought my 16GB flash drive


----------



## bluetick (May 30, 2009)

I notice the current man page for dump has added an example of dumping to DVD:

```
/sbin/dump -0u -L -C16 -B4589840 -P 'growisofs -Z /dev/cd0=/dev/fd/0' /u
```

Mirror + rsync + dump = call me paranoid, but I do it.
Tape would be great but the price keeps it out of my reach.

Side note on dvd capacity article, 300 dvds on one disc.
http://news.bbc.co.uk/2/hi/science/nature/8060082.stm


----------



## graudeejs (Jun 29, 2009)

thortos said:

> This strategy will probably fail for every server being used more than marginally. Especially dumping databases that are in use (such as Postgres or mySQL data directories) will yield inconsistent results and most likely result in non-working databases after recovery.
> 
> While I am aware that important databases are to be replicated live onto backup servers, I want to illustrate that this dump-while-in-use strategy is best used for desktops or low-profile servers, not for heavily-used systems.
> 
> How do you people handle the backups of your servers? I'm running a set of customized backup scripts per server that tar important directories and scp them to the backup server, starting and stopping daemons as needed, but obviously that's not for anyone with uptime requirements. I also have many servers running in VMware and use that to snapshot the VMs regularly and scp them to the backup server.



I think making a UFS snapshot first and then dumping it would solve the problem, because it takes very little time to make a snapshot.
http://forums.freebsd.org/showthread.php?t=3317
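For completeness: `dump -L` already takes the snapshot for you on a mounted filesystem. A manual equivalent, for when you want to control the snapshot yourself, might look like this (a sketch only, run as root; the paths and the two-argument mksnap_ffs form are assumptions):

```shell
# freeze a point-in-time image of the mounted /usr filesystem
mksnap_ffs /usr /usr/.snap/dump_snap

# attach the snapshot file as a read-only memory disk
md=$(mdconfig -a -t vnode -o readonly -f /usr/.snap/dump_snap)

# dump the frozen snapshot instead of the live, changing filesystem
dump -0af /backups/usr.dump /dev/$md

# clean up: detach the memory disk and remove the snapshot file
mdconfig -d -u $md
rm -f /usr/.snap/dump_snap
```

The filesystem is only briefly locked while the snapshot is created; the long-running dump then reads consistent data.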


----------



## vsoto (Jul 25, 2009)

bluetick said:
			
		

> [cmd=]/sbin/dump -0u  -L -C16 -B4589840 -P 'growisofs -Z /dev/cd0=/dev/fd/0' /u[/cmd]



How can this command be modified to write a compressed backup to DVD?

V.


----------



## graudeejs (Jul 25, 2009)

Don't make simple things complex.
Write a script if you need one:


```
$ dump -0Lauf - /dev/da0s1a | gzip >> /path/to/dumps/root.dump.gz
$ growisofs -Z /dev/cd0 -R -J /path/to/dumps/root.dump.gz
```

Read
http://forums.freebsd.org/showthread.php?t=1195
about burning all kinds of stuff to CDs/DVDs.
It explains how to write/append to CD/DVD and much more.
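Whichever way you burn it, it's worth testing the compressed stream before (and after) writing the disc. gzip and bzip2 both have a -t mode that verifies the container without extracting; note this checks the archive itself, not the dump contents. A self-contained sketch (the file here is a stand-in created just for the demonstration):

```shell
# stand-in for a real compressed dump, so this example can run anywhere
printf 'pretend dump payload' | gzip > /tmp/root.dump.gz

# -t tests the integrity of the compressed stream without extracting it
gzip -t /tmp/root.dump.gz && echo "gzip stream OK"

# the equivalent for bzip2-compressed dumps is: bzip2 -t file.bz2
```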


----------



## jaymax (Aug 24, 2009)

An excellent HOW TO, but have I missed something? How do you back up and restore the root filesystem, /? The /tmp, /var, and /usr filesystems are straightforward once / is established. Therein lies my problem.


----------



## graudeejs (Aug 24, 2009)

Boot from the FreeBSD Fixit CD (or the FreeBSD DVD), newfs the root partition, mount it, and restore it.
You can't restore root while it's in use by the running system.


----------



## jaymax (Aug 24, 2009)

*Persistent "write failed filesystem is full" condition*

Thanks!
It works
But one minor caveat on restoring another filesystem
You had a note 

```
Important note: you need space in temp to be able to restore
if you run out of space in tmp, mount some filesystem somewhere and
create symbolic links from /tmp and /var/tmp to that mount point
```
What should the size of the /tmp filesystem be relative to the dump file being restored? I have a tmp filesystem of 0.5 GB that is linked to /tmp and /var/tmp via the linked directories /mnt/tmp/tmp and /mnt/tmp/tmp2 respectively. The file to be restored is ~240 GB; the attempt produced a trail of "write failed filesystem is full" messages on stdout.

I changed the link(s) to another filesystem on another disk with ~56 GB of available space and ended up with the same errors.

BTW: I am using Fixit on the Installation CD


----------



## graudeejs (Aug 24, 2009)

What is the size of the partition that you are restoring to?
I think it's smaller than necessary (I doubt it's because you don't have space in /tmp).

I haven't used dump/restore for some time now (using ZFS now), but I remember (if my memory isn't failing) that when you run out of space in /tmp you get a lot of errors during restore (it took me a while to figure out that I needed more space in /tmp).

Check whether your partition is large enough.
Remember you're restoring a ~240 GB dump (I assume that's the uncompressed size), which means your filesystem must be larger.... (No, I can't tell exactly how large.)
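One more thing worth knowing here: restore(8) keeps its state files under /tmp but honors the TMPDIR environment variable, so instead of the symlink trick you can point it at any filesystem with free space (a sketch; the mount points are assumptions):

```shell
# give restore's temporary state files somewhere roomy to live
mkdir -p /mnt/scratch/restoretmp
export TMPDIR=/mnt/scratch/restoretmp   # restore(8) honors TMPDIR

# then restore as usual
cd /mnt/target
bzcat /mnt/usb/ad0s1d.dump.bz2 | restore -rf -
```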


----------



## Free (Aug 25, 2009)

I have been following this; everything got copied, but it won't boot. =\
It's trying to find the kernel on ad0, but the disk with the new system is ad2.

How do I fix it?


----------



## graudeejs (Aug 25, 2009)

You need to mark ad2 bootable and remove the bootable flag from ad0.
For that use fdisk; sorry, I can't tell you much more, I don't use GPT and I'm not very familiar with fdisk.


As an alternative, you can use sysinstall's fdisk (just be careful not to mess things up):
sysinstall -> Configure -> Fdisk, select ad2, pick the slice, and press S; at the left a bootable flag (A, or something like that) will appear.
Then press W to save the changes.
For ad0 do the same, but in this case the slice will lose the A flag; again press W.

Then press Cancel all the way out to exit, or hit Ctrl+C and abort.

Hopefully this will do the trick.

P.S. Make sure you can boot from ad2. There might be some BIOS setting preventing it (but I highly doubt this is the case. I had this problem once, and to solve it I had to change some BIOS settings related to RAID [on the motherboard]; anyway, that's probably not your case).

Good luck, and don't mess up 
Perhaps, if you can, try it in an emulator first.


----------



## Free (Aug 25, 2009)

> sysinstall ->> configure > fdisk


It shows NOTHING there O_O


```
FreeBSD Disklabel Editor


Part      Mount          Size Newfs   Part      Mount          Size Newfs
----      -----          ---- -----   ----      -----          ---- -----












The following commands are valid here (upper or lower case):
C = Create        D = Delete   M = Mount pt.            W = Write
N = Newfs Opts    Q = Finish   S = Toggle SoftUpdates   Z = Custom Newfs
T = Toggle Newfs  U = Undo     A = Auto Defaults        R = Delete+Merge

Use F1 or ? to get more help, arrow keys to select.
```

What should I do?

PS: I'm 100% sure the partitions are there; I know because the system is booted from them right now.
(0:ad(2,f)/boot/loader)


----------



## jb_fvwm2 (Aug 25, 2009)

It sounds like the same "nothing to install to" problem, where there were partitions on ad10 or da10 (SATA on a PCI controller) and an 8-13-2009 disc1 sysinstall on CD-R. My recollection of which screen it was is already vague, but it is highly likely the one shown in the previous post. At "time to commit to disk", a "no device node found for /dev/ad10 (da10)" or nearly-the-same error occurred (I tried about 6 times from the start). Another thread yesterday/today mentions that error...


----------



## graudeejs (Aug 26, 2009)

Perhaps try gpt and/or gpart if you're using FreeBSD 7,
or gpart if you're trying FreeBSD 8.

This is really weird.


----------



## jaymax (Aug 29, 2009)

*Persistent 'root restore' problem*

My system disk was set up with /, /var, /tmp, and /usr mount points, which I would like to maintain for more than sentimental reasons: there are many absolute paths to files using this layout. Now I am rebuilding the system disk. I am using Fixit from the distribution disk to boot the system, and have created the new partitions, filesystems, etc., and installed the base files.

The back up files are on another disk.

Fixit only permits the restored files to be mounted under /mnt/, so there are /mnt/ad0s1a, /mnt/ad0s1d, /mnt/ad0s1e, and /mnt/ad0s1f respectively.

After rebooting I can relatively smoothly mv the /mnt/* files to their respective locations, except for the /* files: there is a string of messages about files existing or being busy, etc. So many files, especially the critical /etc/* files, are not installed.

If I omit the base installation from the setup, then I get "/bin/cp missing" on the mv attempt; indeed there is no /bin directory.

What is the correct way of doing this restore? I am sure there is one, but I don't have it.

Thanks


----------



## graudeejs (Aug 29, 2009)

I don't think I understand what you're really trying to do.

If you want to move a system from one disk (partition/slice, etc.) to another, you should not use mv/cp; instead use a combination of dump and restore.

Mount the newly formatted destination and the source (for this I create /mnt/src and /mnt/dst directories), cd to /mnt/dst, and execute:

```
# dump -0Laf - /dev/ad0s1d | restore -rf -
# cd /mnt
# umount /mnt/src /mnt/dst
```
Then repeat for each filesystem you want to move.
Read update 1 in the 1st post.

To restore from a dump onto another filesystem: again, newfs the new target, mount both filesystems on /mnt/src and /mnt/dst (or whatever you prefer), cd to /mnt/dst, and do what's written under *Restoring system* in the 1st post.


Sorry if this is not what you wanted to know, but I didn't really understand what you want to do.
Hope this helps; if not, let me know here.


----------



## jaymax (Aug 29, 2009)

I am trying to update my OS [v6.0]; I made a prior dump of /, /var, /tmp, and /usr. In the process of updating, the system disk got clobbered and I am now trying to restore it.

How do I get the original / dump (root dump) onto the new / partition slice? It is in /mnt/ad0s1a after Fixit.


----------



## graudeejs (Aug 29, 2009)

Well, if you newfs'ed the filesystem that contained the dumps, then there's no way....
Otherwise: newfs /dev/ad0s1a, mount the filesystem with the dumps, mount /dev/ad0s1a and cd to it.... and read the *Restoring system* paragraph in the 1st post; it explains everything.


----------



## saxon3049 (Sep 2, 2009)

killasmurf - really interesting for the desktop bsd user who wants a short term back up.

Monkey - I do agree with you on most if not all the points you have raised; a local backup is not a real backup, it is a temporary solution to a problem.

Here is my back up methodology:

I clone all my data drives once a month onto 3 disks: one goes to a safe deposit box in a bank, another goes to a friend of mine in Manchester (roughly 25 miles away from me), and another sits in a fireproof safe in my home.
I have tape backups every 2nd day of my mission-critical files; one gets dropped in a bank deposit safe (every 7 days I go and collect the last week's tapes), one set goes in the fireproof safe in my home, and another goes to a relative's house, someone who drives past me on the way to and from work and is also in IT.
I have nightly local backups of my main workstations to a hard drive.
I have copies of the really important stuff stashed at various places, updated as needed.
For all my financial records like receipts, invoices, tax records, my will, etc., I have hard copies and digital copies stored with my solicitor, my bank deposit box, my family deposit box, my fireproof safe, and 3 relatives.

Some might call that excessive, but when I started in IT professionally I worked for a company that lost 6 years of data (ironic, really, as they were a backup holding company for legal records) and were really lost without it; if it wasn't for one guy who had a copy of everything they would have gone bust. I don't want that happening to me, so I have copies of copies.


----------



## routers (Sep 10, 2009)

my backup system

160 GB SATA 2 disk


```
dd if=/dev/ad1 of=/dev/ad3 bs=8192

dd processing 09-Sep-2009-01:30:01  starting
dd processing 09-Sep-2009-03:51:57  finished
```


----------



## graedus (Oct 13, 2009)

Though I find your guide very enlightening and concise, personally I find myself very uncomfortable with dump.

As other users have stated, doing either one-shot or incremental backups with tar, copying/cloning entire filesystems with tar, or plain old dd, seems to work best for both low and high end systems.

My strategy with one-shot short-term backups for upgrading/testing purposes is to move filesystems (copying with tar). This provides two benefits: 0. you just need enough spare contiguous backup space, regardless of disk layout or geometry; 1. you can access your files right away (so you can grab your kernel or config files on the fly). Afterwards, you can do a proper backup.

Properly stored and managed, external HDDs are a good way to do backups with tar (multiple copies are a must for serious backups). But I wouldn't count on a lifetime of 20-something years. 10 years is more than enough; afterwards, move all your data to a more recent (and higher-capacity) technology. I went from floppies to CDs, from CDs to DVDs (sometimes by storing .iso files), and from DVDs to HDDs (when storing to DVDs didn't make the cut).


----------



## graudeejs (Oct 14, 2009)

about tar: you can do that with dump/restore as well.

Anyway, I don't use UFS anymore....

ZFS for life


----------



## Seeker (Apr 26, 2010)

killasmurf86 said:
			
		

> Blue-ray..... 1 scratch and .......
> 
> cloud computing....
> Check "network security monitoring with freebsd" video clip in Freebsd community page



Where is that "Freebsd community page"?
Link to that video clip?


----------



## graudeejs (Apr 26, 2010)

http://www.freebsd.org/community.html
>>
http://www.youtube.com/results?search_query=freebsd&search_type=&aq=f
>>
quick search on "network security monitoring with freebsd"
>>
http://www.youtube.com/watch?v=UM4ZrsOjmNQ


----------



## Seeker (Apr 26, 2010)

Oh, I thought I'd find the video link at your first posted link (the community page), but it wasn't there.
The last (third) link you posted I had found on my own via Google.
I thought it was something similar, but not the same.

Thanks.


----------



## synonymous (Feb 17, 2011)

Hi,
Thanks so much for this wonderful post.
I tried to use your method on a FreeBSD LiveCD, and it did not work. I got to the point where I am able to create the slices in the partition (ad0s1), and mount them to /mnt/tmp. However, as soon as I try to "cd" into it, I get an error. I tried your method on a FreeNAS iso (downloaded from http://sourceforge.net/projects/fre...2/FreeNAS-i386-LiveCD-0.7.2.5543.iso/download). I am definitely doing something incorrect (pardon my lack of sufficient literacy on BSD - I have been conditioned in the M$FT world for the past 10 years). 
This is what I am doing:
My Laptop: Very old Dell Inspiron machine with 20GB HDD, and 256 MB RAM. (I want to repurpose this old machine as a FreeNAS server for my home PCs)
- Boot from the FreeNAS Live CD (this is based on FreeBSD variant - nanoBSD) (correct me if I am mistaken)
- Once the system is loaded, I choose option "6" - "Shell prompt"
- At the shell prompt I did a "df -h" and got the mounted file system.
- Using fdisk, and bsdlabel I created a slice, and partitioned it.
- Then I mounted the first partition (mount /dev/ad0s1a /mnt/tmp)
- However, I could not cd into /mnt/tmp - it errors out with "file or device not found". 

Can you please advise as to what I am doing wrong? Or perhaps my question should be: do dump and restore work on systems booted from a LiveCD (since the LiveCD filesystem is loaded into RAM)?

Any insight would be helpful.
Thanks.
-SR


----------



## graudeejs (Feb 17, 2011)

Well, if you did exactly what you described, then it looks like you did everything correctly.
I haven't used FreeNAS or NanoBSD, so I don't know their specifics.

dump and restore should work from anything (unless you try to write to a read-only filesystem) 

You said you sliced and formatted your disk and mounted it to /mnt/tmp/?
Did you create /mnt/tmp/? It won't be created automatically, but you should get an error if you try to mount onto a non-existing directory (maybe this doesn't apply on FreeNAS).


----------



## synonymous (Feb 19, 2011)

Thanks for your insight Killasmurf86.

I guess you may have been correct. Anyhow, I retraced my steps once again, and this time everything did go through fine. However, FreeNAS did not boot as expected (specifically, it did go through the boot process, but it failed to mount root). Anyhow, after a long investigation (a downside of not having formal FreeBSD knowledge), I was finally able to figure out another method to make this work.

I thought to myself: if the LiveCD is able to install to a disk (it "does" wipe out other partitions), then I should be able to modify that install script so that I can install FreeNAS to a slice. I searched for the install script in the FreeNAS source tree and voilà, found the file I was looking for: http://support.freenas.org/browser/freenas/legacy/etc/inc/install.inc

This gave me the steps on how to install it to a HDD - I followed the scripted steps in "function install_harddrive_image($harddrive)". I am happy to report that FreeNAS is running in a dual-boot / multi-boot scenario (though officially it is not supported).

Needless to say, your post was quite helpful to get me started.

-SR


----------



## graudeejs (Feb 28, 2011)

Glad to hear


----------



## Caliante (Mar 31, 2011)

Thanks for writing this tutorial; I plan to use it step by step when I have enough nerves to try it :e

But googling around, there are also some remarks about first steps that you perhaps might want to include, so your post becomes a 'really really really contains every step' guide?

For example:
- One seems to need the label from each disk and slice, and the contents of /etc/fstab?
- How to verify the dump just being made? (I mean, a backup is useless if, once you need it, it can't be restored).
- How to exclude directories from being dumped?
-?


----------



## graudeejs (Apr 1, 2011)

Caliante said:
			
		

> Thanks for writing this tutorial; I plan to use it step by step when I have enough nerves to try it :e
> 
> But on googling around there are also some remarks about first steps that you perhaps might want to include so your post becomes a 'really really really contains every step' guide?
> 
> ...


That's too much info; we have slices, geli, gpt... one must know what one is doing, after all.



			
Caliante said:

> - How to verify the dump just being made? (I mean, a backup is useless if, once you need it, it can't be restored).


md5(1) or sha256(1) or any other



			
Caliante said:

> - How to exclude directories from being dumped?


There is manual for such details 
dump(8), restore(8)
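To make those pointers concrete, the verification might look like this (a sketch; the paths are assumptions): store a checksum when the dump is made, compare it later, and ask restore to list the table of contents as a readability test:

```shell
# at backup time: record a checksum next to the dump
sha256 -q /backups/ad0s1d.dump > /backups/ad0s1d.dump.sha256

# later: recompute and compare
[ "$(sha256 -q /backups/ad0s1d.dump)" = "$(cat /backups/ad0s1d.dump.sha256)" ] \
    && echo "checksum OK"

# list the dump's table of contents; errors here mean it is unreadable
restore -tf /backups/ad0s1d.dump > /dev/null && echo "dump listable"
```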


----------



## Caliante (Apr 1, 2011)

killasmurf86 said:
			
		

> That's too much info, we have slices, eli, gpt... one must know what he's doing after all
> 
> 
> md5(5) or sha256(5) or any other
> ...



I beg to differ, there is a manual for [cmd=]$ dump -0Lauf /path/to/backups/ad0s1d.dump /dev/ad0s1d[/cmd] as well, to name just one. 

But no problem I will do it myself.


----------



## monkeyboy (Apr 1, 2011)

AFAIK...


			
Caliante said:

> For example:
> - One seems to be needing the label from each disk and slice and the content of /etc/fstab?



Not sure what you are asking. The dump program needs to know the /dev entry name of the disk/partition holding the filesystem that you want to dump. This is easily gotten from the *df* command, as well as from /etc/fstab, although fstab is just a static table and might not reflect the actually mounted filesystems and their underlying device node names; that is, /etc/fstab could be wrong. Also, fstab only lists those filesystems that would automatically be mounted at boot time, plus random others, depending on what the sysadmin has stuck in there. But filesystems can live in other device partitions and never get listed in fstab, yet they would still be eligible for dump. The upshot is that dump simply needs an (accurate) device node name with a valid filesystem in it.



			
Caliante said:

> - How to verify the dump just being made? (I mean, a backup is useless if, once you need it, it can't be restored).



I don't know of any simple way to verify the dump other than to restore it into a spare partition and see if you like the result. But I would definitely try to catch all I/O errors from dump, and dump tries hard to flag those things, so this would catch most problems. And you can always do a [cmd=]dd of=/dev/null bs=XX if=TAPEDRIVE[/cmd] to make sure that you can read the entire dump from the media.



			
Caliante said:

> - How to exclude directories from being dumped?



I don't know of a way to do that, simply because dump does its work OUTSIDE of the filesystem. It is an inode-level backup program and has no real knowledge of directories per se. NB that it also backs up FILESYSTEMS, which are only *parts* of the tree. There is no way to tell dump to back up an entire system (unless there is only one filesystem, which is rare).

The easiest way to understand this is to look at the output of *df*. It lists all MOUNTED filesystems. Basically, dump operates on ONE and only one of those lines of df's output. So if /var is a separate filesystem (and therefore has its own line entry in df), then you can dump it, or not. But there is no way to, for example, back up only /usr/lib (or skip it), since /usr/lib is rarely its own filesystem.

If you want to do backups WITHIN the entire file space, choosing directories at will, then use a file-space-level program like *tar* or *cpio*. These programs let you say which directories to archive.

Here is an example of df output:


```
$ df
Filesystem      1K-blocks     Used    Avail Capacity  Mounted on
/dev/da0s1a        126702    74398    42168    64%    /
devfs                   1        1        0   100%    /dev
/dev/da0s1d        253678   172438    60946    74%    /var
/dev/da0s1e        126702    11316   105250    10%    /tmp
/dev/da0s1f       2939197  2162978   541083    80%    /usr
/dev/da1s1d       3107529  2561610   483768    84%    /home
```

On this system, you can see there are FIVE mounted filesystems (plus /dev, which doesn't count). A single *dump* command deals with ONE and only one of those five filesystems. You can choose to dump as few or as many as you wish, but each one will require its own dump command. You cannot choose to dump only parts of one of those five filesystems. If you are using tape media, a common technique is to append multiple dumps as separate TAPE FILES, all on the same tape, like this:


```
# dump 0Labf 100 /dev/nsa0 /dev/da0s1a
# dump 0Labf 100 /dev/nsa0 /dev/da0s1d
# dump 0Labf 100 /dev/nsa0 /dev/da0s1e
# dump 0Labf 100 /dev/nsa0 /dev/da0s1f
# dump 0Labf 100 /dev/sa0 /dev/da1s1d
```

They will all go on one tape (presuming the tape is large enough, else dump will automatically prompt for a new tape). Note that the no-rewind device /dev/nsa0 is used for all but the last dump.

To restore only the 3rd filesystem, for example, you can use the mt command to skip tape files:

`# mt -f /dev/nsa0 fsf 2`

will position the tape at the beginning of the 3rd dump, ready for the restore, in this case, of the /tmp filesystem.

`# restore ivbf 100 /dev/nsa0`

I like to use the interactive mode of restore, because you get to see what is going to happen before it happens...


----------



## graudeejs (Apr 1, 2011)

If you want to skip some parts of file system tree when you make backup with *dump*, you can set nodump flag on that directory with chflags(1) prior to dumping


----------



## monkeyboy (Apr 1, 2011)

killasmurf86 said:
			
		

> If you want to skip some parts of file system tree when you make backup with *dump*, you can set *nodump* flag on that directory with chflags(1) prior to dumping



Good to know, thanks, but does *chflags nodump* allow one to block off a dump of an entire directory and all its subdirectories? That would mean that dump would need to build an internal directory tree, which is quite counter to the (classical) design of dump. I can see that chflags can easily block the dumping of a particular file (inode), but that's pretty limited in usefulness (maybe good for bad block inodes though...)


----------



## wblock@ (Apr 1, 2011)

dump(8) does build a directory tree.



> Directories and regular files which have their "nodump" flag (UF_NODUMP)
> set will be omitted along with everything under such directories, subject
> to the -h option.


----------



## monkeyboy (Apr 2, 2011)

wblock said:
			
		

> dump(8) does build a directory tree.


hmmm.. I guess I should try it...

It still would seem to have some problems/issues... the NODUMP flag resides at the inode level, no? So then what happens if a file (inode) is flagged as NODUMP but is multiply linked into other directories which are not NODUMP flagged? Those directories will be missing a file that probably should be there, or not...

Or what happens if there is a filesystem inconsistency and there is, for example, a stray link in a NODUMP directory back to root... could get nasty.

Here's what McKusick said a while ago, but perhaps dump has changed since then...


> The dump program runs on the raw disk partition dumping sequentially by inode number. So, it has no idea of the file-tree hierarchy. Thus any propagation of the "nodump" flag would have to be done by the filesystem (or by using a different archiving program)... (i.e. chflags -R nodump manually)
> 
> Kirk McKusick



===testing

Indeed I think I have found a problem with the nodump flag:


```
mkdir test
mkdir test2
cp bigfile test
ln test/bigfile test2
chflags nodump test
dump 0Lahf 0 /tmp/dumpfile DEV_THIS_FS
```

and the dump fails to contain test2/bigfile even though there is no nodump flag on bigfile, only on the test directory. This is wrong, in my view...


----------



## bbzz (May 3, 2011)

That's because dump doesn't consider nodump flags when doing a level 0 dump. Use level 1 (or above), for example:

```
dump -1aLu -f /backup/root.dmp /
```


----------



## graudeejs (May 3, 2011)

bbzz said:
			
		

> That's because dump doesn't consider nodump flags if it is dumping level 0 snapshot. Use level 1 (or above), for example:
> 
> ```
> dump -1aLu -f /backup/root.dmp /
> ```



A suggestion on dump flags:

`# dump -1Lauf ....`
^^ very easy to remember


----------



## wblock@ (May 3, 2011)

bbzz said:
			
		

> That's because dump doesn't consider nodump flags if it is dumping level 0 snapshot.



-h0 overrides that.
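Putting this exchange together, a sketch of the flag-based exclusion (the directory and device names are made up):

```shell
# mark a tree as not worth backing up; -R recurses into it
chflags -R nodump /usr/obj

# ls -lo shows the flags column, so you can confirm the flag took
ls -ldo /usr/obj

# -h 0 makes dump honor the nodump flag even at level 0
dump -0 -h 0 -Lauf /backups/usr.dump /dev/ad0s1f
```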


----------



## bbzz (May 3, 2011)

Hmm, so it does. Missed that 'h'.


----------



## ethoms (Aug 13, 2011)

*UFS2? and backup all slices*

Hi, I'm only familiar with ufsdump/ufsrestore on Solaris 10. I found it awesome for bare-metal system-state restores. Still trying to get comfortable with ZFS on root; in fact I'm considering using only UFS2 for root + OS-related filesystems.

Does dump/restore work with the latest UFS2 (UFS+J, UFS+S, etc.) filesystems? Sorry, I'm new to FreeBSD.

Also, on Solaris ufsdump uses the special reserved slice 2 to back up the entire disk. Then you just partition and format the new slices, move to /, and ufsrestore the whole lot. Worked well for me. Does FreeBSD have an equivalent of slice 2 (the backup slice)?


----------



## graudeejs (Aug 13, 2011)

It works fine.

It should also work with UFS2 SU+J when that becomes available.

Not sure what slice 2 is on Solaris.


----------



## ethoms (Aug 17, 2011)

Thanks for confirming that dump/restore works on current UFS2 filesystems.

In Solaris UFS, slice 2 is a reserved slice, slice 0 is usually / (root), slice 1 is swap, /usr is commonly put on s3, /var on s4 etc.

If your HD is c0t0d0 (controller zero, terminal zero, disk zero) then c0t0d0s0 is root filesystem, c0t0d0s1 is swap, c0t0d0s3 may be /usr, c0t0d0s5 may be /home (actually /export/home on Solaris). But c0t0d0s2 relates to the whole disk, sometimes called overlap. So `ufsdump -0f /mnt/backup.dump /dev/dsk/c0t0d0s2` will back up the entire disk.

I guess FreeBSD doesn't have an overlap slice. Not a big deal, probably; I just have to back up the slices separately.
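Right; lacking an overlap slice, each UFS partition gets its own dump command. A little loop can at least generate the commands (a sketch; the ad0s1 partition letters are assumptions, with b skipped as swap). Pipe the output to sh, or paste the commands, when doing it for real:

```shell
# print one dump command per UFS partition (b is swap, so it is skipped)
for part in a d e f; do
    echo "dump -0Lauf /backups/ad0s1${part}.dump /dev/ad0s1${part}"
done
```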

Would this be a good layout for a FreeBSD 8.2/9 installation?:


```
da0s1a         => / (UFS2)
da0s1b         => swap
da0s1c         => /usr (UFS2)
da0s1d         => /var (UFS2)
da0s2          => data (zpool)
data/usr/local => /usr/local  (zfs)
data/usr/home  => /usr/home (zfs)
data/mysql     => /var/db/mysql (zfs)
```
...etc...


----------



## graudeejs (Aug 17, 2011)

In FreeBSD, if you're using bsdlabels (bsdlabel(8)), then you have the *c* label, which is unused; however, I don't know what its purpose is...
I don't think dump would work on it.


----------



## wblock@ (Aug 18, 2011)

c is reserved for the whole disk.  No idea what practical use it has today, if any.


----------



## rainbowcrypt (Sep 10, 2011)

Hi,
I'm new to FreeBSD and have some trouble restoring my backup. I'm in single user mode; I format my partition with [cmd=]newfs -U /dev/ad....[/cmd] and mount it. The problem is that this partition was /usr, and it seems binaries like gzcat were there... So I have no way to unzip my compressed backup file!

Is there a way to unzip it without needing my other OS (Linux/Mageia) and enough space on it to temporarily store the uncompressed dump? (My external backup hard drive is FAT formatted, so it can't deal with files bigger than 4 GB...)

Thanks


----------



## graudeejs (Sep 10, 2011)

Yes, gunzip was in /usr/bin/.

You can download the FreeBSD DVD or Fixit CD, burn it, and boot from it.
There you can go into Fixit mode, and from there you can fix just about anything, as all the tools you need will be available.

I recommend keeping /usr on the root filesystem; there's not much point in separating it. /usr/local, however, is another case, as it contains stuff installed from ports.


----------



## rainbowcrypt (Sep 10, 2011)

First, thanks for the fast answer!

/usr and / are separate filesystems because I strictly followed a "how to install FreeBSD" guide.  On Linux I used to make only 2 partitions: / and /home.

BTW, my problem is not there, since I do dump/restore just to be able to resize my UFS partitions. It seems to be the best (only?) way to do that. So one day in the future I will have to restore the root directory too!

Do you think I can do all this stuff directly from the install CD? It would surely be easier, since the partitions won't be mounted read-only (as my root directory is now...) and I will have all the tools needed!

thanks


----------



## graudeejs (Sep 10, 2011)

The install CD might not contain Fixit mode.... (the DVD and Fixit CD do.)
Anyway, there shouldn't be any problems.

P.S.
Some defaults should be overlooked (I think)


----------



## wblock@ (Sep 10, 2011)

mfsBSD is a lot nicer to use than the livefs CD.


----------



## rainbowcrypt (Sep 10, 2011)

Thanks for the idea.
It was hard to find a LiveCD that would boot my machine, but Frenzy 1.3 did. So I could format and restore everything. :beer

wblock: I saw your site during my search, but when I saw the size of the .iso, I was afraid some tools would be missing.


----------



## hockey97 (Apr 9, 2012)

Hi, I've been reading this thread. I need to come up with a backup solution for my server.

I need to back up everything to a USB external hard drive. The hard drive is formatted with FAT32, since I currently use it with Windows. I want to use it to back up my server, which is running FreeBSD 8.1. I want to do the backups soon.

I want to install FreeBSD 9.0, well, upgrade my system. Is it worth upgrading?

I've seen many users on these forums complaining about hardware not working after installing FreeBSD 9.0 or upgrading from 8.0 to 9.0.

Yet first off I would like to know what needs to be typed to back up the whole filesystem/hard drive.

I need a complete backup where I know 100% for sure that if anything goes wrong I can rely on this backup to bring my server back to where it should be.

I do know I need to use a dump level 0. I just need to know how I can copy everything from root to my external hard drive.


----------



## josefernando (Oct 8, 2012)

It isn't working when I try to restore the MBR; I have to set bs to 512K or it just doesn't appear to copy the partition table. I'm checking it by using ls on /dev.

Basically, when I use "dd if=/dev/ada0 of=/dev/ada1 bs=512 count=1", all that appears in /dev is /dev/ada1, but when I set bs to 512K, the partitions appear; however, I can't mount them anywhere.

Can anyone help me?

PS: I'm new to FreeBSD, and I'm not a native English speaker.


----------



## wblock@ (Oct 9, 2012)

Don't use 512k for a block size.  Blocks are 512 bytes, not 524,288 bytes.  But don't use dd(1) to write an MBR, either.  Use gpart(8) backup and restore.
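For reference, the gpart(8) route looks roughly like this (a sketch; the disk names follow the post above):

```shell
# save the partition table of ada0 in gpart's text backup format
gpart backup ada0 > ada0.table

# re-create the same layout on ada1 (-F destroys any existing scheme first)
gpart restore -F ada1 < ada0.table
```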


----------

