# Very slow dump | restore



## Anonymous (Nov 21, 2010)

Hi,

I started a dump | restore of approx. 2.0 GB of data from the /usr UFS partition of an unmounted secondary SATA HD to a 2.6 GB /usr UFS partition on a USB memory stick. It has been running for more than 24 h now.

Before this, using the same procedure, I did a dump | restore of 177 MB from the /-partition and of 62 MB from the /var-partition, from the same HD to the respective partitions of the same USB memory stick, and in those cases the transfer rates were reasonably fast.

Here are the sizes of the HD's partitions to be backed up, as reported by df -h once it was mounted:

```
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad6s1a    496M    177M    279M    39%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/ad6s1e    496M     14K    456M     0%    /tmp
/dev/ad6s1f    573G    2.0G    525G     0%    /usr
/dev/ad6s1d    1.9G     62M    1.7G     3%    /var
```

Here is the partition info of the USB memory stick as reported by gpart show da0s1:

```
=>      0  7831467  da0s1  BSD  (3.7G)
        0   393216      1  freebsd-ufs  (192M)
   393216  1572864      2  freebsd-swap  (768M)
  1966080   196608      4  freebsd-ufs  (96M)
  2162688   196608      5  freebsd-ufs  (96M)
  2359296  5472171      6  freebsd-ufs  (2.6G)
```

The respective command sequences were:

backing up / (177 MB):

```
mount /dev/da0s1a /mnt; cd /mnt
dump -0af - /dev/ad6s1a | restore -rf -
cd ..; umount /mnt
```
DUMP: finished in 154 seconds, throughput 1166 KBytes/sec

backing up /var (62 MB):

```
mount /dev/da0s1d /mnt; cd /mnt
dump -0af - /dev/ad6s1d | restore -rf -
cd ..; umount /mnt
```
DUMP: finished in 40 seconds, throughput 1593 KBytes/sec

backing up /usr (2048 MB):

```
mount /dev/da0s1f /mnt; cd /mnt
dump -0af - /dev/ad6s1f | restore -rf -
cd ..; umount /mnt
```
But this one reported after 24 h:
DUMP: 92.31% done, finished in 1:54 at Sun Nov 21 20:43:06 2010
=> throughput 0.9231*2048*1024/24/3600 = 22.4 KBytes/sec
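That arithmetic can be double-checked with a quick awk one-liner (same figures as above: 92.31% of 2048 MB of data, moved in 24 h):

```shell
# Effective dump rate: fraction done x partition usage (in KB),
# divided by the elapsed seconds
awk 'BEGIN { printf "%.1f KBytes/sec\n", 0.9231 * 2048 * 1024 / (24 * 3600) }'
```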

After more than 24 h the reported CPU time is quite low:

```
root   1258  1207  1258  1207    2 I+     0    0:00.41 dump -0af - /dev/ad6s1f (dump)
root   1262  1258  1258  1207    2 S+     0    0:17.56 dump: /dev/ad6s1f: pass 4: 94.29% done, finished in 1:29 at Sun Nov 21 22:01:35 2010 (dump)
root   1263  1262  1258  1207    2 S+     0    0:10.39 dump -0af - /dev/ad6s1f (dump)
root   1264  1262  1258  1207    2 S+     0    0:10.40 dump -0af - /dev/ad6s1f (dump)
root   1265  1262  1258  1207    2 S+     0    0:10.40 dump -0af - /dev/ad6s1f (dump)
root   1259  1207  1258  1207    2 D+     0    1:22.52 restore rf -
```

So, why is backing up /usr so terribly slow? What did I do wrong, and what can I do to get /usr backed up at a reasonable rate in a reasonable time? I had the same problem in the past, when backing up the primary internal HD to a secondary internal HD.

Many thanks for any suggestions.

Best regards

Rolf


----------



## graudeejs (Nov 22, 2010)

Why on earth do you dump and immediately restore onto your backup media? That is plain wrong....
http://forums.freebsd.org/showthread.php?t=185


----------



## danbi (Nov 22, 2010)

It is very useful to mount the new (empty) filesystem with the async flag, because it doesn't yet contain anything valuable and may be recreated by repeating the procedure. The async flag will speed up restore very much, especially if you have lots of small files and especially when you restore to flash media where write speed and IOPs is severely limited.


----------



## Anonymous (Nov 22, 2010)

killasmurf86 said:

> Why on earth do you dump, and immediately restore on your backup media? That is plain wrong....



Because people keep suggesting this.

killasmurf86 said:

> http://forums.freebsd.org/showthread.php?t=185



Here is a quote from your write-up:

killasmurf86 said:


> Ok, this is defiantly worth writing... especially for new users
> Here i will cover how to backup/restore (to file) FreeBSD using native utilities called *dump *and *restore*
> ...
> 
> ...



`dump -0Lauf - /dev/ad1s1a | sudo restore -rf -`

This is almost exactly what I did.

I am new to FreeBSD, and I hardly have an opinion of my own on the correct way of cloning a disk to another volume. All I know now is that the way I did it was impractically slow. Hence my question on how to do it better.

Anyway, many thanks for your reply.

Best regards

Rolf


----------



## graudeejs (Nov 22, 2010)

Yes, but I think you didn't understand the purpose of that command....
For regular backups, don't do that....
Now, if you wanted to move a filesystem from one disk to another (for example, if one disk is starting to die and you have different partitions, or you don't want to use tar), then you can use dump ... | restore.... It was just to show that you could do that.....

You should do something like

`# dump -0Lauf - /dev/ad0s1d | bzip2 > /path/to/backups/ad0s1d.dump.bz2`

Now, if you have a recent FreeBSD, I would use xz instead (better compression, faster decompression, etc.).

backup:
`# dump -0Lauf - /dev/ad0s1d | xz > /path/to/backups/ad0s1d.dump.xz`

restore: cd to where you want to restore and, when you need to, run
`# xzcat /path/to/backups/ad0s1d.dump.xz | restore -rf -`

well something like this
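To make the mechanics concrete: the same pipe structure can be exercised with an ordinary file standing in for the dump stream (the filenames below are made up for illustration, and this assumes xz is installed):

```shell
# Stand-in for the dump stream: any byte stream behaves the same in the pipe
printf 'dump stream placeholder' > /tmp/stream

# analogous to: dump -0Lauf - /dev/ad0s1d | xz > /path/to/backups/ad0s1d.dump.xz
xz < /tmp/stream > /tmp/ad0s1d.dump.xz

# analogous to: xzcat /path/to/backups/ad0s1d.dump.xz | restore -rf -
xzcat /tmp/ad0s1d.dump.xz
```

The round trip returns the input bytes unchanged, which is all restore needs from the decompression stage.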


----------



## Anonymous (Nov 22, 2010)

danbi said:

> It is very useful to mount the new (empty) filesystem with the async flag, because ...



Danbi,

Many thanks for this hint. I will test this the next time I have to do a backup (clone).

Best regards

Rolf


----------



## Anonymous (Nov 23, 2010)

killasmurf86 said:

> Ye, but, I think, you didn't understand the purpose of that command....



Aldis!

Please don't get me wrong, I really appreciate your valuable posts. As a matter of fact, I was aiming for volume cloning in one shot, and dump | restore did in the end do what I wanted, only extremely slowly. After 29 hours the cloning process finished successfully with the following message:


```
DUMP: DUMP: 2143721 tape blocks
DUMP: finished in 104261 seconds, throughput 20 KBytes/sec
DUMP: DUMP IS DONE
```

man dump tells us that the default block size is 10 kB, so ~200 GB were dumped, and not 2 GB as I assumed - sorry, my fault. After all, I can consider myself lucky that it did not dump the whole 591 GB of that partition in 86 hours ;-)

Seriously, I am looking for a tool like asr for Mac OS X: http://www.manpagez.com/man/8/asr/

Does something like this exist for FreeBSD?

Many thanks again for all the kind replies to my questions.

Best regards

Rolf


----------



## wblock@ (Nov 23, 2010)

rolfheinrich said:

> man dump tells us that the default block size is 10 kB, so ~200 GB were dumped, and not 2 GB as I assumed - sorry, my fault. After all, I can consider myself lucky that it did not dump the whole 591 GB of that partition in 86 hours ;-)
> 
> Seriously, I am looking for a tool like asr for Mac OS X: http://www.manpagez.com/man/8/asr/
> 
> Does something like this exist for FreeBSD?



It really sounds like dump/restore.  However, you should look at the -C option to dump.  Some examples and comparison of dump/restore, dd, and Clonezilla are in my FreeBSD backup article.


----------



## graudeejs (Nov 23, 2010)

rolfheinrich said:

> Aldis!
> 
> Please don't get me wrong, I really appreciate your valuable posts. As a matter of fact, I was aiming for volume cloning in one shot, and dump | restore after all did what I want, but only extremely slow. After 29 hours the cloning process finished successfully with the following message:
> 
> ...



LOL. That explains a whole lot


----------



## Anonymous (Nov 25, 2010)

*[Solved] Very slow dump | restore*

I would like to post a quick follow-up. In the meantime, I did some experiments with sysutils/cpdup and with the dump | restore optimizations suggested by danbi and wblock.

For testing cpdup, I mounted the source /dev/ad6s1f (SATA HD) as /mnt/usr0 and the target /dev/da0s1f (USB stick) as /mnt/usr1:

```
newfs /dev/da0s1f
mount /dev/ad6s1f /mnt/usr0
mount /dev/da0s1f /mnt/usr1
cpdup -Iv /mnt/usr0 /mnt/usr1
```

Result:

```
cpdup completed successfully
2087913663 bytes source, 2087913663 src bytes read, 0 tgt bytes read
2087913663 bytes written (1.0X speedup)
208186 source items, 248806 items copied, 0 items linked, 0 things deleted
19061.5 seconds   213 Kbytes/sec synced   106 Kbytes/sec scanned
```

So the copying speed using cpdup was 5.5 times faster than a dumb dump | restore on the same source/target.


For testing the dump | restore optimizations, I did the following:

`newfs /dev/da0s1f`
`mount -o async /dev/da0s1f /mnt`
as indicated by danbi.
`dump -C 24 -0af - /dev/ad6s1f | restore -rf -`
as indicated by wblock.

Result:

```
DUMP: DUMP: 2143721 tape blocks
DUMP: finished in 10857 seconds, throughput 197 KBytes/sec
DUMP: DUMP IS DONE
```

With these optimizations, the dumping speed increased by a factor of 9.6. I am still not that impressed by 197 kB/s, though.
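The factor follows from the two summary lines: the first run moved 2143721 tape blocks (KB) in 104261 s, i.e. about 20.6 KB/s on average, against 197 KB/s now:

```shell
# Speed-up factor: optimized throughput divided by the first run's
# average (tape blocks are reported in KB)
awk 'BEGIN { printf "%.1fx\n", 197 / (2143721 / 104261) }'
```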

Given that cpdup can be used for incremental updates, which should be much faster, it is an interesting alternative to dump | restore.

Many thanks and best regards

Rolf


----------



## hopla (Feb 14, 2012)

(I know I'm reviving an old thread here, but starting a new one for this small addition seemed worse)

I would like to add that I have just successfully moved a /usr filesystem to a new, bigger one on another disk using the dump/restore method described above (I wanted to use dump/restore because that still seems to be the only sure way that things like chflags file flags are also copied over).

At the first attempt it was also very slow; then I checked a few things:

- the original /usr filesystem was mounted noatime, the new one wasn't
- the original /usr filesystem was using softupdates, the new one wasn't
- I used the -C16 flag on the dump command, but since this was all happening on the same host, -C64 was probably even better

So I:

- blew away the new filesystem, creating a new one, this time using softupdates:


```
# newfs -U /dev/daXs1e
```

- mounted it again, this time using noatime (you could, in addition, also use the async option described somewhere above; however, I found the dump was already significantly sped up without it):


```
# mount -o noatime /dev/daXs1e /usr-new
```

- restarted *dump* with -C64:


```
# cd /usr-new && dump -C64 -0uanL -h0 -f - /usr | restore -rf -
... lot of dump output ...
  DUMP: finished in 2323 seconds, throughput 7168 KBytes/sec
  DUMP: level 0 dump on Tue Feb 14 16:20:30 2012
  DUMP: DUMP IS DONE
```

It now restored/copied everything over at about 7 megabytes per second (which still isn't anywhere near disk speed, but is acceptable), instead of the measly 0.10 megabytes per second it seemed to go at on the first attempt.


----------



## Crivens (Feb 14, 2012)

Two solutions come to mind which I have not tested myself, but I want to share the ideas.
1) growfs
You can dd the whole partition onto a new and bigger one and then use growfs to adapt the filesystem size. Has anyone used this?

2) Since dump writes blocks of (by default) 10 KB, each 10 KB restore may want to access the disk, which I would guess results in disk thrashing. According to the man page, dump can be configured to use bigger block sizes per write. Using several MB should make sure that dump and restore do not step on each other's disk positioning too much.

As I wrote, I have not tested these and currently have no spare disk around to try them on. Any volunteers? Does it sound like it could improve matters?


----------



## bbzz (Feb 14, 2012)

Any reason why you didn't use journaled soft updates? *-j*


----------



## hopla (Feb 15, 2012)

I could actually use growfs, since I'm doing this in a VMWare VM, but it seemed very finicky and tricky... (see these links: http://bsdbased.com/2009/11/30/grow-freebsd-ufs-filesystem-on-vmware-hdds , http://forums.freebsd.org/showthread.php?t=526 )
Especially since I don't have console access (which means no single-user mode).

I don't use journaling because I still consider it experimental, but maybe I'm just uninformed and scared. It also seems to use up a lot of space.

I looked at the write block size of dump and retried yesterday's dump (I hadn't done the actual migration yet, and it's only 16G of data), with -b 1000 (apparently it can't be bigger):


```
DUMP: 98.87% done, finished in 0:00 at Wed Feb 15 11:39:10 2012
  DUMP: DUMP: 16654609 tape blocks
  DUMP: finished in 1195 seconds, throughput 13936 KBytes/sec
  DUMP: SIGSEGV: ABORTING!
  DUMP: SIGSEGV: ABORTING!
  DUMP:   DUMP: SIGSEGV: ABORTING!
SIGSEGV: ABORTING!
  DUMP: SIGSEGV: ABORTING!
zsh: segmentation fault (core dumped)  dump -C64 -b 1000 -0uanL -h0 -f - /usr | restore -rf -
```

It was twice as fast, but restore core dumped right at the end... Some Googling turned up these posts: http://lists.freebsd.org/pipermail/freebsd-stable/2005-January/010868.html , http://web.archiveorange.com/archive/v/iBpRadp7sF62qwLm9HQz . Apparently going over -b 512 is asking for trouble; even more than 127 might be too much, and some seem to say 32 or 64 is just fine. I wanted to try the async mount option as well after this, so I tried again after a *mount -o noatime,async* (strange thing about the async option, though: it doesn't show up in the mount list) and with -b64; I also only used -C32, since the dump manpage actually recommends 32 at most:


```
# dump -C32 -b64 -0uanL -h0 -f - /usr | restore -rf -
... lots of dump output...
  DUMP: 94.86% done, finished in 0:01 at Wed Feb 15 17:13:34 2012
  DUMP: DUMP: 16654147 tape blocks
  DUMP: finished in 1314 seconds, throughput 12674 KBytes/sec
  DUMP: level 0 dump on Wed Feb 15 16:53:07 2012
  DUMP: DUMP IS DONE
```

So it is faster: 12.3 MBytes/sec. I think you might push the block size up even more and gain more speed, but I had enough testing/messing about for today.


----------



## jb_fvwm2 (Feb 16, 2012)

BTW, I was hesitant about SUJ, but so far it appears to be the best thing ever for UFS2 here. No way would I want to revert (I don't have it on the root slice, though).


----------



## Crivens (Feb 16, 2012)

Hi!

Well, I am glad to see the block size was doing something. But if it is true that changing it is asking for trouble, then I have another way to achieve the same without bothering dump or restore: dd to the rescue! 

Piping the data stream through dd with a big block size should do the same trick without changing the block sizes in either dump or restore.
Like this:

```
dump <flags> | dd bs=128M | restore <flags>
```
which should collect 128 MB worth of dump output and then hand it to restore to drain while dump can have a little nap.
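The pass-through itself can be tried at toy scale (tiny block size and plain text standing in for the dump stream, just to show that dd re-blocks the pipe without altering the data):

```shell
# dd as a pipe buffer: input arrives in whatever write sizes the producer
# uses, and dd re-emits it in bs-sized chunks; the payload is unchanged
printf '0123456789' | dd bs=4 2>/dev/null
```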

I seriously doubt that dump is the bottleneck here as I remember saturating the USB bus with it easily when backing up to external drives.


----------

