# Dump file max size?



## scorpizz (Jun 13, 2012)

I have a backup/restore issue on a FreeBSD 8.1 system. I haven't been able to pinpoint the source of the problem yet.

When I do a full backup of a disk (1.1 TB of data) with dump, I get an error when trying to restore the dump file. Somewhere near the end of the restore, the error shows up as:

```
error while skipping over inode
```

When I exclude parts of the disk to bring the data down to 500 GB, the restore completes without problems.

I have tried the dump/restore process three times with different data sets above 1 TB in size, with the same error each time.

Is there a limit in the dump or restore application that limits the dump-file size to 1 TB? Or is there something else in the chain from source disk, to application, to destination storage that is "known" to create this kind of error?

The directory structure is restored in all situations without errors.

System:
FreeBSD 8.1 with one boot disk and a 1.8 TB data disk (UFS2), one slice on a SATA disk.

Dump da1s1a (1.8 TB) -> nfs-share (5 TB SATA-disk RAID) on a different server: OK.
Restore nfs-share/dump-file -> da1s1a:

```
error while skipping over inode
```

about nine-tenths of the way through the restore process.

The error is seen both when actually writing the restored files and in test mode (-N).


----------



## wblock@ (Jun 13, 2012)

It could be a limit of the destination filesystem.
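One quick way to test that theory is to create a sparse file just past the suspect size on the destination: the write fails immediately if the filesystem or the NFS layer caps file size below it. A minimal sketch, with the path and size as stand-ins (on the real NFS share you would seek past the 1 TB mark):

```shell
# Probe for a file-size cap: seek past the suspect size and write one byte.
# A sparse file like this costs almost no actual disk space.
# /tmp stands in for the NFS destination; 2 GiB stands in for 1 TB.
dest=/tmp/bigfile.test
dd if=/dev/zero of="$dest" bs=1 count=1 seek=2147483648 2>/dev/null
ls -l "$dest"    # a cap below 2 GiB would have made the dd fail
rm "$dest"
```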

Keeping dump files to 2G or less is not a bad practice.  That can be done by piping dump(8) output through gzip(1) and split(1):
`# dump -C16 -b64 -0uanL -h0 -f - /usr | gzip -2 | split -b 2000M - root.dump.gz`

This produces root.dump.gzaa, root.dump.gzab, and so on.

When restoring, use
`# cat root.dump.gz* | gunzip | (cd /restoredir && restore -rf -)`
Note that gzcat(1) is not a replacement for cat(1) and gunzip(1): it decompresses each piece separately instead of decompressing the reassembled stream, and can cause arrhythmia when the backup merely seems corrupted.
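The whole split/reassemble pipeline can be rehearsed on a scratch file before trusting it with a real dump; the file names and sizes below are arbitrary stand-ins:

```shell
# Round-trip test of the gzip | split ... cat | gunzip pipeline,
# using 4 MiB of scratch data in place of real dump(8) output.
dd if=/dev/urandom of=/tmp/sample.dat bs=65536 count=64 2>/dev/null
gzip -2 < /tmp/sample.dat | split -b 1048576 - /tmp/sample.gz.
# Pieces come back in the right order because the split suffixes
# (.aa, .ab, ...) sort lexicographically.
cat /tmp/sample.gz.* | gunzip > /tmp/sample.out
cmp /tmp/sample.dat /tmp/sample.out && echo "round-trip OK"
```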


----------



## scorpizz (Jun 14, 2012)

Sounds interesting, even though it will leave an awful lot of 2 GB dump files in the directory.  
I'll try this approach and see what the result is.

Thanks for the input, wblock@.


----------



## wblock@ (Jun 14, 2012)

Larger files can certainly be used.  Limiting them to 2G or less makes it easier when copying onto unknown filesystems that might not support larger files.


----------

