# What do you people use for cloning disks?



## aimeec1995 (Sep 18, 2017)

I am wondering what you people use to clone disks on FreeBSD.

I was trying to use the dd utility and even for a small clone it took over 10 hours.


----------



## k.jacker (Sep 18, 2017)

Are you talking about UFS?


----------



## Phishfry (Sep 18, 2017)

`dd` really needs the optimal block size to work fastest.

Have you fooled with different block sizes?

For example `dd if=/dev/ada0s1a of=/dev/da0 bs=1M conv=sync`

In my experience, hard disks like *bs=1M* and USB drives work best at *bs=64K*.
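One way to find the sweet spot empirically is to time `dd` at several block sizes before committing to the real copy. The sketch below runs against scratch files rather than real devices so it is safe to try; once you know which size wins, point `if=`/`of=` at your actual disks.

```shell
# Sketch: compare dd throughput at several block sizes, using scratch
# files so it is safe to run. Swap in real devices afterwards, e.g.
#   dd if=/dev/ada0s1a of=/dev/da0 bs=1M conv=sync
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=64 2>/dev/null   # 64 MB sample

for bs in 64K 512K 1M 4M; do
    printf '%-5s ' "$bs"
    # dd reports bytes transferred and bytes/sec in its final status line
    dd if="$src" of="$dst" bs="$bs" 2>&1 | tail -1
done

cmp -s "$src" "$dst" && ok=1 || ok=0   # sanity-check the last copy
rm -f "$src" "$dst"
```

On a scratch file everything is cached, so the absolute numbers are optimistic; the relative ranking of block sizes against a real device is what matters.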


----------



## rufwoof (Sep 18, 2017)

Is cloning an absolute necessity?

For a desktop UFS setup I have FreeBSD installed twice: one is more or less pure FreeBSD (no ports/packages, except squashfs-tools) and the other is my main desktop setup. I boot the former, mount the latter, and then mksquashfs it to a backup. But that's not a pure clone.

```
cd /mnt
mkdir sda4
mount /dev/ada0s4 sda4
mksquashfs /mnt/sda4 backup.sfs
```

On mine, it takes around 5 minutes to create a 3GB compressed filesystem file (.sfs) from a 6GB main installation.

You can mount and view the contents of that .sfs (read-only) if you install fusefs-squashfuse, and of course unsquashfs the .sfs (restore it) as required:

```
cd /mnt
mkdir sda4
mount /dev/ada0s4 sda4
unsquashfs -f -d /mnt/sda4 backup.sfs
```

See Thread 62445


----------



## Datapanic (Sep 18, 2017)

I use Clonezilla:  http://www.clonezilla.com  It can clone just about anything.


----------



## ronaldlees (Sep 18, 2017)

aimeec1995 said:


> I am wondering what you people use to clone disks on FreeBSD.
> 
> I was trying to use the dd utility and even for a small clone it took over 10 hours.



Ten hours?  Mine have never taken more than maybe an hour or so (1TB).  I use `dd` cuz it's simple.


----------



## aimeec1995 (Sep 19, 2017)

Yes.


----------



## ralphbsz (Sep 19, 2017)

ronaldlees said:


> Ten hours?  Mine have never taken more than maybe an hour or so (1TB).  I use `dd` cuz it's simple.


That would mean you are reading and writing your disk at 277 MB/s (1TB divided by the 3600 seconds in an hour), which is right near the theoretical maximum.

In my experience, the key to faster cloning (that is, any task that requires high-throughput sequential IO on multiple disks) is:

A: Very large IO sizes. On spinning disks, make the IOs much larger than a single track, so the drive can schedule them internally in an optimal order; larger IOs typically don't hurt there. On SSDs, there is an optimal IO size, and larger IOs may hurt.

B: Asynchronicity, or queuing multiple IOs. For example, consider a very naive way to copy: read a block from the source disk, then write it to the target disk, and repeat. Done this way, each disk is idle about half the time. So here is a better idea: read one block from the source disk; then simultaneously start writing that block to the target disk and reading the next block from the source disk, iterating as soon as both IOs are done (this is two-way parallelism). This works pretty well if both disks are always exactly the same speed. But disk performance is known to fluctuate, so one disk will sometimes sit idle while the other is being a little slow.

Also, at any given moment there are only zero or one IOs active on each disk. This causes the "turnaround delay" problem: imagine the disk has just finished a write request. The response has to travel all the way up the IO stack, from the disk itself through controller firmware and the device driver into user space, where the copy program starts a new write request, which then has to travel all the way back down. In the meantime the platter has rotated, and the spot where the next data belongs has moved away; at this point one will likely lose a full rotation of the disk (a few ms). For this reason, it's more efficient to keep multiple IO requests queued at the disk drive, so that as soon as one finishes, the disk can immediately start working on the next.

So the optimal copy program does the following: It has one thread (or group of threads) which starts issuing multiple read commands to the source disk, and whenever reads finish, issues new ones, keeping a certain number active at all times.  These read threads fill a data buffer.  As soon as enough contiguous data is available, a write thread (or group) starts issuing one or more simultaneous write requests to the target disk.  The program I use is not available to the public; it's part of a proprietary storage performance testing tool.

With 5-10 IO requests queued on the source and target disks (each), and IO sizes of multiple MB, one can get very near the theoretical limit of the disks; with a simple-minded copy program, one typically gets 1/3 to 1/2 the bandwidth. The `dd` program is actually not bad; I remember reading the source code (I forget on which OS), and it uses two threads, but only one IO per device.
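A cheap way to get some of that read/write overlap with stock tools (my own sketch, not the proprietary program described above) is to connect two `dd` processes with a pipe: the reader and writer then run concurrently, and the kernel's pipe buffer absorbs short speed fluctuations between the two disks.

```shell
# Sketch: overlap reads and writes by running two dd processes
# connected by a pipe. On real disks this would look like:
#   dd if=/dev/ada0 bs=4M | dd of=/dev/da0 bs=4M
# Demonstrated here on scratch files so it is safe to run.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=16 2>/dev/null   # 16 MB sample

# Reader and writer run concurrently; the pipe decouples them.
dd if="$src" bs=4M 2>/dev/null | dd of="$dst" bs=4M 2>/dev/null

cmp -s "$src" "$dst" && ok=1 || ok=0   # verify the copy is identical
rm -f "$src" "$dst"
```

This still keeps only one IO in flight per disk, so it won't match a proper queued-IO tool, but it avoids the strictly alternating read-then-write pattern of a single process.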


----------



## silicium (Oct 8, 2017)

`dd` plus GEOM's safety features to prevent foot-shooting is a treasure. Without them on Linux, I once nuked my 1TB data disk because I forgot to check whether the drive letter /dev/sd? was still the same after a reboot.


----------



## rigoletto@ (Oct 9, 2017)

I use ZFS and so `zfs send | zfs recv`.
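For anyone unfamiliar with the pattern, here is a hedged sketch of what that looks like. `zfs send` operates on snapshots, so one is created first; the pool and dataset names (tank/home, backup/home) are placeholders for your own.

```shell
# Sketch: replicate a ZFS dataset with zfs send | zfs recv.
# "tank/home" and "backup/home" are placeholder names.
zfs snapshot tank/home@clone1
zfs send tank/home@clone1 | zfs recv backup/home

# Subsequent runs can send only the blocks changed since the
# previous snapshot (incremental replication with -i):
zfs snapshot tank/home@clone2
zfs send -i tank/home@clone1 tank/home@clone2 | zfs recv backup/home
```

A nice property of this approach is that it copies only allocated data, not every sector of the disk, so it is usually much faster than a raw `dd` of a partly empty pool.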


----------

