# Clone disk



## freejlr (Aug 6, 2022)

So I have the following problem, and I'm not sure of the best way to solve it. I have to copy the entire disk of a laptop with a 1 TB NVMe disk (nvd0) to another laptop with the same hardware.

That laptop dual-boots Windows 10 and Debian, so I chose to make a clone with dd. I have a 1 TB removable disk to store the image temporarily.

I boot FreeBSD from a live USB, mount my 1 TB removable USB disk (formatted with UFS), and proceed with dd like this:

Create image:


> dd if=/dev/nvd0 status=none bs=1m conv=fdatasync,noerror | xz -9 > /mnt/disk.xz



Restore image in the other laptop:



> xz -dc /mnt/disk.xz | dd of=/dev/nvd0 status=none bs=1m conv=fdatasync,noerror



Would this be a good procedure? Will the compression delay be very high, given that it is a 1 TB disk? Is there any other way to proceed?

Thanks.

Edit:
According to geom, the disks have the same size; I can see that in the Mediasize field.
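One thing worth adding to this procedure is verifying the image against the source before trusting it. A minimal sketch, using scratch files in place of /dev/nvd0 and /mnt/disk.xz so it can be tried without touching a real disk (note the restore side needs `xz -dc`, which streams to stdout):

```shell
src=$(mktemp)   # stand-in for /dev/nvd0
img=$(mktemp)   # stand-in for /mnt/disk.xz

# 64 KiB of test data, imaged through xz as in the post.
dd if=/dev/urandom of="$src" bs=1k count=64 status=none
dd if="$src" bs=64k status=none | xz -9 > "$img"

# Verify: decompress to stdout (-dc) and compare byte-for-byte.
xz -dc "$img" | cmp -s - "$src" && echo "image matches source"
```

On a real run the comparison would read the device a second time, so it roughly doubles the imaging time, but it catches a bad USB cable or a truncated image before anything is wiped.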


----------



## Crivens (Aug 6, 2022)

You could use "gzip -1" for compression. That should make the image fit on the target file system without making the compression itself the bottleneck.
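Put together, the suggestion could look like this; a sketch with scratch files standing in for /dev/nvd0, /mnt/disk.gz, and the target disk:

```shell
src=$(mktemp)        # stand-in for /dev/nvd0 (the source disk)
img=$(mktemp)        # stand-in for /mnt/disk.gz
restored=$(mktemp)   # stand-in for the target disk

dd if=/dev/urandom of="$src" bs=1k count=64 status=none

# Image with the fastest gzip level; -1 keeps the CPU out of the way
# while still collapsing empty stretches of the disk.
dd if="$src" bs=64k status=none | gzip -1 > "$img"

# Restore: -dc decompresses to stdout so it can feed the dd pipe.
gzip -dc "$img" | dd of="$restored" bs=64k status=none
cmp -s "$restored" "$src" && echo "round trip OK"
```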


----------



## freejlr (Aug 8, 2022)

It is not a good method; all I get is a bottleneck.

The transfer speed is 15 MB/s, which is very slow, and if I use the status=progress option dd slows down even more; the counters and the timer advance very slowly.

The following occurred to me; I tried it with a prehistoric 256 MB memory stick I had lying around:



> da4
> 512             # sectorsize
> 264241152       # mediasize in bytes (252M)
> 516096          # mediasize in sectors
> ...



Process as follows:



> dd if=/dev/da4 of=disk.part1 bs=512 count=258048 && dd if=/dev/da4 of=disk.part2 bs=512 skip=258048



With this I intend to split the disk into two images and so dispense with the compression, which I think is to blame for the bottleneck.

Could this be a good option? Although I would have to adjust the input block size (bs=...).

Thanks.
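Before trying it on the real disk, the split-and-rejoin arithmetic can be checked on a scratch file (hypothetical paths; the midpoint is computed the same way count=258048 was, total sectors divided by two):

```shell
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1k count=512 status=none   # 512 KiB "disk"

# Midpoint in 512-byte sectors: total sectors / 2,
# just as 258048 was half of the stick's 516096 sectors.
total_bytes=$(wc -c < "$disk")
half=$((total_bytes / 512 / 2))

dd if="$disk" of=/tmp/disk.part1 bs=512 count="$half" status=none
dd if="$disk" of=/tmp/disk.part2 bs=512 skip="$half"  status=none

# Joining the parts must reproduce the original exactly.
cat /tmp/disk.part1 /tmp/disk.part2 | cmp -s - "$disk" && echo "halves ok"
```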

Edit:

To write the images back to the disk I changed the block size:



> dd if=disk.part1 of=/dev/da4 bs=1m && dd if=disk.part2 of=/dev/da4 bs=1m seek=126



That worked for me; I suppose that to create the images I can also change the block size to bs=1m.

That's one thing I don't understand about dd: when I set the block size to 1m it reaches its maximum transfer speed, and if I use higher values they seem to be ignored, the speed doesn't change. What determines the transfer speed? I didn't understand this very well; if I use for example bs=5m, the speed is the same as with bs=1m.
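On the block-size question: bs only sets how many bytes dd moves per read/write call, so once the requests are large enough to keep the slowest device saturated, a bigger bs gains nothing; the copied data is identical either way. (One detail: lowercase suffixes like 1m are FreeBSD dd; GNU dd wants 1M, so 64k is used below since both accept it.) A quick check:

```shell
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1k count=100 status=none

# Same source copied with a tiny and a large block size.
dd if="$f" of=/tmp/copy.small bs=512 status=none
dd if="$f" of=/tmp/copy.big   bs=64k status=none

# The copies are byte-identical; only the number of syscalls differs,
# which is why speed stops improving once the device is saturated.
cmp -s /tmp/copy.small /tmp/copy.big && echo "identical copies"
```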

On the other hand, could the bottleneck be due to the partitioning of the disk that hosts the image?

I create them like this:



> gpart destroy -F da4
> gpart create -s gpt da4
> gpart add -t freebsd-ufs -a 1m da4
> newfs da4p1



Do you think this can be a good approach for the 1 TB drive?


----------



## bgrant (Aug 8, 2022)

I'm sure there are others with good ideas for you, but I have a couple of observations.

If your CPU is fast, compression may actually speed up the process, because USB can be the limiting factor compared with reading an NVMe/SSD at bus speed.

The da4 disk you showed above is listed as USB 2.0, which is very slow for this type of process. Make sure the external disk you use is USB 3.0 or better.

I'm not an expert on NVMe/SSD drives, but I believe that copying an entire disk image to a new disk will mark all of its blocks as used. You may then need to run a TRIM operation to tell the new disk which blocks are actually free. I know this exists, but not how to use it.

Good luck.


----------



## freejlr (Aug 9, 2022)

bgrant said:


> I'm sure there are others with good ideas for you but I have a couple of observations.
> 
> If your CPU is fast compression may actually speed up the process because USB can be slow/limiting vs reading of a nvme/ssd at bus speed
> 
> ...



The da4 disk was just an example. The disk where I am writing the image is an external 1 TB magnetic disk with a USB 3.0 connection; it writes at around 130 MB/s.

The nvd0 image will be copied to another nvd0 of the same size, well, the same model, since the laptops are identical. For that reason I think I don't have to worry about which parts of the disk are in use.

I will try this; if the bottleneck remains after removing the compression, it must be the magnetic disk, which should write at about 130 MB/s. I suppose that would open up other kinds of solutions, such as creating the partitions on the new disk and copying the data in use another way, with rsync or similar.

But since the disk has 5 partitions plus grub, I preferred to clone the entire disk.

Thanks.


----------



## freejlr (Aug 9, 2022)

On the other hand, I'm going to convince the user to give up the dual boot.

He has virtual machines in Windows with VMware, running Ubuntu, Debian, etc. He has a very powerful laptop, and yet he runs a dual boot...

It's incredible...


----------



## bgrant (Aug 9, 2022)

Have you looked at Clonezilla? It only copies the blocks in use, for Windows, Linux and a few other file systems, including UFS.

I should say I used it successfully a number of times a few years back, but haven't had the need recently.


----------



## freejlr (Aug 10, 2022)

I'm seeing some very strange behavior with dd under FreeBSD. When writing /dev/zero to the external drive, it copies at its maximum speed of 130-140 MB/s.



> dd if=/dev/zero of=/dev/da1 bs=1m



But when cloning da0 to the external drive it runs at the 15-20 MB/s I mentioned before.



> dd if=/dev/da0 of=/dev/da1 bs=1m



That is with the SSD disk of my laptop. What can this behavior be due to?

Yes, I did try Clonezilla; for example, when it performs the disk clone operation:



> dd if=/dev/da0 of=/dev/da1 bs=1m



it copies at about 100 MB/s. Why is the copy speed so low under FreeBSD?

On the other hand, I tried restoring a disk cloned with Clonezilla: it clones the disk, but when I install it in the new laptop it fails and doesn't even boot.

But it turns out that the disk sits behind a FAKERAID: for example, when trying to install Windows the disk is not recognized and I need to load some Intel drivers so that Windows can see the RAID, since the nvd is hidden from the OS by that damned FAKERAID.

Could the problem be there? And how is it possible for FreeBSD and Clonezilla to recognize the disk and partitions if they don't have any Intel drivers installed? Is support already integrated in the nvd driver?

What the hell is the function of a fakeraid anyway? Isn't it better to use a software RAID...?

There may be a problem there; the question I have now is the following: will the dual boot that was installed under this FAKERAID still work if I disable the RAID?

Thanks.

Edit: the controller is an Intel VMD (Volume Management Device).


----------



## Crivens (Aug 10, 2022)

You may run `gstat` while the dd is running to see which disk is the bottleneck.


----------



## freejlr (Aug 12, 2022)

Solved

The problem was the VMD setup, which caused the boot to fail. I don't understand why you would have a FakeRaid if you only have one disk.

What are the advantages, apart from being able to run an OS inside a RAID? In the end what I did was deactivate it, and that solved it.

But the question remains: Windows apparently needs a driver to be able to see the RAID, of course, since the disk is behind it and invisible to the OS. But since FreeBSD is able to see the partitions etc., is some of this integrated into its nvd driver?



Crivens said:


> You may run `gstat` while the dd is running to see which disk is the bottleneck.



I don't understand what happened when I had the 15 MB/s bottleneck.

Checking the state of the disks with gstat again while running:



> dd if=/dev/nvd0 status=none bs=1m conv=fdatasync,noerror | gzip -1 > /mnt/disk.gz



At first I got a speed of 55-70 MB/s, then values of 320 MB/s.

The device in gstat was 99% busy. I suppose that on finding many empty disk sectors gzip was able to increase its compression speed, but I won't say more, since I haven't studied gzip's compression algorithm.
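That guess about empty sectors is easy to check: gzip compresses long runs of zeros far faster and far smaller than incompressible data, which is why throughput jumps when dd reaches empty regions. A quick comparison:

```shell
# 256 KiB of zeros vs 256 KiB of random bytes through gzip -1.
zsize=$(dd if=/dev/zero    bs=1k count=256 status=none | gzip -1 | wc -c)
rsize=$(dd if=/dev/urandom bs=1k count=256 status=none | gzip -1 | wc -c)

# Zeros collapse to a few hundred bytes; random data barely shrinks.
echo "zeros compress to $zsize bytes, random data to $rsize bytes"
```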

The good news is that I was finally able to clone the disk with dd and restore it to the new laptop, but this process has left me with a lot of questions.

I think I'll post them in another thread.

Thanks guys.

P.S.: As I said before, I tried Clonezilla; it also seems like an interesting tool.

Edit:

But apparently when I restore the disk with:



> gzip -dc /mnt/disk.gz | dd of=/dev/nvd0 conv=noerror,fdatasync bs=1m



there I do have a bottleneck: da1 and da1p1 (the external disk) reach peaks of 170-190% busy, with peaks of inactivity in between, so I can assume the decompression is not feeding dd at a constant rate.
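One pitfall worth spelling out for anyone copying these commands: plain `gzip -d file.gz` decompresses the file in place and writes nothing to stdout, so a dd on the other side of a pipe would see an empty stream; `-c` (or `zcat`) streams to stdout and leaves the .gz intact. A small demonstration with a hypothetical scratch file:

```shell
f=$(mktemp -d)/sample
printf 'hello\n' > "$f"
gzip "$f"                 # produces sample.gz and removes sample

out=$(gzip -dc "$f.gz")   # -c: decompress to stdout, keep sample.gz
[ -e "$f.gz" ] && echo "archive still present"
echo "$out"
```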


----------

