# dd and zeroing hard disk



## tanked (Jun 30, 2011)

Hello, I'm using FreeBSD 8.2 amd64. I bought an external USB 2TB disk, and because it came pre-formatted with a load of Windows rubbish, I wanted to use dd to zero out the disk. Here is the command I used:

[cmd=]dd if=/dev/zero of=/dev/da0 bs=1M[/cmd]

However the command was started at around 23:00 last night and it is still running as I write now (17:40) - is this normal for a disk of that size? *gstat* shows that there is disk activity on /dev/da0 and it is constantly around 98% busy.

Thanks.


----------



## randux (Jun 30, 2011)

It's hard to judge. I cleared a 320 GB drive from /dev/urandom and it took about 8 hours. You're using a huge block size, and that can't be helping since the drive doesn't use buffers that size. Unless you have some reason to physically remove the data, you should have just used the disk as-is; it sounds like what you are doing is a total waste of time. If you do have a reason to remove the data, then you should at least write pseudorandom data to it instead of zeros. Why don't you just cancel it and get on with formatting it for FreeBSD?


----------



## tanked (Jun 30, 2011)

It isn't actually for FreeBSD, it is for a satellite receiver that runs a stripped-down Linux OS. When I plugged it into the receiver it couldn't format it for some reason, so I connected it to FreeBSD to zero out the drive.


----------



## tanked (Jun 30, 2011)

Question answered, the command has just finished.


----------



## randux (Jun 30, 2011)

If so, then you should have been able to just blast the boot sector (which contains the partition table) with `dd if=/dev/zero of=/dev/yourdisk bs=512 count=1`, and it should have been OK. Glad you are back online.


----------



## wblock@ (Jun 30, 2011)

Using a large buffer with dd(1) increases speed.  Of course, "large" is relative.  On a single drive, 64K or 128K is usually large enough to keep up with the drive.  1M will go just as fast, no real downside other than tying up 1M for a while.  Using random data rather than /dev/zero will probably slow it down unless you have a fast processor.

Agreed that there's not much need to wipe out everything.  Wiping the first and last 35 512-byte blocks will take out the MBR, partition table, and GPT data.
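A sketch of that approach, demonstrated against a scratch file so nothing real gets overwritten (for an actual disk you would set `DISK` to the device, e.g. /dev/da0, and take the sector count from diskinfo(8) instead of the file size):

```shell
# Wipe the first and last 35 512-byte sectors: the MBR/boot sector and
# primary GPT live at the front, the backup GPT at the very end.
DISK=scratch.img
dd if=/dev/zero of="$DISK" bs=512 count=20000 2>/dev/null   # fake ~10 MB "disk"
SECTORS=$(( $(wc -c < "$DISK") / 512 ))

# First 35 sectors: MBR plus primary GPT header and partition table.
dd if=/dev/zero of="$DISK" bs=512 count=35 conv=notrunc 2>/dev/null
# Last 35 sectors: backup GPT header and table.
dd if=/dev/zero of="$DISK" bs=512 count=35 seek=$(( SECTORS - 35 )) conv=notrunc 2>/dev/null
echo "wiped first and last 35 of $SECTORS sectors"
```

`conv=notrunc` matters for the file-based dry run (it stops dd from truncating the image); on a raw device it is harmless.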


----------



## phoenix (Jun 30, 2011)

randux said:

> If so then you should have been able to just blast the boot sector (contains the partition table) with a `dd if=/dev/zero of=/dev/yourdisk bs=512 count=1` and it should have been ok. Glad you are back online.



You definitely do *not* want to use *bs=512*; you will bog the system down and it will take aeons to complete.  With *bs=512* you are doing an individual request for each sector of the disk, completely negating all the fancy electronics on the disk.  Increase the bs and things will go a *LOT* faster.

Don't believe me?  Try the following and check the speed indicators at the end of the command (each command writes 100 MB of data to disk):

```
# dd if=/dev/zero of=/dev/whatever bs=512  count=200000000
# dd if=/dev/zero of=/dev/whatever bs=1K   count=100000000
# dd if=/dev/zero of=/dev/whatever bs=32K  count=3125
# dd if=/dev/zero of=/dev/whatever bs=64K  count=1560
# dd if=/dev/zero of=/dev/whatever bs=128K count=780
# dd if=/dev/zero of=/dev/whatever bs=1M   count=100
```

You can even reboot in between tests to make sure there are no caching effects.  Using a larger bs will increase throughput to the disk.


----------



## Beastie (Jun 30, 2011)

^ Very true if *dd*ing GBs of disk space, but he's only *dd*ing a single sector (the MBR). It'll be done before the end of a blink.


----------



## phoenix (Jun 30, 2011)

Well, the OP was *dd*'ing the entire disk.    See the command-line used in the first post.


----------



## Pushrod (Jun 30, 2011)

Given that the computer of mine with the least RAM has 512 MB, I usually use 64M or even 128M as the block size when zeroing out a disk or creating a file-backed filesystem. I realize a bs that size is probably pointless, but it keeps CPU use down and removes processing as the bottleneck.
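For the file-backed case, a minimal sketch of creating the backing file with a large block size (on FreeBSD you would then attach it with mdconfig(8) and newfs it; the filename here is made up):

```shell
# Create a 128 MB backing file for a memory disk. With bs=64M, dd issues
# only two large writes, so per-call overhead is negligible.
dd if=/dev/zero of=backing.img bs=64M count=2
```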


----------



## wblock@ (Jun 30, 2011)

phoenix said:

> Don't believe me?  Try the following and check the speed indicators at the end of the command (each command writes 100 MB of data to disk):
> 
> ```
> # dd if=/dev/zero of=/dev/whatever bs=512  count=200000000
> ...
> ```



The first two are going to be really slow because they're writing about 100G.

Adjusted numbers:

```
# dd if=/dev/zero of=/dev/whatever bs=512  count=204800
# dd if=/dev/zero of=/dev/whatever bs=1K   count=102400
# dd if=/dev/zero of=/dev/whatever bs=32K  count=3200
# dd if=/dev/zero of=/dev/whatever bs=64K  count=1600
# dd if=/dev/zero of=/dev/whatever bs=128K count=800
# dd if=/dev/zero of=/dev/whatever bs=1M   count=100
```

The last three will take about the same time.  Maybe the last four.  But it's kind of short for a good benchmark; the per-command overhead will distort the numbers.
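Scripted, the comparison looks like this. A sketch that writes to a scratch file (substitute a raw device for `TARGET` to measure real hardware; dd prints its throughput figures on stderr at the end of each run):

```shell
# Write the same 100 MB at each block size and compare dd's reported speeds.
TARGET=ddbench.img                  # point at /dev/whatever for a real test
for spec in "512 204800" "1K 102400" "32K 3200" "64K 1600" "128K 800" "1M 100"; do
    set -- $spec                    # $1 = block size, $2 = count
    echo "=== bs=$1 ==="
    dd if=/dev/zero of="$TARGET" bs="$1" count="$2"
done
```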


----------



## randux (Jul 1, 2011)

phoenix said:

> Well, the OP was *dd*'ing the entire disk.    See the command-line used in the first post.



Yeah, but if you read my post, I told him he only needed to clear the partition table, so I specified count=1. That will work, and it will not take ages. Looks like you didn't read anything I wrote!


----------



## gordon@ (Jul 3, 2011)

I generally recommend blowing away the first 10 MB or so of the disk. That will nuke any partition tables and the beginning of any filesystems behind them, just in case.
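That one is a single command; shown here against a scratch file so it is safe to paste (for the real thing, set `DISK` to the device, e.g. /dev/da0):

```shell
# Zero the first 10 MB: enough to take out MBR/GPT partition tables and
# the start of any filesystem sitting near the front of the disk.
DISK=scratch.img
dd if=/dev/zero of="$DISK" bs=1M count=10
```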


----------

