# Corruption simulation



## Herrick (Apr 29, 2010)

Hi,

Is there an easy way to simulate corruption on a disk?

I would like to simulate corruption at the hardware level to see how ZFS would react.

Thanks


----------



## ptempel (Apr 29, 2010)

You could use something like `dd if=/dev/zero of=/dev/ad0sX bs=YYY count=ZZZ`.  Just pick different small values for the slice (X), block size (YYY) and block count (ZZZ).  Maybe you could write a simple script to randomize them.
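A rough, untested sketch of that randomizer idea. Everything here is an assumption (target path, burst count, sizes): it builds a 64 KiB scratch image so nothing real gets destroyed; you would swap in `/dev/ad0sX` only on a disposable test pool.

```shell
#!/bin/sh
# Sketch: zero out a handful of random 512-byte sectors in a target.
# Uses a throwaway image file as the target -- point it at a real
# device only if you are prepared to lose that device's contents.
TARGET=/tmp/corrupt-demo.img
dd if=/dev/urandom of="$TARGET" bs=1k count=64 2>/dev/null   # demo image
SECTORS=$((64 * 1024 / 512))      # number of 512-byte sectors in the image

i=0
while [ "$i" -lt 10 ]; do
    # od pulls two random bytes (0..65535); modulo keeps the offset in range
    OFF=$(( $(od -An -N2 -tu2 /dev/urandom) % SECTORS ))
    dd if=/dev/zero of="$TARGET" bs=512 count=1 seek="$OFF" \
        conv=notrunc 2>/dev/null                # notrunc keeps the size intact
    i=$((i + 1))
done
echo "zeroed 10 random sectors in $TARGET"
```

`conv=notrunc` matters: without it the first write would truncate the file at the seek offset instead of corrupting it in place.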


----------



## Herrick (Apr 29, 2010)

Hi,

thanks for the hint, but I wonder if we can target a specific file's location on a disk...

The reason I'm asking is that I have a 12 TB slice with only about 100 MB of files on it for testing... it could take a while before dd hits the file in random spots.

Thanks


----------



## User23 (Apr 29, 2010)

Just disconnect one of the drives in the pool.

edit:

Wait, is that ZFS on a single hardware RAID device?


----------



## carlton_draught (Apr 29, 2010)

Herrick said:

> Hi,
> 
> thanks for the hint, but I wonder if we can target a specific file's location on a disk...
> 
> ...


Not sure how to do that. If you are just after a proof of concept to see whether ZFS works (e.g. using diff against a copy on a USB stick), why not do the following:

1. Put together a mirror or RAIDZ with two or three spare disks you have.
2. Write random data to one whole disk; that way you are sure to hit part of the file. E.g. if the device you want to write to is /dev/ad0:
`# dd if=/dev/random of=/dev/ad0 bs=1M`
3. diff your file against an assumed-good copy of the file, to see if the ZFS magic has worked.

It's easy enough to throw ZFS with copies=2 on your USB stick if you want to be pedantic and know that it works. But if you don't trust ZFS, just use UFS for the copy and assume that if the diff passes, ZFS has worked, ignoring the infinitesimal chance that both have had a coincidental error. Or if you are that paranoid, keep copies on several UFS drives and test each one to drive that chance even lower, or use a simple text file whose characters you can check by hand.
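For a first pass at the steps above you don't even need real spare disks; md(4) memory disks are enough to watch ZFS heal itself. A rough, untested transcript (pool name, file names, and sizes are all made up; the seek skips the front ZFS labels so the vdev stays recognizable):

```
# mdconfig -a -t swap -s 128m -u 0
# mdconfig -a -t swap -s 128m -u 1
# zpool create testpool mirror md0 md1
# cp /some/testfile /testpool/
# dd if=/dev/random of=/dev/md1 bs=1m seek=4 count=64   # trash the middle of one side
# zpool scrub testpool
# zpool status -v testpool
# diff /some/testfile /testpool/testfile
```

Note that ZFS only notices the damage when the blocks are read, which is what the scrub forces; the status output should then show checksum errors being repaired from the intact mirror side.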


----------



## ptempel (Apr 30, 2010)

I agree with carlton_draught above that figuring out where a single file is stored could be difficult, unless it is a very large file taking up most of the space.  I found a page with the ZFS test suite scripts that the OpenSolaris folks use:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfstestsuite

Two interesting ones for you might be "redundancy" and "scrub_mirror" (assuming you have a ZFS mirror).  I haven't used ZFS yet but am intrigued to try it if I can get some funds for a cheap home NAS project.


----------



## sub_mesa (May 1, 2010)

You can also try geom_nop; I believe it can simulate read failures (not the same as corruption, though):


```
-r rfailprob  Specifies read failure probability in percent.
-w wfailprob  Specifies write failure probability in percent.
```

See `man gnop` for usage.

Once you have a /dev/ada0.nop, for example, you can add it to the ZFS pool (ada0.nop, not ada0!).
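To sketch how that might look (untested; device and pool names are assumptions, and the `-r` flag is the read-failure probability from the help text above):

```
# gnop create -r 10 ada0          # ada0.nop now fails ~10% of reads
# zpool create testpool mirror ada0.nop ada1
# zpool scrub testpool
# zpool status testpool           # READ error counts should climb on ada0.nop
```

Destroying the .nop provider (`gnop destroy ada0.nop`) only works once the pool no longer uses it, so tear the test pool down first.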


----------



## john_doe (May 1, 2010)

Herrick said:


> thanks for the hint, but I wonder if we can target a specific file's location on a disk...


Haven't tried it myself, but...
`# zinject -t data /path/to/file`?


----------

