# gvinum raid5 performance



## graudeejs (Dec 18, 2010)

Today I started experimenting with gvinum.
I'm interested in raid5.

For this I use 3x 500GB sata2 HDD's

`# dd if=/dev/random of=/dev/ad2 bs=16384` works at 42MB/s
`# dd if=/dev/random of=/dev/gvinum/raid5_test bs=16386` works at 5.2MB/s

What the hell?
Why is it so slow?

I tried creating the raid5 volume with different stripe sizes... but performance is still terrible.

Writing to raidz1 gives 120MB/s; the difference is HUGE.
Any ideas?

I have 1 plex (maybe I need multiple plexes).
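
For reference, a 3-disk raid5 volume in gvinum is described with a config file along these lines (a sketch based on the handbook format; the drive names, devices, and the 512k stripe size here are assumptions, not my actual config):

```
drive d0 device /dev/ad2
drive d1 device /dev/ad4
drive d2 device /dev/ad6
volume raid5_test
  plex org raid5 512k
    sd drive d0
    sd drive d1
    sd drive d2
```

It gets loaded with `gvinum create <file>`.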

P.S.
Striping is a little better (34MB/s)


EDIT:
CPU load is 0.00 to 0.14 while writing to raid5


----------



## xibo (Dec 19, 2010)

So, what are the stripe sizes you tried?

RAID level 5 (and also 3, 4, and 6) is noticeably slower than level 0/1 RAIDs when the writes are too small (or non-sequential), because the parity blocks have to be recalculated and rewritten after each write operation. The fastest write access is achieved by writing blocks of stripe_size * (disks - 1) bytes at a time (or with sufficiently large write buffers), since that removes the need to read data back from the other disks in order to compute the parity.
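
That rule of thumb is quick arithmetic (a sketch; the 512K stripe size is just an example, not the poster's actual setting):

```python
def full_stripe_write(stripe_size, disks):
    """Smallest write that fills a whole RAID5 stripe: data lands on
    (disks - 1) disks and the remaining stripe unit holds parity, so
    no old data has to be read back to recompute the parity."""
    return stripe_size * (disks - 1)

# 3 disks with a 512 KiB stripe size: writes of 1 MiB (or multiples
# of it) avoid the read-modify-write penalty entirely.
print(full_stripe_write(512 * 1024, 3))  # 1048576
```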


----------



## graudeejs (Dec 19, 2010)

I tried the default stripe size, and also sizes from 515B to 512K.
For now I've dropped the idea... and reinstalled my system with raidz again...

But I will be interested in doing experiments in VirtualBox

It's interesting how raidz achieves such good performance (today I saw a 148MB/s write speed using zpool)


----------



## Galactic_Dominator (Dec 20, 2010)

You will get better performance and data integrity with graid3(8).  RAID5 is not a good choice, despite the attention it gets.

Plus, /dev/random is slow; you can't really use it to benchmark write speed.  On my current system (an old laptop), it can only generate about 24MB/s.

Also, the block sizes in your two examples were different (16384 vs 16386).  It may not have had much effect, but use apples-to-apples comparisons.
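
Something along these lines keeps the comparison fair (a sketch; `TARGET` and the sizes are placeholders, and pointing it at a real device is destructive, so double-check first). `/dev/zero` sidesteps the `/dev/random` bottleneck, and `bs`/`count` stay identical across runs:

```shell
# Apples-to-apples sequential write benchmark (sketch).
# Substitute the device under test for TARGET,
# e.g. /dev/ad2 or /dev/gvinum/raid5_test (DESTRUCTIVE on a real device).
TARGET="${TMPDIR:-/tmp}/dd_bench.$$"

# Same source, same block size, same amount of data for every run,
# so the reported throughputs are directly comparable.
dd if=/dev/zero of="$TARGET" bs=64k count=1024

rm -f "$TARGET"
```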


----------



## phoenix (Dec 20, 2010)

The big difference between RAID5 and raidz comes from the Copy-on-Write feature of ZFS.

On a RAID5 array, writing 128 KB of new data into a 1 MB file requires reading the entire 1 MB file off the array, changing the 128 KB in memory, recomputing the parity bits, then writing the entire 1 MB back out to disk.

On a raidz vdev, writing 128 KB of new data into a 1 MB file requires calculating the checksum on the 128 KB block, writing the 128 KB to disk, and updating the block pointers to point to the new data.  No reads required.

RAID5 updates are always "read", "update", "write".  All operations on ZFS are just "write", "update block pointers", since no data is ever overwritten.

Thus, benchmarks for RAID5 arrays will be slowed down by all the reads, while benchmarks for raidz will always be streaming writes.


----------



## phoenix (Dec 20, 2010)

You may want to try graid5 instead of gvinum(8), as there will be less complexity that way.


----------

