# Hardware is ready; what tests shall we run?



## dvl@ (Feb 14, 2013)

I think we have all the hardware assembled, and I've run a few tests, but now it's time to select a group of tests to run on each set of HDDs and see how they compare.

I've added a blog post which outlines the hardware so you know what we have to test.  In short, we have a 4x2TB Seagate raidz1 array, three individual 3TB HDDs (Toshiba, WD, and Seagate), one 2TB Seagate, one SSD, and the ability to create an 8x2TB raidz2 array of Seagates.

I can run a simple bonnie++ test.

But I think what I'm looking for is some fio tests to run.  A simple search didn't turn up any 'standard' set of tests.
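For example, a simple random-read job might be a starting point.  This is just a sketch: the job name, file name, and parameters (`bs`, `size`, `runtime`, `iodepth`) are placeholders to adjust for the hardware, not a recommended standard.

```shell
# Write out a hypothetical fio job file for a 4k random-read test.
# (Parameters are illustrative only.)
cat > randread.fio <<'EOF'
[randread]
rw=randread
bs=4k
size=1g
ioengine=posixaio
iodepth=4
runtime=60
time_based
EOF
cat randread.fio
```

Then run it with `fio randread.fio` (assuming fio from sysutils/fio), repeating the job on each disk or array under test.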

Got ideas?


----------



## wblock@ (Feb 14, 2013)

A test that might or might not have interesting results: use dump(8)/restore(8) to duplicate the FreeBSD system on each of the individual drives.  Boot from each and do a buildworld.

The times might not be that interesting because the hard drives are all probably similar in performance.  The SSD will win, but by how much?  That could illustrate how much a buildworld depends on I/O rather than CPU.  Doing the same buildworld from a root-on-ZFS setup could be interesting also.  Even if the results are not surprising, they would at least verify the common assumptions.
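A minimal timing harness for that comparison could look like the following.  This is a sketch: the disk name is a placeholder, and the buildworld itself is commented out so the script is safe to paste.

```shell
# Record wall-clock buildworld time per boot disk (hypothetical harness).
log=buildworld-times.txt
disk=da0                        # change per boot: da0, da1, ...
start=$(date +%s)
# make -C /usr/src -j4 buildworld >/dev/null    # the actual benchmark
end=$(date +%s)
printf '%s: %d seconds\n' "$disk" $((end - start)) | tee -a "$log"
```

Appending to one log file across reboots gives a single table to compare at the end.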


----------



## dvl@ (Feb 15, 2013)

Sounds feasible.  Something along the lines of what the Handbook says?

e.g.


```
cd /mnt/root && dump -0b 512 -f - / | buffer -S 2048K -p 75 | restore -rb 512 -f -
```

Oddly enough, I found that reference on an obscure website.  

Repeat for each mount point.
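That repetition could be scripted along these lines.  A sketch only: the mount points are hypothetical, and the actual dump|restore pair is commented out since it needs root and a prepared target filesystem.

```shell
# Loop the dump|restore pair over each filesystem to be copied.
for fs in / /usr /var; do
  dest="/mnt/root${fs}"
  echo "would copy ${fs} -> ${dest}"
  # mkdir -p "$dest" && cd "$dest" && \
  #   dump -0b 512 -f - "$fs" | buffer -S 2048K -p 75 | restore -rb 512 -f -
done > copy-plan.txt
cat copy-plan.txt
```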


----------



## wblock@ (Feb 15, 2013)

Here is another one. 

Incidentally, anything higher than 64 for -b might be bad news.


----------



## dvl@ (Feb 15, 2013)

wblock@ said:

> Incidentally, anything higher than 64 for -b might be bad news.



And why might that be?


----------



## wblock@ (Feb 15, 2013)

When searching for ways to speed up dump(8), I found reports that -b64 was roughly twice as fast as -b32.  However, the next logical step, using even larger values, resulted in failures.  Unfortunately, I don't think I saved a pointer, but I'm sure they were on one of the FreeBSD mailing lists.
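One way to check that claim locally might be a loop like this.  Again a sketch: the dump invocations are commented out because they need root and a real filesystem (`-a` bypasses tape-length calculations, and `-f /dev/null` discards the output so only read speed is measured).

```shell
# Compare dump(8) throughput at different block sizes (hypothetical sketch).
for b in 32 64; do
  echo "timing: dump -0a -b ${b} -f /dev/null /usr"
  # /usr/bin/time dump -0a -b "${b}" -f /dev/null /usr
done > dump-plan.txt
cat dump-plan.txt
```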


----------



## dvl@ (Feb 15, 2013)

Hmmm, do you have an easy way to change the boot device remotely?  Boot from da0 this time, da1 next time, etc.


----------



## wblock@ (Feb 15, 2013)

Remotely?  Well, there are the bootme and bootonce attributes for GPT partitions.  See gpart(8).


----------



## dvl@ (Feb 15, 2013)

Sounds like a way to brick my machine when I'm away.... Perhaps I'll wait until next week.


----------



## dvl@ (Feb 17, 2013)

Idea: put the OS onto an SSD and use that as the source.  This would eliminate the copy's source as a possible bottleneck.


----------



## dvl@ (Feb 19, 2013)

I have copied the base OS to both the SSD (mounted at /sdd) and to a 5-disk raidz2 array (mounted at /mnt).  From there I'll do two make buildworlds: the first from /sdd/usr/src, the second from /mnt/usr/src.

I'm hoping that'll be enough of a test without necessitating booting from the respective drives.


----------



## wblock@ (Feb 19, 2013)

The nice thing about rebooting for benchmarks is that it clears all buffered filesystem stuff.


----------



## dvl@ (Feb 19, 2013)

Rebooting can be done.  That's not a problem.  But it'll be from the same gmirror each time.


----------



## tingo (Feb 20, 2013)

Are you looking for an easy way to change the boot device temporarily?
If so, this might help:


```
gpart set -a bootonce -i <n> <device>
```


----------



## dvl@ (Feb 20, 2013)

tingo said:

> Are you looking for an easy way to change boot device temporarily?
> If so,
> 
> 
> ...



That is so sexy!


----------



## dvl@ (Feb 20, 2013)

Hmmm, how does this work in conjunction with BIOS boot devices?


----------



## _martin (Feb 20, 2013)

I did post it here once; not sure which thread it was.  These ZFS best practices are worth mentioning.

Especially:


```
RAIDZ Configuration Requirements and Recommendations

A RAIDZ configuration with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised.

    Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
    Start a double-parity RAIDZ (raidz2) configuration at 6 disks (4+2)
    Start a triple-parity RAIDZ (raidz3) configuration at 9 disks (6+3)
    (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 6
```
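Plugging the array under discussion into that formula, as a quick sanity check (TB figures are raw capacity, before filesystem overhead):

```shell
# Usable capacity is approximately (N - P) * X for a raidz array:
# N disks total, P parity disks, X TB per disk.
N=8; P=2; X=2                       # the proposed 8x2TB raidz2
usable=$(( (N - P) * X ))
echo "8x2TB raidz2: roughly ${usable} TB usable"
```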

Recently I too built a new server and wanted to post some tests to compare.  I have six 2TB REDs in raidz2 with one Intel 520 SSD as a cache device, all hooked up to an LSI SAS 9211 on an S1200BTS board.  One pool is partially encrypted, with hardware AES support.

But I'm not that familiar with any "standard" test either; I can share my bonnie++ results to compare, though.


----------



## dvl@ (Feb 20, 2013)

I can do an 8x2TB disk raidz1 or raidz2.  What I'm testing with now is constrained by having the 3x3TB HDDs in the server.

At this point, it seems clear which HDD to buy, based on the tests so far.

Or would any of you do more tests before making a purchasing decision?


----------



## wblock@ (Feb 21, 2013)

dvl@ said:

> That is so sexy!



Um... but not back when I posted it in message number 8?


----------



## dvl@ (Feb 21, 2013)

wblock@ said:

> Um... but not back when I posted it in message number 8?



Sorry, no. I didn't get it then.


----------



## dvl@ (Feb 21, 2013)

I am now of the opinion that I don't need any more tests to find which HDD of the three 3TB disks I've been looking at.

Anyone think I'm wrong?  I think the Toshiba is the fastest.


----------



## Anonymous (Feb 21, 2013)

dvl@ said:

> I am now of the opinion that I don't need any more tests to find which HDD of the three 3TB disks I've been looking at.
> 
> Anyone think I'm wrong?  I think the Toshiba is the fastest.



Did you post the results anywhere?


----------



## dvl@ (Feb 21, 2013)

Good point.  The G+ post is at http://bit.ly/12YnoB7 which points to this Google Docs spreadsheet: http://bit.ly/Ww27cR


----------



## _martin (Feb 22, 2013)

dvl@ said:

> I can do an 8 x2TB disk raidz1 or raidz1.


But neither of those would be in line with the recommendations pasted above. If you can't do more disks, maybe you can go with 6x3TB.
Once you have those disks you can check and compare the performance difference with raidz1 on 3x and 4x setups (and raidz2 with 6x and 8x).


----------



## dvl@ (Feb 22, 2013)

matoatlantis said:

> But neither of that would be in line with recommendation pasted above. If you can't do more disks, maybe you can go with 6x3TB.
> Once you have those disks you can check and compare performance difference with raidz1 on 3x and 4x setup (and raidz2 with 6x and 8x).



The recommendation pasted above in message #17 doesn't indicate to me that an 8 disk raidz2 is not recommended.

Can you elaborate please as to how it goes against recommendation?


----------



## _martin (Feb 22, 2013)

dvl@ said:

> The recommendation pasted above in message #17 doesn't indicate to me that an 8 disk raidz2 is not recommended.
> 
> Can you elaborate please as to how it goes against recommendation?



Paragraph _Should I Configure a RAIDZ, RAIDZ-2, RAIDZ-3, or a Mirrored Storage Pool?_ gives you some pointers. 

I found this zfs mailing list very helpful.


----------



## dvl@ (Feb 22, 2013)

I don't see the phrase "Should I Configure a RAIDZ, RAIDZ-2, RAIDZ-3, or a Mirrored Storage Pool?" at that zfs mailing list URL.

Googling for that phrase, I found http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Should_I_Configure_a_RAIDZ.2C_RAIDZ-2.2C_RAIDZ-3.2C_or_a_Mirrored_Storage_Pool.3F

At the URL you posted in message #26, I found references to using 3-, 5-, or 9-disk arrays, but only early in the discussion.  Reading further into the thread, I concluded there is no recommendation against an 8-disk raidz2 array.  The issue discussed at the start of the post seems to be about rounding and skipped sectors, and later on it seems that issue is not as big as it once was, or as it was once perceived to be.

Am I reading it wrong?


----------



## _martin (Feb 23, 2013)

dvl@ said:

> I dont' see the phrase "Should I Configure a RAIDZ, RAIDZ-2, RAIDZ-3,


It's right there: paragraph 1.2.4 on the Solaris Internals page.

The mailing list explains the optimal number of disks in a vdev of a certain type, and why (it doesn't say what not to use; rather, it explains the optimal size).

Anyway, you'll have your disks; you can do your tests and verify it yourself. I gave my 2¢.


----------



## tingo (Feb 23, 2013)

dvl@ said:

> Hmmm, how does this work in conjunction with BIOS boot devices?



Two separate things.  The BIOS/UEFI will boot from the default device, or from the device you tell it to boot from.  gpart(8) says it will look for partitions, but doesn't say anything about drives.  I really wish there was a man page for gptboot.


----------



## wblock@ (Feb 23, 2013)

A few weeks ago, I added some information about how gptboot and gptzfsboot work in the -CURRENT version of the gpart(8) man page.  I have not yet MFCed it, because after doing it I realized that it should really be in separate man pages... which I myself have proposed before, but forgot.

It's been suggested that gptboot is very similar to the old MBR booting and could be described in boot(8).  gptzfsboot needs a separate page.  A thread from the freebsd-doc mailing list starts here.

If you can help on details or even just suggesting an existing man page to use as a template, please PM or email me.


----------

