# mSATA Device for ZIL MLC or SLC



## minimike (Sep 16, 2012)

Hi there

I've bought a new server for my company with 96 GB of memory and two Xeons. The box also has two 240 GB SSDs for the operating system (FreeBSD 9.1) and sixteen 3 TB drives to use as storage. The storage seems to be badly configured, because I used more than 9 drives in one vdev, and I didn't think about a ZIL beforehand. It's currently very slow, and I have painfully come to understand what I did wrong.
So tomorrow at work I will back up the data and destroy the current zpool to build a better one. Now I have to think about a ZIL. My next problem is that it's impossible to put more drives or SSDs in this case, because it's full. The only option is to mount two mSATA drives on two PCIe slots inside. Currently I have my eye on two models of mSATA drives. One is an

ADATA SX300 mSATA SSD 64 GB


```
read speed                      550 MB/s
write speed                     485 MB/s
random write (4 KB)             65000 IOPS
interface speed                 6 Gbit/s

S.M.A.R.T. support              yes
TRIM support                    yes
ECC                             yes
Native Command Queuing (NCQ)    yes
controller                      LSI SandForce SF-2281
type                            MLC
```

The other one is an

Intel 311 Series mSATA SSD, 20 GB

```
read speed              200 MB/s
write speed             105 MB/s
random write (4 KB)     3300 IOPS
random read (4 KB)      37000 IOPS
type                    SLC
```


I'm an absolute novice when it comes to these. Which model should I buy, the MLC or the SLC? And please tell me if you think my idea is completely idiotic.

Cheers, Darko


----------



## wblock@ (Sep 16, 2012)

Two 240G SSDs for FreeBSD alone is... a lot.

On some systems, the mSATA connector is tied to another SATA connector.  One or the other can be used, but not both.

Possibly you could partition the mirrored SSDs, like 40G for FreeBSD and the rest for a mirrored ZIL.
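A layout along those lines could be sketched like this (hypothetical device names ada0/ada1 and pool name tank; the 40G size is only an illustration):

```shell
# Carve each mirrored SSD into a 40G OS slice and a ZIL slice (sketch).
gpart create -s gpt ada0
gpart add -t freebsd-zfs -a 4k -s 40G -l os0 ada0
gpart add -t freebsd-zfs -a 4k -l zil0 ada0
gpart create -s gpt ada1
gpart add -t freebsd-zfs -a 4k -s 40G -l os1 ada1
gpart add -t freebsd-zfs -a 4k -l zil1 ada1

# Attach the two ZIL partitions to an existing pool as a mirrored log device:
zpool add tank log mirror gpt/zil0 gpt/zil1
```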


----------



## minimike (Sep 16, 2012)

wblock@ said:

> Two 240G SSDs for FreeBSD alone is... a lot.
> 
> On some systems, the mSATA connector is tied to another SATA connector.  One or the other can be used, but not both.
> 
> Possibly you could partition the mirrored SSDs, like 40G for FreeBSD and the rest for a mirrored ZIL.



Then I'd have to reinstall a production-ready setup. And 64 GB is swap, plus 100 GB for databases. That would not be an option for me.

And we would buy an mSATA PCIe adapter. PCIe is just for power; a SATA cable goes from each adapter to the SATA controller.


----------



## gkontos (Sep 16, 2012)

Using 16 drives for a pool is hardly a bad thing. It all depends on how you configure your pool. For example, if you use two striped RAIDZ2 vdevs, your write performance can easily reach 500 MB/s, giving you 36 TB of available storage.

Adding a LOG device might help if you are doing a lot of synchronous writes. With 96 GB of RAM you will not need more than 48 GB for the LOG. Do keep in mind that the ZIL should ideally be mirrored.

Using SSDs for the OS alone is a waste unless you also put SWAP and CACHE there. You will get more benefit if you stripe the CACHE.
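A pool along those lines could be created roughly like this (a sketch; the pool name and the SSD partition labels for LOG and CACHE are made up):

```shell
# Two striped RAIDZ2 vdevs of eight disks each:
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Mirrored log (ZIL) on two SSD partitions, striped cache (L2ARC) on two more:
zpool add tank log mirror gpt/zil0 gpt/zil1
zpool add tank cache gpt/cache0 gpt/cache1
```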

Please describe which controller is being used in this setup and which file-sharing protocols you use.


----------



## wblock@ (Sep 16, 2012)

SLC is better, more expensive flash.  But that Intel is only 20G, meant for Intel's "Smart Response" (which does not work on FreeBSD, AFAIK), and, again, only 20G.

It sounds like you still have open SATA ports.  Is it just a matter of a place to mount the SSDs?


----------



## minimike (Sep 16, 2012)

Hello gkontos,

Many thanks for your reply. The details are at my office right now, but I think I bought a 3ware 9750SA-8I, which is handled by the tws(4) driver.

The primary job of this box is to serve Bacula. The secondary planned/used protocols are mostly AFP, serving up to 50 Mac OS X clients, and CIFS for Windows. FTP and NFSv4 are planned for some external targets in the future, plus a few HTTP sites with very, very little traffic. The box is also a member of an Active Directory domain. The port net/samba36 is installed; Winbind fetches the users and hands them to PAM, and Netatalk authenticates against PAM with the users provided by Winbind. The employees want TimeMachine for OS X on the network, and this box delivers that "fine, but not really beautiful" solution. Second, Bacula backs up the Mac OS X clients once a day during the week.
Bacula will also back up all other devices and servers at my company: to this storage from Monday to Friday, with the backups then written to a 48 TB tape library at the weekend.

This box is at the stage of a base system. We will begin with 16 drives. If it works well, I should get more money to buy the tape library first and, after that, an external case for 12 or 16 more drives. Maybe in six months.

To reduce Bacula's backup traffic, the plan is to also serve the Active Directory users' profiles and data from this box in the future.


----------



## minimike (Sep 16, 2012)

wblock@ said:

> SLC is better, more expensive flash.  But that Intel is only 20G, meant for Intel's "Smart Response" (which does not work on FreeBSD, AFAIK), and, again, only 20G.
> 
> It sounds like you still have open SATA ports.  Is it just a matter of a place to mount the SSDs?



Yes. I could put two PCIe adapters like this inside the box:

http://www.hwtools.net/adapter/MP3S.html

Power comes from PCIe, and then two cables run to SATA ports on the mainboard. Four ports are currently unused.

That's why I'm asking about mSATA for the ZIL. It sounds like it would be the silver bullet for my problem, and it looks very easy to implement.


----------



## gkontos (Sep 16, 2012)

@minimike,

I don't have any experience with that specific controller. In those cases I usually go with LSI HBA controllers (like the one you have) that run under the mps(4) driver.

I am not sure a separate ZIL will help Bacula. What I/O are you getting now?


----------



## kpa (Sep 16, 2012)

As far as I know, the ZIL is only used for synchronous writes, for example database transactions where atomicity is a big concern. NFS also uses synchronous writes, which is why a separate ZIL is recommended when using NFS with ZFS. If Bacula does not use synchronous writes, or the writes are made through a network filesystem that does not use synchronous writes, it's not going to benefit from a separate ZIL device.
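One way to see and control this per dataset is the sync property (available from ZFS pool version 28 on; the dataset name is made up):

```shell
# Show how synchronous writes are handled on a dataset:
zfs get sync tank/bacula
# sync=standard  (default) sync writes go through the ZIL
# sync=always    every write is treated as synchronous
# sync=disabled  sync requests are ignored (fast, but data can be lost on power failure)
zfs set sync=standard tank/bacula
```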


----------



## minimike (Sep 17, 2012)

gkontos said:

> Using 16 drives for a pool is hardly a bad thing. It all depends on how you configure your pool. For example if you use 2 striped RAIDz2 your performance can easily reach to 500MB in writes giving you 36TB of available storage.



You mean something like this?



```
mightychicken# zpool list -v
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
brainpool   43.5T  7.07G  43.5T     0%  1.00x  ONLINE  -
  raidz2    21.8T  3.52G  21.7T         -
    da0         -      -      -         -
    da1         -      -      -         -
    da2         -      -      -         -
    da3         -      -      -         -
    da4         -      -      -         -
    da5         -      -      -         -
    da6         -      -      -         -
    da7         -      -      -         -
  raidz2    21.8T  3.55G  21.7T         -
    da8         -      -      -         -
    da9         -      -      -         -
    da10        -      -      -         -
    da11        -      -      -         -
    da12        -      -      -         -
    da13        -      -      -         -
    da14        -      -      -         -
    da15        -      -      -         -
```


----------



## gkontos (Sep 17, 2012)

minimike said:

> You mean something like this?



Yes, but you have to align the disks for 4K sectors; otherwise you will see a significant performance loss.


----------



## minimike (Sep 17, 2012)

gkontos said:

> Yes, but you have to align the disks for 4K sectors; otherwise you will see a significant performance loss.



Do you use gnop(8) for that?


----------



## gkontos (Sep 17, 2012)

minimike said:

> Do you use gnop(8) for that?



Yes, and you may also want to use gpart(8) as described here.
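The usual gnop trick looks roughly like this (a sketch, reusing the disk and pool names from earlier in the thread). The .nop devices report a 4096-byte sector size, so zpool create picks ashift=12 for the vdevs:

```shell
# Create 4K-sector pass-through devices on top of each disk:
gnop create -S 4096 da0
gnop create -S 4096 da1
# ... and so on for da2 through da15 ...

# Build the pool on the .nop devices:
zpool create brainpool \
    raidz2 da0.nop da1.nop da2.nop da3.nop da4.nop da5.nop da6.nop da7.nop \
    raidz2 da8.nop da9.nop da10.nop da11.nop da12.nop da13.nop da14.nop da15.nop

# Verify the alignment:
zdb brainpool | grep ashift    # ashift: 12 means 4K-aligned
```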


----------



## minimike (Sep 17, 2012)

Hmm, curious.

I created the *.nop devices and then built the zpool on top of them.
After rebooting the box, the zpool came up automatically with da0 through da15, without the *.nop devices.


----------



## Savagedlight (Sep 18, 2012)

*.nop devices are generally destroyed on reboot and not automatically recreated. That does not matter, though, as a vdev's ashift value is set at creation and can't be changed later.

If you "lost" disk/partition labels, you may want to look at the -d argument to zpool import.
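Both points can be checked from the shell (a sketch; the search path given to -d depends on whether the pool was built on labels or on raw da* devices):

```shell
# The ashift set at creation survives the loss of the .nop devices:
zdb brainpool | grep ashift    # still reports ashift: 12

# If the pool was built on labels that no longer show up, export it and
# re-import it, telling zpool where to look for the devices:
zpool export brainpool
zpool import -d /dev/gpt brainpool
```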


----------

