# install BSD on the ZFS l2arc ssd



## M_ (May 26, 2013)

Hello everybody,

First of all, since this is my first post, let me introduce myself. My name is Marco and I am a physics student. For my studies I use mainly Linux (Ubuntu and Scientific Linux), OS X and a little bit of AIX for parallel computing. I have played around a little bit in the past with OpenSolaris and FreeNAS, but my knowledge of FreeBSD is very limited. Fortunately both the documentation and this community provide a lot of material to learn!

I am about to build a small, low-power computer to serve as NAS, Git server and backup machine in my home network, and, given my previous very positive experience with ZFS, I would really like to use it for my data disks.

At the moment I have an AMD E350 motherboard with 8 GB of RAM, Gb ethernet, 2 x 2 TB drives that I plan to use in ZFS mirror + 1 x 2 TB drive to back up the mirror. For the OS I was planning to buy either a 32 or 64 GB SSD.

Given the very limited amount of SATA ports on my setup I was planning to use this SSD, not only for the OS, but also for the ZFS L2ARC and, possibly, ZIL. Now my question is: how should I configure this? I know that the general rule is to have dedicated disks both for L2ARC and ZIL, but is partitioning really not an option? What about mapping ZIL and L2ARC as files on the OS filesystem?

Alternatively I could use a memory card to boot the system and the SSD only for L2ARC + ZIL. How would you configure this installation?

Ultimately, the ZFS installations I did so far were not using L2ARC nor ZIL. If I serve 3-4 computers on a Gb network with NFS, Samba and AFP, how much of a difference do they make?

Thank you very much for reading all the way through, have a nice day,

Marco


----------



## gkontos (May 26, 2013)

Just a few pointers.

- For ZFS you want to use ECC memory.
- A ZIL will increase your write speeds as long as the writes are synchronous (databases, NFS). The ZIL should be no larger than your RAM, usually about half of it.
- An L2ARC will increase your read performance. There is no rule of thumb regarding its size; the more the better.
- In your case, overall performance depends a lot on the CPU. I have seen machines able to push 90 MB/s without any separate devices.
- net/samba36 can often cause headaches, and you need to compile it with AIO support. I have not tried Samba 4 yet.
- AFP works a bit faster, especially with net/netatalk3 over net/netatalk2.

If you intend to use an SSD then yes, you can use the same SSD for the OS, L2ARC & ZIL.

You might also want to upgrade your system to 9.1-STABLE, which includes the latest ZFS feature flags, and use LZ4 compression.
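For example, once the pool is at a feature-flag-capable version, enabling LZ4 takes two commands (the pool name `tank` here is illustrative):

```shell
# Enable the lz4_compress feature flag on the pool, then turn on
# LZ4 compression; child datasets inherit the property automatically.
zpool set feature@lz4_compress=enabled tank
zfs set compression=lz4 tank
zfs get compression tank   # verify the property took effect
```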


----------



## M_ (May 26, 2013)

Dear @gkontos, thanks for your reply.



> For ZFS you want to use ECC memory.



Unfortunately ECC memory is not supported by this hardware, and, back when I bought it, I could not find a low-power system (~10 W idle) that supports it. I might upgrade when the next generation of CPUs comes out. Anyway, although I totally agree that ECC is in general a good tool to avoid trouble, I would use the system mainly for long-term data archiving. Provided there are no memory bit flips the very first time the data get written to disk (which I would probably spot, since I tend to checksum source and target after moving stuff), I should be safe, shouldn't I? I just need to know that if I access the data today or in 5 years, they are the same bits I entered. I don't mind reading them twice, as long as I can trust that the on-disk copy is pristine.
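For illustration, the kind of check I mean is just this (paths are illustrative; FreeBSD has sha256(1), but a plain byte comparison with cmp(1) works everywhere):

```shell
# Copy a file to the archive, then verify the copy byte-for-byte
# before trusting it; a bit flip during the copy would be caught here.
echo "archive me" > /tmp/src.dat
cp /tmp/src.dat /tmp/dst.dat
cmp -s /tmp/src.dat /tmp/dst.dat && echo "copies match"
```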



> ZIL will increase your speeds as long as the writes are synchronous (Databases, NFS). ZIL space <= RAM. Usually 1/2



I know it is possible to force NFS to run async; is this a bad idea? It would then make a ZIL unnecessary.



> If you intend to use an SSD then yes, you can use the same SSD for the OS, L2ARC & ZIL.



Regarding the actual implementation of having L2ARC and, possibly ZIL on the OS disk, what would you suggest? Multiple partitions or dedicated files?



> You might also want to upgrade your system to 9.1-STABLE that includes the latest ZFS feature flags and use LZ4 compression.



I will make sure to upgrade to 9.1-STABLE and try compression, thanks for the tip!

Thank you again,
Marco


----------



## gkontos (May 26, 2013)

M_ said:

> Dear gkontos,
> Unfortunately ECC memory is not supported by this hardware, and, back when I bought it, I could not find a low power system (~10W idle) that supports it. I might upgrade when the new generation of cpu comes out. Anyway, although I totally agree that ECC is in general a good tool avoid troubles, the use I would do of the system is mainly long term data archiving. Provided that the very first time the data get written on the disk, there are no memory bit flip (which I would probably spot since I tend to checksum source and target after moving stuff), I think I should be safe, should I not? I just need to know that if I access data in, today or in 5 years, they are the same bits I entered. I don't mind reading them twice, as long as I can trust that the on-disk copy is pristine.



ZFS relies on memory for every disk operation. Faulty memory means bad data written to disk. This is a risk that you have to accept.



M_ said:
> I know there is the possibility to force NFS to run async, is this a bad idea? It would then make ZIL unnecessary.



Again, it depends. But generally speaking I never use a ZIL device for a file server. 
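If you skip a separate ZIL and are happy with async semantics, ZFS can also relax synchronous writes per dataset (the dataset name is illustrative); be aware this trades crash safety of recently acknowledged writes for speed:

```shell
# Disable synchronous write semantics on the exported dataset.
# Data already acknowledged to clients can be lost on power failure.
zfs set sync=disabled tank/export
zfs get sync tank/export
```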



M_ said:

> Regarding the actual implementation of having l2arc and, possibly ZIL on the os disk, what would you suggest? Multiple partitions or dedicated files?



I usually perform CLI installations using gpart(8), allocating space marked as freebsd-zfs for my *cache* partitions.
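Once those partitions exist, attaching them to the pool is one command each (the pool name and GPT labels are illustrative):

```shell
# Assuming the partitions were given labels with gpart's -l flag:
zpool add tank cache gpt/l2arc   # L2ARC (cache) device
zpool add tank log gpt/zil       # separate ZIL (log) device
zpool status tank                # both should now be listed
```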



M_ said:

> I will make sure to upgrade to 9.1-stable and try compression, thanks for the tip!
> 
> Thank you again,
> Marco


----------



## phoenix (May 27, 2013)

Never use files as backing storage for vdevs in ZFS. Support for file-backed vdevs exists for testing and prototyping, not production use, even at home.

Partition the SSD into 5 partitions using GPT:

- freebsd-boot, 128 KB
- freebsd-ufs, 4-8 GB (10-20 GB if you include /usr/local), starting at 1 MB, mounted as /
- freebsd-zfs, 8-16 GB, for the L2ARC
- freebsd-zfs, 1-2 GB, for the ZIL
- freebsd-swap, 1-2 GB

Leave 4-8 GB unallocated to improve performance; that gives the SSD's garbage collector room to run in the background.

Be sure to read the man page for gpart(8) for the correct syntax to use.  And don't forget to install the correct boot blocks (gptboot if using UFS for / or gptzfsboot if using ZFS).
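As a sketch only (the device name ada0, the labels, and the exact sizes are illustrative; check gpart(8) before running anything):

```shell
# Lay out the five partitions described above on the SSD.
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 128k ada0
gpart add -t freebsd-ufs  -b 1m -s 8g -l os ada0
gpart add -t freebsd-zfs  -s 16g -l l2arc ada0
gpart add -t freebsd-zfs  -s 2g  -l zil ada0
gpart add -t freebsd-swap -s 2g  -l swap ada0
# Boot blocks for a UFS root:
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
```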


----------



## M_ (May 27, 2013)

Thanks, very useful indeed!

Cheers,
Marco


----------

