# Does ZFS reduce HDD available space?



## jigzat (Apr 5, 2013)

Hello everyone again. As some of you might have read, I have a machine that works as a media server. It now has 3 HDDs (a 500GB boot drive and two 2TB WD Green HDDs); each of the 2TB HDDs is in its own ZFS pool. I also have two spare NEAR-RAID1 2TB Seagate Barracudas, and I say "near" because they are just two boxed HDDs in anti-static bags, kept in case the other two fail.

Now, the spare ones are formatted in HFS and show up on the Mac as 2TB, but the ones in the server show up as 1.96TB AFP shares. I know that manufacturers state their sizes in decimal orders of magnitude, but the two pairs effectively have different usable sizes: when I moved the files from the HFS pair to the ZFS pair, I couldn't move all the files because there wasn't enough space.

I know that FreeBSD reserves some space when one formats a drive with UFS; does the same thing happen with ZFS? Is the missing space used to store checksums for file integrity, and if not, where did the space go?


----------



## olav (Apr 6, 2013)

ZFS does not reserve space. If you run out of space you're in trouble, as ZFS needs to write additional data even to delete blocks. However, when you create a ZFS filesystem you can use the reservation property to safeguard yourself.


----------



## throAU (Apr 15, 2013)

Apple reports disk space the same way drive manufacturers do: using decimal representations of kilobytes, megabytes, etc. I believe this behaviour started with OS X 10.7 Lion, or maybe 10.6 Snow Leopard.

FreeBSD, Linux, Windows, etc. use binary based representations of kilobytes, megabytes, etc.

This probably explains the discrepancy in reported disk space.

IMHO the whole decimal-for-storage, binary-for-everything-else convention is ridiculous (though I can see why Apple have done it - because the drive manufacturers have), but that's another issue.


----------



## jigzat (Apr 16, 2013)

Thank you both for the answers, but there is definitely something weird going on with my server. I had these two HDDs formatted in HFS, and each was holding nearly 2TB of data (there was almost no space left). I made a backup onto yet another two 2TB HDDs, took the first ones to the server and formatted them with ZFS, then tried to move the files back from the backup. I couldn't move all of them; I was short about 20GB.

I will try a different brand, since I had to return one of the backup HDDs.


----------



## Savagedlight (Apr 16, 2013)

ZFS does use some space for metadata, such as the checksums used for integrity checking.
If you're running out of space, try `# zfs set compression=on tank`, which enables LZJB compression. It is a very fast algorithm; although compression isn't recommended for latency-sensitive applications such as database servers, it shouldn't be an issue for bulk storage.

If that's not enough, you could try `# zfs set compression=gzip tank` or `# zfs set compression=gzip-N tank`.

See the zfs(8) manual for more information.
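As a sketch of the workflow (assuming a pool named `tank`, as in the commands above), you can enable compression and later check how much it actually gained you:

```shell
# Enable fast LZJB compression; note this only affects data written
# after the change, existing blocks stay uncompressed.
zfs set compression=on tank

# Later, inspect the property in effect and the achieved ratio.
zfs get compression,compressratio tank
```

For already-compressed media files the `compressratio` will stay close to 1.00x, which is why compression mainly pays off on text-like data.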


----------



## RedRat (Apr 16, 2013)

ZFS reserves 1/64 of the pool's space for its metadata, so if your pool size is 2TB then ZFS datasets on it can use only 2*63/64 = 1.96875 TB.


----------



## jigzat (Apr 16, 2013)

Cool, thank you very much, I was suspecting that. I will try compression as soon as I have my HDD replacement and a new backup.


----------



## User23 (Apr 17, 2013)

jigzat said:

> Cool, thank you very much, I was suspecting that, I will try to compress as soon as I have my HDD replacement and a new backup.


If it is a media server, most of the data will already be compressed. There is no need to compress media files like MPEG; it's a waste of time and power. And BTW, filling HDDs to more than 90% of their capacity is not a good idea. If filled sequentially it is OK, but if you start deleting some files and writing new ones it really slows down the HDD. Usually you don't fill a ZFS pool beyond 80% of its space.
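To keep an eye on that threshold, a sketch (again assuming a pool named `tank`):

```shell
# Show pool size, allocated space, free space, and percent used;
# try to keep the CAPACITY column below roughly 80%.
zpool list -o name,size,allocated,free,capacity tank
```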


----------

