# FreeBSD 9.1 and zfs



## akil (Jan 3, 2013)

Hi

I have a question. Do I need to recreate my ZFS pools if I want to use FreeBSD 9.1 with the updated ZFS? Maybe there is a way to update my pool in place and avoid copying and other boring things.

As I read, the new FreeBSD 9.1 has additional improvements, but I don't know how they would affect my current pool, which was created on 9.0.

Here is a small hint from the release notes: "ZFS improvements from the illumos project".


----------



## Savagedlight (Jan 3, 2013)

It seems like these improvements are transparent, and won't need an upgrade of the pool or file systems. Please correct me if I'm wrong.

As for upgrading the pool, look in the zpool(8) and zfs(8) man pages for the "upgrade" command.
I strongly recommend testing that everything else is working as it should, and backing up any data you don't want to lose, before running the upgrade.
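As a sketch of that check-then-upgrade workflow (the pool name `tank` and the backup path are just examples, not anything from the thread):

```shell
# Check pool health and current on-disk version before touching anything
zpool status tank
zpool get version tank

# Snapshot everything recursively and stream a backup somewhere safe first
zfs snapshot -r tank@pre-upgrade
zfs send -R tank@pre-upgrade | gzip > /backup/tank-pre-upgrade.zfs.gz

# Only then upgrade the pool, followed by its file systems
zpool upgrade tank
zfs upgrade -r tank
```

Remember that a pool upgrade is one-way: once upgraded, the pool can no longer be imported by older releases.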


----------



## usdmatt (Jan 4, 2013)

FreeBSD 9.1 includes the feature-flag version of ZFS, which reports as ZPOOL version 5000. If your pool is at version <= 28 then you will most likely want to run a zpool/zfs upgrade (well, I say "want"; there's no real requirement to upgrade, and you can leave it at an older version if you like). Just running the following with no arguments will tell you if any pools or file systems can be upgraded:


```
zpool upgrade
zfs upgrade
```

"Improvements" that are just bug fixes or changes to the ZFS code itself will obviously take effect without upgrading the pool or file systems.
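If you want more detail than the bare `upgrade` output, something like the following works (pool and dataset names here are examples, not from the thread):

```shell
# List every ZFS pool and file-system version this release supports,
# including the feature flags behind "version 5000"
zpool upgrade -v
zfs upgrade -v

# Query the on-disk version of a specific pool and dataset
zpool get version tank
zfs get version tank/home
```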


----------



## BlueCoder (Jan 7, 2013)

*What I'd like to see*

Improvements I would like to see:

- Whiteouts, for unionfs.
- Converting zpools between different types.
- Being able to grow, shrink, and realign all pool types.

I'm perfectly fine with the last two being offline actions. They should still be faster than `zfs send | zfs recv`.

A feature I would love, though it might require too much of an architecture change, is a better deduplication method at the zpool/file level. Data would be shared between volumes, much like hard links. Then run a low-priority daemon that searches for duplicate files, plus a command-line utility that manually "hard links" two identical files on different volumes in the same pool, freeing the storage used by one of them. Also add cp and mv routines so that file data isn't copied. I think this would be a far more useful type of deduplication than how it is done at the block level today. With it, one could achieve space savings somewhat similar to cloning.


----------

