# ZFS - Differences between NOP-write and Deduplication



## Rulus (Jan 27, 2014)

Hi.

The thing is, I don't know what the difference between them is, or which one I should use. I use snapshots, so I would like to save space.

Thanks and greetings.


----------



## usdmatt (Jan 28, 2014)

I've used ZFS since FreeBSD 7.something and have answered numerous questions on here, but I still had to go and look up what nop-write was about. It seems fairly straightforward. If a record of data is about to be "overwritten", ZFS can detect whether the new record is exactly the same as the old one and, if it is, not bother doing anything. As ZFS doesn't actually overwrite data in place, if the new record had been written, it would have been written somewhere new, and the old copy would have been kept (assuming you have snapshots that reference that record). With nop-write it's as if nothing ever happened: the snapshot still points to the live record, and so does the live dataset. Unlike dedupe, this will not save space for duplicate 'live' data, only for snapshots.

Personally I wouldn't want to rely on low-level features like this to save space. It's more of a clever optimisation than something that users should make use of. The commit message in FreeBSD is also fairly specific about the requirements for it to work (although it's possible these may have changed since then):



> It currently works only on datasets with enabled compression, disabled
> deduplication and sha256 checksums.
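If you did want to line a dataset up with those requirements, it would look something like this (`tank/data` is just a placeholder name, not anything from the original post):

```shell
# Hypothetical dataset name; substitute your own.
# Enable compression (lz4 is a good choice where supported).
zfs set compression=lz4 tank/data

# Use sha256 checksums instead of the default fletcher4.
zfs set checksum=sha256 tank/data

# Ensure deduplication is disabled (it is off by default anyway).
zfs set dedup=off tank/data
```

Note this only satisfies the preconditions quoted above; as said below, there's no counter that tells you nop-write is actually kicking in.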



Neither sha256 checksums nor compression is a default option, so you'd need to enable both of these, and even then you have no certainty that nop-write is working or whether it's actually saving any space. Dedupe and compression, by contrast, both give you instant access to savings ratios.
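For example, those ratios can be read directly as properties (pool and dataset names here are placeholders):

```shell
# Compression ratio achieved on a dataset (hypothetical name).
zfs get compressratio tank/data

# Deduplication ratio for the whole pool.
zpool get dedupratio tank
```

There is no equivalent property for nop-write savings, which is the point being made above.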

Unless you've put a decent amount of preparation and research in, dedupe can cause problems if you're not careful. I would just enable lz4 compression and be done with it. (You can change the checksum to sha256 as well if you like, in the hope that the nop-write feature will come into effect.) I've seen lz4 compression reach 2x savings in multiple situations.

Of course, you shouldn't really knowingly under-provision storage and rely on dedupe/compression/whatever to stop you from filling it.

How many snapshots are you planning to keep, and does your data change enough that you expect snapshots to use a serious amount of space?
Most average users can easily store a year's worth of snapshots without really using that much space in the grand scheme of things.
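If you want to check how much space snapshots are actually costing you, something like this works (`tank/data` is again a placeholder):

```shell
# List snapshots with the space each one holds exclusively.
zfs list -t snapshot -o name,used,referenced

# Total space consumed by all snapshots of a dataset.
zfs get usedbysnapshots tank/data
```

The `used` column on a snapshot only counts blocks unique to that snapshot, so the `usedbysnapshots` property is the better number for the overall cost.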


----------

