# Write-Back cache!



## shahzaib (Sep 6, 2018)

Hi,

We are looking to use an NVMe SSD as a write-back cache, purely to enhance write IOPS; reads should be served only by the HDD-based backend storage.

Is this possible with FreeBSD?

Regards.


----------



## SirDice (Sep 6, 2018)

https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/


----------



## shahzaib (Sep 6, 2018)

SirDice said:


> https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/


Hi, thanks for the response. However, I am a bit confused about the usage of a SLOG/ZIL device. Our use case is as follows:

We run a video sharing website where publishers upload videos to a FreeBSD server via HTTP, and these videos are then played back by viewers (just like any other video sharing website). The specific area we're trying to optimize is the disk writing that occurs during video upload. So my very specific question is:

If I set up a SLOG device and upload a 10 MB video via HTTP to the FreeBSD server, will the write go to the SLOG first and be flushed to backend storage later, or will the video be uploaded directly to storage?


----------



## kpa (Sep 7, 2018)

The SLOG/ZIL is only for synchronous writes, to guarantee the atomicity of those writes as much as possible. You won't see synchronous writes unless you use NFS, some SQL databases, or an application that explicitly requests them (which is rare). If you're just storing uploaded data on a local disk, none of the writes are going to be synchronous and the SLOG/ZIL will sit unused.

The purpose of the SLOG/ZIL device is to act as a journal that can be played back, or rolled back completely, on system startup when ZFS detects that the pool has unfinished synchronous writes pending. Normal (asynchronous) writes are never journalled.
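For what it's worth, whether a write is synchronous comes down to how the application issues it: a plain write returns once the data is in memory, while a write followed by fsync(2) does not return until the data is on stable storage, and on ZFS that is the path that involves the ZIL (and the SLOG, if one is attached). A minimal Python sketch of the difference (file names are made up for illustration):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
async_path = os.path.join(tmp, "upload_async.bin")
sync_path = os.path.join(tmp, "upload_sync.bin")

# Buffered (asynchronous) write: write() returns as soon as the data is
# in the OS cache. ZFS batches it into a transaction group; no ZIL/SLOG
# traffic is generated for it.
with open(async_path, "wb") as f:
    f.write(b"x" * 1024 * 1024)

# Synchronous write: fsync() blocks until the data is on stable storage.
# On ZFS this is what lands in the ZIL, and therefore on the SLOG device
# if the pool has one.
with open(sync_path, "wb") as f:
    f.write(b"x" * 1024 * 1024)
    f.flush()
    os.fsync(f.fileno())
```

A typical HTTP upload handler does only the first kind of write, which is why the SLOG would go unused in this workload.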


----------



## bds (Sep 7, 2018)

Have you thought about adding the NVMe SSD as a cache vdev for the pool? That would likely improve performance.


----------



## SirDice (Sep 11, 2018)

bds said:


> Have you thought about adding the NVMe SSD as a cache vdev for the pool? That would likely improve performance.


Cache devices only improve read access (and only in certain cases).

```
Cache devices
     Devices can be added to a storage pool as "cache devices." These devices
     provide an additional layer of caching between main memory and disk. For
     read-heavy workloads, where the working set size is much larger than what
     can be cached in main memory, using cache devices allow much more of this
     working set to be served from low latency media. Using cache devices
     provides the greatest performance improvement for random read-workloads
     of mostly static content.
```
zpool(8)


----------



## KDragon75 (Sep 24, 2018)

I like to think of the ZIL as a backup of the sync writes held in RAM until they're flushed to disk with the TXG. The SLOG is just a "Separate zfs intent LOG" stored on a faster (flash) device; it should never be slower than your pool under ideal conditions.
You could set up the NVMe as its own pool (preferably two in a mirror) and script the file move so that once an upload is done it gets "flushed" to the HDD pool. This seems a bit silly, though, as I would imagine your pool is faster than the cumulative upload rate of your users. If not, add more disks!
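If anyone wants to try that route, here is a rough sketch of the move script (the directory layout and function name are made up for illustration; in practice the landing directory would live on the NVMe pool and the destination on the HDD pool):

```python
import os
import shutil

def drain_landing_dir(landing_dir: str, storage_dir: str) -> list:
    """Move completed uploads from the fast landing pool to the HDD pool.

    Assumes the upload handler writes into a temporary name and only
    renames the file into landing_dir once the upload has finished, so
    anything visible here is safe to move.
    """
    moved = []
    os.makedirs(storage_dir, exist_ok=True)
    for name in sorted(os.listdir(landing_dir)):
        src = os.path.join(landing_dir, name)
        if os.path.isfile(src):
            # shutil.move copies across filesystems (i.e. across pools)
            # and then unlinks the source on the landing pool.
            shutil.move(src, os.path.join(storage_dir, name))
            moved.append(name)
    return moved
```

You would run something like this from cron, or trigger it from the upload handler itself once a file is complete.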


----------



## kpa (Sep 25, 2018)

KDragon75 said:


> I like to think of the ZIL as a backup of the sync writes held in RAM until they're flushed to disk with the TXG. The SLOG is just a "Separate zfs intent LOG" stored on a faster (flash) device; it should never be slower than your pool under ideal conditions.
> You could set up the NVMe as its own pool (preferably two in a mirror) and script the file move so that once an upload is done it gets "flushed" to the HDD pool. This seems a bit silly, though, as I would imagine your pool is faster than the cumulative upload rate of your users. If not, add more disks!



This is just not correct. The ZIL is the actual log on the physical medium, and its purpose is to be the replay journal that gets played back in case of a crash. ZFS guarantees that the ZIL record hits the disk at almost the same time as the actual synchronous write ("almost" because full atomicity cannot be guaranteed with storage media that do their own write buffering). The log can be stored on the main pool or on a separate SLOG device, as you correctly noted.


----------



## KDragon75 (Sep 25, 2018)

kpa said:


> This is just not correct. The ZIL is the actual log on the physical medium, and its purpose is to be the replay journal that gets played back in case of a crash. ZFS guarantees that the ZIL record hits the disk at almost the same time as the actual synchronous write ("almost" because full atomicity cannot be guaranteed with storage media that do their own write buffering). The log can be stored on the main pool or on a separate SLOG device, as you correctly noted.


I think we're down to semantics. I think of it as a backup of the RAM in case of a crash or power loss, since all the work is still done in RAM, including when the TXG gets flushed to the pool. It's not "in-line" with the write path; as you pointed out, it's only read after a crash, when the writes that were never flushed must be completed. I do understand that a sync write is not returned as completed until it's in the ZIL (pool or SLOG). Or perhaps I'm missing some nuance?


----------

