# ZFS Quick Question Regarding a 3 Drive Mirror



## overmind (Dec 6, 2011)

Hi,

I want to know if this scenario is possible: use ZFS on a system with two drives configured as a ZFS software RAID1 (mirror). Then add a third drive to the ZFS mirror pool, wait for it to sync, then remove the third drive and use it as an offsite backup. So most of the time the system (configured as a 3-drive ZFS mirror) will run with only two drives. My question is: would the ZFS file system be slower running in degraded mode?

Is this approach OK?


----------



## gkontos (Dec 6, 2011)

A ZFS mirror consists of 2 disks only. A 3rd disk can only be added as a spare. If I understand correctly, you wish to maintain a 3rd disk for offsite backups? You could shut down your system, replace one drive, boot, and resilver. The drive you removed would be an offsite backup. But this isn't really the proper way unless your data always stays the same.

Why don't you consider an incremental snapshot backup strategy instead?
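A minimal sketch of such an incremental strategy, assuming a production pool `tank` and a second pool `backup` on the removable disk (all pool and snapshot names here are illustrative, not from the thread):

```shell
# Illustrative sketch: pool and snapshot names are assumptions.
# Take an initial snapshot and copy it to the backup pool in full.
zfs snapshot tank@base
zfs send tank@base | zfs receive backup/tank

# Later, send only the blocks changed since the previous snapshot.
zfs snapshot tank@next
zfs send -i tank@base tank@next | zfs receive backup/tank
```

The backup pool can then be exported with `zpool export backup` and the disk taken offsite.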


----------



## funky (Dec 6, 2011)

This should be possible with the zpool attach and zpool detach commands.

`# zpool create tank mirror c0t1d0 c1t1d0`
`# zpool attach tank c1t1d0 c2t1d0`
`# zpool detach tank c2t1d0`

For details see the Solaris ZFS Administration Guide (PDF, page 85), and of course the zpool(8) manpage.


----------



## SirDice (Dec 6, 2011)

If you detach and attach a drive, it will need to be resynced. That takes time and costs performance.

Not the best way to create backups.


----------



## overmind (Dec 6, 2011)

funky said:

> This should be possible with the zpool attach and zpool detach commands.
> 
> `# zpool create tank mirror c0t1d0 c1t1d0`
> `# zpool attach tank c1t1d0 c2t1d0`
> ...



With this method, after a complete sync, if I try to mount c2t1d0 on another machine, I should see all the data from c0t1d0, right?

I would consider a snapshot based backup for onsite machine, but I was thinking of an offsite backup solution too.

From what I've read it is possible to use 3 hard drives configured as ZFS mirror, with data being mirrored to all three (and not using the third as a spare). Am I right?

Update: I've just read page 85 of the PDF in the link above; funky's solution is what I needed.


----------



## phoenix (Dec 6, 2011)

gkontos said:

> A ZFS mirror consists of 2 disks only. A 3rd disk can only be added as a spare.



Not true.  ZFS supports n-way mirroring.  Meaning you can add as many disks into a mirror vdev as you want.  And all data will be duplicated across all the drives in the vdev.


----------



## gkontos (Dec 6, 2011)

phoenix said:

> Not true.  ZFS supports n-way mirroring.  Meaning you can add as many disks into a mirror vdev as you want.  And all data will be duplicated across all the drives in the vdev.



I just read about the 3-way mirror. Honestly, I never thought of something like that as meaningful in the traditional mirror sense.

I also wonder about the performance cost of this kind of implementation. Having seen the performance of a triple-disk raidz2, I highly doubt that a 3-way mirror would be any better. I could be wrong, of course!

I will try it whenever I get the chance.


----------



## phoenix (Dec 6, 2011)

overmind said:

> I want to know if this scenario is possible:
> 
> Use ZFS on a system with two drives configured as a ZFS software RAID1 (mirror).
> 
> ...



No, this won't work the way you think, in that you can't just stick that third drive into another machine and access the pool on the disk.

Instead, you need to do some reading on "breaking a ZFS mirror", aka "zpool split". Not sure if the FreeBSD version of ZFS supports that or not.

That allows you to take a 2-disk mirror vdev, add a third disk, resilver the data onto that disk, then "zpool split" the disk off the mirror, turning it into a stand-alone pool. Then you can use that disk to boot another system and access the data on it.


----------



## overmind (Dec 6, 2011)

@gkontos

Performance doesn't matter. You just add the third drive, wait for the pool to resilver, then remove the third drive from the pool. Then move the removed drive to another location for safekeeping. This way you don't need to manually use rsync or another copy tool.


----------



## gkontos (Dec 6, 2011)

overmind said:

> @gkontos
> 
> Performance doesn't matter. You just add the third drive, wait for the pool to resilver, then remove the third drive from the pool. Then move the removed drive to another location for safekeeping. This way you don't need to manually use rsync or another copy tool.



Your pool will appear as degraded when you remove the 3rd drive, but you don't really care because your mirror is good. I don't know; it makes some sense, but then again you will have to resilver your pool every time the backup disk needs updating, and during that time you really shouldn't add or remove data there.

Have you thought of making the 3rd disk a separate pool and sending your first pool's snapshots there incrementally? You can always remove it afterwards without having to stop production.

Anyway, time will tell. I learned about the 3-way mirror tonight.


----------



## overmind (Dec 7, 2011)

You don't just physically remove the third drive. You also remove it from the pool with the *zpool detach pool drive* command. That way the remaining drives of the mirror will not appear as degraded.

This could be a good approach, for example, for photographers who also want an offsite backup. The first two mirrored drives stay in use. A third is added; after the resilver it is removed both physically and logically and moved to the offsite location. The next month a new (fourth) drive is added to the pool (the third being offsite); after the resilver this fourth drive is moved offsite and the third drive is brought back onsite. The process is repeated at the end of every month.

Because photographers work on projects from time to time (and do not change data overnight like a database behind a website), this approach is OK.

I know this is not a very technical approach, but many do use this kind of setup (not using ZFS, but just manually copying their projects). Some of you would recommend setting up a dedicated line between locations and automating the process 100%.


----------



## overmind (May 1, 2012)

@phoenix:  The process works the way you've described it. And yes, *split* works on FreeBSD.

The next example shows the process:


```
zpool create tank mirror ada1 ada2
zpool attach tank ada2 ada3
zpool scrub tank            # not strictly necessary, a resilver happens anyway,
                            # but useful if we really want to make sure the data is OK
zpool status -v
zpool split tank tank2      # run this after the pool has finished resilvering or scrubbing
                            # tank keeps ada1 and ada2, and tank2 gets ada3
                            # tank2 will not be imported after this command finishes
```


----------



## einthusan (May 3, 2012)

phoenix said:

> Not true.  ZFS supports n-way mirroring.  Meaning you can add as many disks into a mirror vdev as you want.  And all data will be duplicated across all the drives in the vdev.



Off-topic, but does this sort of configuration have better read throughput? I assume 3 times the IOPS? And if you only have 1 vdev with 3 drives in it, don't you need another vdev with 3 drives to get data duplication? Unless you mean 3 vdevs, each with a single drive?

Thanks in advance.


----------



## phoenix (May 3, 2012)

In theory, you could achieve 3x the read throughput, if you have three separate threads reading separate files, and ZFS pulls the data for each file off separate disks.  You might even be able to get 3x the read throughput for a single thread reading a single file by pulling data blocks off different disks in sequence.  However, that's the theoretical "best case scenario".  Add some write threads in, or more reading threads, and things won't be so rosy.

Also, a 3-disk mirror means all three disks are in the same vdev, and all three disks have the same data on them (each disk is a mirror of the others). All writes go to all three disks. Reads can be pulled from individual disks.

A pool with a single mirror vdev, comprised of three disks:

```
# zpool create poolname mirror disk1 disk2 disk3
```

Which would give you something like:

```
# zpool status
  pool: poolname
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        poolname         ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            disk1        ONLINE       0     0     0
            disk2        ONLINE       0     0     0
            disk3        ONLINE       0     0     0
```

Compared to a pool with 3 vdevs, each with 1 disk (meaning a RAID0-like stripe):

```
# zpool create poolname disk1 disk2 disk3
```


```
# zpool status
  pool: poolname
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        poolname         ONLINE       0     0     0
          disk1          ONLINE       0     0     0
          disk2          ONLINE       0     0     0
          disk3          ONLINE       0     0     0
```


----------



## einthusan (May 4, 2012)

Thanks for all that useful information. Just out of curiosity, would a 4-way mirror (single vdev) perform better in read throughput compared to a setup with 2 vdevs mirrored each with 2 drives? In both cases, the total number of drives being 4.


----------



## phoenix (May 4, 2012)

In general, multiple small vdevs in a pool will outperform a single large vdev made up of lots of drives.

For instance, a pool with 3 raidz1 vdevs of 5 drives each will outperform a single raidz3 vdev of 15 drives.

In theory, a dual-mirror pool (2x mirror vdevs of 2 drives each) should outperform a single mirror pool (1x mirror vdev of 4 drives).
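The two four-drive layouts being compared might be created like this (disk names are illustrative):

```shell
# 1 mirror vdev of 4 disks (4-way mirror): capacity of one disk,
# every write goes to all four disks.
zpool create tank mirror ada1 ada2 ada3 ada4

# 2 mirror vdevs of 2 disks each (striped mirrors): twice the capacity,
# writes are striped across the two vdevs.
zpool create tank mirror ada1 ada2 mirror ada3 ada4
```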


----------



## bbzz (May 4, 2012)

Please correct me if I'm wrong, but it also depends on how and when data and vdevs are added.
If you have a 2-way mirror vdev which is nearly full and then add a new 2-way mirror vdev, most new writes will go to the new vdev only, meaning reads will also come mostly from the new vdev. This assumes a period where you don't shuffle/delete your old data.

Whereas if you have a 4-way mirror vdev from the start, reads are most likely to come (in the best case) from all disks in the mirror, hence outperforming a 2-vdev 2-way mirror in reads.

I think this is even more obvious in more complicated configurations, such as multiple raidz2 vdevs where new vdevs are added only after the old vdev is nearly full.


----------



## jalla (May 4, 2012)

Remember that a 4-way mirror has the capacity of a single drive, just like a 2-way mirror. It doesn't make sense to compare that to a 2 x 2-way mirror.


----------



## bbzz (May 4, 2012)

But we aren't talking about size. When you add two more disks to a 2-way mirror vdev, creating a 4-way mirror, you theoretically double the read throughput, since there are now 4 disks.

If on the other hand you make another 2-way mirror vdev, most writes will go to that vdev, not affecting reads (assuming the first vdev is nearly full). Unless, in time, data gets shuffled enough to be distributed equally across both vdevs.

Or no?


----------



## jalla (May 5, 2012)

A 4-way mirror has almost twice the read speed of a 2-way mirror, yes.


```
-------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
r1-2x1   16384  7827  2.7  6443  0.8  6098  0.9 114324 32.6 133370  5.5 223.3  0.4
r1-3x1   16384  7811  2.7  6474  0.7  6065  0.9 203852 57.8 181481  7.1 256.4  0.3
r1-4x1   16384  7771  2.7  6406  0.7  6111  0.9 220671 62.7 208167  7.4 239.9  0.3
```
On the other hand a 2x2-way mirror has twice the capacity and twice the write speed of a single 4-way mirror, with read speed about the same.


```
-------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
r1-4x2   16384 18321  4.5 14529  1.7 13860  2.0 237875 69.2 206736  7.0 434.6  0.6
```


----------



## gofer_touch (Dec 16, 2015)

phoenix said:


> No, this won't work the way you think, in that you can't just stick that third drive into another machine and access the pool on the disk.
> 
> Instead, you need to do some reading on "breaking a ZFS mirror", aka "zpool split".  Not sure if the FreeBSD version of ZFS supports that or not.
> 
> That allows you to take a 2-disk mirror vdev, add a third disk, resilver the data onto that disk, then "zpool split" the disk off the mirror, turning it into a stand-alone pool.  Then you can use that disk to boot another system and access the data on it.



Hmm. I realize this thread is old, but this seems like it might have a number of additional use cases. 

Does anyone have any experience or advice on using this as a method for installing onto multiple systems? For example, let's say I set up a 10-way mirror, then zpool split off 9 disks in order to use each of those disks in 9 separate machines. This seems quite a bit faster than setting up one system and then cloning over zroot using zfs send | zfs receive.

Similarly, if I wanted 9 more machines, I would just add 9 new disks back into the install machine, resilver the data from the one source disk, and repeat until I'm done. Are there any downsides to doing this?
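A sketch of how that workflow might look, assuming the install machine boots from `ada0` in a pool named `zroot` and the blank disks show up as `ada1` through `ada3` (all device and pool names here are hypothetical):

```shell
# Attach each blank disk to the source mirror; each attach triggers a resilver.
for d in ada1 ada2 ada3; do
    zpool attach zroot ada0 "$d"
done

# Wait until "zpool status zroot" reports the resilver is complete, then
# split each clone disk off into its own stand-alone pool.
i=1
for d in ada1 ada2 ada3; do
    zpool split zroot "clone$i" "$d"
    i=$((i + 1))
done
```

zpool split accepts an optional device list, which is what lets each named disk be carved off into its own new pool here.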


----------



## protocelt (Dec 16, 2015)

To answer your first question: if the machines have similar hardware, then theoretically yes, AFAIK, at least with minimal fuss from the system.

To answer your second question: assuming you don't add more disks to each system for redundancy (which you did not mention), I wouldn't think there are any downsides besides time.

Installing and managing multiple nodes is really outside my area of expertise, though I believe most admins use software such as sysutils/puppet and a build host for things like this to make life easier.


----------



## gofer_touch (Dec 17, 2015)

Noted. I am testing this out on a few machines now, and it seems to work rather well. All the hardware would be exactly the same.

To answer your question, no the workstations would not have more disks for redundancy, single disks would be fine in this particular application. 

I haven't used Puppet before, but at first glance it seems to require all of the host machines to be active and online at the same time. The ZFS mirror method could potentially be used to, say, create a bunch of boot+data disks for faster deployment.


----------



## protocelt (Dec 18, 2015)

I guess it really depends on what you want to do/accomplish. To me, faster deployment in most situations means automation.


----------



## storvi_net (Dec 19, 2015)

With nearly all automation software you can push the config to the clients.

You can go several routes.
The easiest one is to create a minimal OS image with the agent of your automation software (e.g. puppet) installed. Then give the client a role and let puppet do the rest.
You can automate this further with things like PXE.

You probably want to look into Foreman (http://theforeman.org/) - this is a complete solution for provisioning...

Regards
Markus


----------



## gofer_touch (Dec 19, 2015)

All of the suggestions are nicely sophisticated. But in this particular use case, none of the physical machines are connected to the Internet, and they are widely distributed. Thus anything that requires a network isn't going to work. The easiest option really does seem to be cloning disks en masse (because they will all do the same thing) and then inserting the disks into the machines individually.


----------

