# Problems using gptid disk paths in zpool



## bbawn (May 2, 2013)

Hello,

I am experimenting with using gptid paths for my zpool disk vdevs. My rationale is that monitoring scripts and manual procedures around disk replacement seem simpler if the path is constant and globally unique.

zpool creation and status works:


```
[root@devsttiny ~]# uname -a
FreeBSD devsttiny.nirvanix.com 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012     root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
[root@devsttiny ~]# zpool status
  pool: share_501001
 state: ONLINE
  scan: resilvered 36K in 0h0m with 0 errors on Wed May  1 22:25:23 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        share_501001                                    ONLINE       0     0     0
          raidz3-0                                      ONLINE       0     0     0
            gptid/c3422980-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0
            gptid/c36b6df5-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0
            gptid/c39628d8-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0
            gptid/c3beca60-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0

errors: No known data errors
```

However, operations like zpool offline and online don't work:


```
[root@devsttiny ~]# zpool offline share_501001 gptid/c3beca60-b2aa-11e2-b529-000c29f48d37
cannot offline gptid/c3beca60-b2aa-11e2-b529-000c29f48d37: no such device in pool
[root@devsttiny ~]# zpool offline share_501001 /dev/gptid/c3beca60-b2aa-11e2-b529-000c29f48d37
cannot offline /dev/gptid/c3beca60-b2aa-11e2-b529-000c29f48d37: no such device in pool
```

Commands like that work when I use /dev/da<n>:


```
[root@devst06 ~/src/CSN/trunk/src/FreeBSDConfig]# zpool status data
  pool: data
 state: ONLINE
  scan: resilvered 8.50K in 0h0m with 0 errors on Tue Jan 29 13:29:12 2013
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da24    ONLINE       0     0     0
            da25    ONLINE       0     0     0
            da26    ONLINE       0     0     0

errors: No known data errors
[root@devst06 ~/src/CSN/trunk/src/FreeBSDConfig]# zpool offline zfs-raidz3 da13
[root@devst06 ~/src/CSN/trunk/src/FreeBSDConfig]# zpool online zfs-raidz3 da13
[root@devst06 ~/src/CSN/trunk/src/FreeBSDConfig]# zpool offline zfs-raidz3 /dev/da13
[root@devst06 ~/src/CSN/trunk/src/FreeBSDConfig]# zpool online zfs-raidz3 /dev/da13
[root@devst06 ~/src/CSN/trunk/src/FreeBSDConfig]#
```

If I look up the disk vdev guid with zdb(8) and use the guid instead, offline works:


```
[root@devsttiny ~]# zpool status
  pool: share_501001
 state: ONLINE
  scan: resilvered 36K in 0h0m with 0 errors on Wed May  1 22:25:23 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        share_501001                                    ONLINE       0     0     0
          raidz3-0                                      ONLINE       0     0     0
            gptid/c3422980-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0
            gptid/c36b6df5-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0
            gptid/c39628d8-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0
            gptid/c3beca60-b2aa-11e2-b529-000c29f48d37  ONLINE       0     0     0

errors: No known data errors
[root@devsttiny ~]# zdb|grep -B 1 gptid/c36b6df5-b2aa-11e2-b529-000c29f48d37
                guid: 2222888412714934045
                path: '/dev/gptid/c36b6df5-b2aa-11e2-b529-000c29f48d37'
                phys_path: '/dev/gptid/c36b6df5-b2aa-11e2-b529-000c29f48d37'
[root@devsttiny ~]# zpool offline share_501001 2222888412714934045
[root@devsttiny ~]#
```

Any thoughts? Is there a better way to get a stable, globally unique disk vdev path? I tried glabel(8) labels, but they apparently max out at 11 characters, which is too short for a UUID. I also don't know whether online/offline works with glabel(8) paths.

Thanks,
Bob


----------



## phoenix (May 2, 2013)

Since you are using GPT to partition the disks, use GPT labels (set with gpart(8)'s -l option).  That will create /dev/gpt/labelname nodes, and those work wonderfully with ZFS.  It's what we use on the 120-odd disks attached to our ZFS pools.
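For the record, a sketch of what that looks like (the device names and labels below are made up for illustration; `-l` can be given when adding a partition, and `gpart modify` can relabel an existing one):


```
# Partition a blank disk with GPT and attach the label "slot0":
gpart create -s gpt da24
gpart add -t freebsd-zfs -l slot0 da24

# Or set a label on an existing partition (partition index 1 here):
gpart modify -i 1 -l slot0 da24

# The labeled partition then appears as /dev/gpt/slot0 and can be used in the pool:
zpool create tank raidz1 gpt/slot0 gpt/slot1 gpt/slot2
```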


----------



## bbawn (May 17, 2013)

After doing some reading (which I should have done in the first place), I think the right solution for me is to use cam(4) to "wire" device units to a bus, target, LUN location. See, for example, http://lists.freebsd.org/pipermail/freebsd-stable/2013-January/071851.html
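For anyone finding this later: CAM wiring is done with hints in /boot/device.hints, as described in the linked post. A hypothetical example (the controller name and the bus/target numbers are placeholders for whatever your hardware reports):


```
hint.scbus.2.at="mps0"
hint.scbus.2.bus="0"
hint.da.15.at="scbus2"
hint.da.15.target="5"
hint.da.15.unit="0"
```

With hints like these, the disk at that bus/target location always attaches as da15, regardless of probe order.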


----------



## DaveQB (Jun 7, 2013)

phoenix said:

> Since you are using GPT to partition the disks, use GPT labels (set with gpart(8)'s -l option).  That will create /dev/gpt/labelname nodes, and those work wonderfully with ZFS.  It's what we use on the 120-odd disks attached to our ZFS pools.



That's a great solution. Thanks for that.


----------



## jkhilmer (Jan 7, 2014)

Forgive me for resurrecting this (and mixing questions), but this is the cleanest discussion I've found on this topic.

Let's say that we don't use labels, but choose to either "wire" the device units or just let them end up with dynamic IDs.  What happens if we remove a drive for a minute and then re-insert it, and it changes from /dev/da15 to /dev/da14?  Or, with "wired" names, what if a controller dies and we have to move the drive to an empty bay?  How can we get it to re-join a zpool?  It should re-join after `zpool export` and `zpool import`, but that's not really a good solution if you want to keep the pool online.  Using `zpool attach` throws an error because it thinks (correctly) that the disk is already part of a pool.

I wasn't able to find a good solution to this problem, which is what led me to labels.  Labels are somewhat convenient, although I'm not sure if I see the benefit of a user-generated label: if we already have gptids available, isn't that a reliable, unique, and cross-platform identifier?  Do you use something else like disk serial number for the label?

As an off-topic but related question, is there a way to force gptids to show up in /dev/gptid?  I created a label, then later removed it, and now the disk isn't showing up in /dev/gptid.  I've read that having a disk mounted will prevent it from showing there, so how does this work with a zpool?


----------



## Terry_Kennedy (Jan 9, 2014)

jkhilmer said:

> I wasn't able to find a good solution to this problem, which is what led me to labels.  Labels are somewhat convenient, although I'm not sure if I see the benefit of a user-generated label: if we already have gptids available, isn't that a reliable, unique, and cross-platform identifier?  Do you use something else like disk serial number for the label?


I use labels with the slot number the drive occupies inside the chassis, like this:


```
[0] rz1:~> zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  27.2T  18.0T  9.18T    66%  1.00x  ONLINE  -
[0] rz1:~> zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 8h37m with 0 errors on Fri Nov  1 03:44:09 2013
config:

        NAME             STATE     READ WRITE CKSUM
        data             ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            label/slot0  ONLINE       0     0     0
            label/slot1  ONLINE       0     0     0
            label/slot2  ONLINE       0     0     0
            label/slot3  ONLINE       0     0     0
            label/slot4  ONLINE       0     0     0
          raidz1-1       ONLINE       0     0     0
            label/slot5  ONLINE       0     0     0
            label/slot6  ONLINE       0     0     0
            label/slot7  ONLINE       0     0     0
            label/slot8  ONLINE       0     0     0
            label/slot9  ONLINE       0     0     0
          raidz1-2       ONLINE       0     0     0
            label/slot10 ONLINE       0     0     0
            label/slot11 ONLINE       0     0     0
            label/slot12 ONLINE       0     0     0
            label/slot13 ONLINE       0     0     0
            label/slot14 ONLINE       0     0     0
        logs
          label/ssd0     ONLINE       0     0     0
        spares
          label/slot15   AVAIL

errors: No known data errors
```
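For completeness, the label/slotN names above are glabel(8) labels.  Creating one looks roughly like this (the device name is a placeholder; note that glabel stores its metadata in the provider's last sector, so label the disk before adding it to the pool):


```
glabel label slot0 /dev/da0
```

The labeled device then shows up as /dev/label/slot0, which is what you pass to zpool create.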


----------

