# zfs pool moved to bigger drives won't expand



## goertzenator (Jan 5, 2012)

I installed FreeBSD 9-RC2 to a 1 TB disk with root-on-ZFS. I've since mirrored that disk to a pair of 2 TB disks and detached the original 1 TB; however, the pool will not expand to use the full 2 TB.

This is what I've tried:
1. Set the pool's autoexpand property to on, then reboot.
2. Check the actual size of the underlying devices with gpart (both partitions are 1.8T).

What am I missing?
Thanks,
Dan.

The raw info...


```
[root@boondock ~]# zpool status
  pool: pool0
 state: ONLINE
 scan: resilvered 849G in 3h29m with 0 errors on Wed Jan  4 19:50:33 2012
config:

        NAME                STATE     READ WRITE CKSUM
        pool0               ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            gpt/wd20earx_0  ONLINE       0     0     0
            gpt/wd20earx_1  ONLINE       0     0     0

errors: No known data errors

[root@boondock ~]# zpool get autoexpand pool0
NAME   PROPERTY    VALUE   SOURCE
pool0  autoexpand  on      local


[root@boondock ~]# zfs list pool0
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool0   848G  65.2G   160K  /mnt


[root@boondock ~]# gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 65536 (64k)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 076ec511-36f3-11e1-81f3-001cc07d3699
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 65536
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 167
   start: 40
2. Name: ada0p2
   Mediasize: 2000398827520 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: 1bfdb7ad-36f3-11e1-81f3-001cc07d3699
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: wd20earx_0
   length: 2000398827520
   offset: 86016
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 168
Consumers:
1. Name: ada0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 65536 (64k)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 34206625-3722-11e1-92c3-001cc07d3699
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 65536
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 167
   start: 40
2. Name: ada1p2
   Mediasize: 2000398827520 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: 3f1d2406-3722-11e1-92c3-001cc07d3699
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: wd20earx_1
   length: 2000398827520
   offset: 86016
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 168
Consumers:
1. Name: ada1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3
```


----------



## phoenix (Jan 5, 2012)

Either reboot, or export and re-import the pool.

There's also a `-e` option to `zpool online` that expands the vdev.
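For reference, the two approaches above look like this on the command line (a sketch only, assuming the pool name and GPT labels from the original post; run as root on the affected system):

```
# Option 1: export and re-import the pool so ZFS re-reads the device sizes.
# (Not an option when the pool holds the root filesystem.)
zpool export pool0
zpool import pool0

# Option 2: expand each mirror member in place.
# The -e flag tells "zpool online" to grow the device to use all available space.
zpool online -e pool0 gpt/wd20earx_0
zpool online -e pool0 gpt/wd20earx_1
```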


----------



## ctengel (Jan 5, 2012)

Try kicking it with `zpool online -e pool0 gpt/wd20earx_0 gpt/wd20earx_1`. There's no need to offline the devices first. I can't guarantee it will fix it, but let us know how it goes.


----------



## kpa (Jan 5, 2012)

When I upgraded a pair of disks in a mirror to larger ones, I resorted to detaching one disk at a time (with `zpool detach`), attaching a new disk (with `zpool attach`), and letting the system complete a resilver each time.
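That one-disk-at-a-time procedure would look roughly like this (a sketch; the `gpt/old_*` and `gpt/new_*` labels are placeholders for your actual devices, not names from this thread):

```
# Detach one half of the mirror (an old, smaller disk).
zpool detach pool0 gpt/old_disk_0

# Attach the new, larger disk alongside the remaining member,
# then wait for the resilver to finish before touching the other disk.
zpool attach pool0 gpt/old_disk_1 gpt/new_disk_0
zpool status pool0    # watch until "resilvered ... with 0 errors"

# Repeat for the second disk. With autoexpand=on (or after
# "zpool online -e") the pool can then grow to the new size.
```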


----------



## Sebulon (Jan 5, 2012)

@goertzenator

Have you tried
`# zpool export pool0`
`# zpool import pool0`
?

/Sebulon


----------



## goertzenator (Jan 5, 2012)

Thanks for all the responses. `zpool online -e ...` did the trick. A reboot wouldn't do it, and export/import was not a good option because my root filesystem was on that pool.

Dan.
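For anyone checking whether the expansion actually took effect, the standard listing commands should now show the larger capacity (a sketch; output will vary by system):

```
# SIZE and AVAIL should now reflect the full ~1.8 TiB mirror
# instead of the old 1 TB disk's capacity.
zpool list pool0
zfs list pool0

# Confirm autoexpand is still set for future grows.
zpool get autoexpand pool0
```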


----------



## ctengel (Jan 6, 2012)

Glad to hear that worked.  A "Sun alert" came out just a few weeks ago about a possibly related bug in Solaris where a zpool will not expand when a LUN is grown and autoexpand is set, so this may not be FreeBSD-specific.  "Kicking it" was the workaround they gave until the bug is fixed.


----------



## bthomson (Dec 8, 2012)

`zpool online -e ...` just worked for me on 8.3. Exporting and then importing the pool did NOT work.

Thanks for the tip, I was getting worried there for a minute!


----------

