# ZFS pool not seen after FreeBSD 9 install



## boris_net (Jan 15, 2012)

Hi all,

I installed FreeBSD 9.0 on a ZFS mirror pool and it works fine.
This machine was running FreeBSD 8.2 before and it had a raidz pool in ZFS version 15.

At the moment all drives are detected, but the pool is not mounted. I am not sure how to get my zpool back up without losing the data on it. Any help would be appreciated.

A few outputs to show you what I have:

1- ZFS version from the dmesg:

```
# grep -i zfs /var/run/dmesg.boot
ZFS filesystem version 5
ZFS storage pool version 28
Trying to mount root from zfs:zroot []...
```

2- `zfs list` only reports the mirror pool the system was installed on:


```
zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
zroot                      10.9G   438G   659M  legacy
zroot/swap                 8.25G   446G    16K  -
zroot/tmp                    35K   438G    35K  /tmp
zroot/usr                  2.00G   438G  1.36G  /usr
zroot/usr/home             39.5K   438G  39.5K  /usr/home
zroot/usr/ports             310M   438G   310M  /usr/ports
zroot/usr/ports/distfiles   673K   438G   673K  /usr/ports/distfiles
zroot/usr/ports/packages     31K   438G    31K  /usr/ports/packages
zroot/usr/src               349M   438G   349M  /usr/src
zroot/var                   573K   438G   126K  /var
zroot/var/crash            31.5K   438G  31.5K  /var/crash
zroot/var/db                188K   438G    94K  /var/db
zroot/var/db/pkg             94K   438G    94K  /var/db/pkg
zroot/var/empty              31K   438G    31K  /var/empty
zroot/var/log              66.5K   438G  66.5K  /var/log
zroot/var/mail               31K   438G    31K  /var/mail
zroot/var/run                67K   438G    67K  /var/run
zroot/var/tmp                32K   438G    32K  /var/tmp
```

3- `zpool status -v` output:


```
houdini# zpool status -v
  pool: zroot
 state: ONLINE
 scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada6p3  ONLINE       0     0     0
	    ada7p3  ONLINE       0     0     0

errors: No known data errors
```

There should be a raidz pool named 'vault' made of 6 x 1 TB drives; the drives are detected by the system:


```
ada0 at mvsch0 bus 0 scbus3 target 0 lun 0
ada0: <SAMSUNG HD103UJ 1AA01113> ATA-7 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 2048bytes)
ada0: Command Queueing enabled
ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad10
ada1 at mvsch1 bus 0 scbus4 target 0 lun 0
ada1: <SAMSUNG HD103UJ 1AA01113> ATA-7 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 2048bytes)
ada1: Command Queueing enabled
ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad12
ada2 at mvsch2 bus 0 scbus5 target 0 lun 0
ada2: <SAMSUNG HD103UJ 1AA01113> ATA-7 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 2048bytes)
ada2: Command Queueing enabled
ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad14
ada3 at mvsch4 bus 0 scbus7 target 0 lun 0
ada3: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 2048bytes)
ada3: Command Queueing enabled
ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad18
ada4 at mvsch5 bus 0 scbus8 target 0 lun 0
ada4: <SAMSUNG HD103UJ 1AA01113> ATA-7 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 2048bytes)
ada4: Command Queueing enabled
ada4: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada4: Previously was known as ad20
ada5 at mvsch7 bus 0 scbus10 target 0 lun 0
ada5: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
ada5: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 2048bytes)
ada5: Command Queueing enabled
ada5: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada5: Previously was known as ad24
```

Obviously, I noticed the device naming changed for the drives (ada* instead of the old ad* names), but I am not sure how to fix it without affecting the data in the pool.


```
gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525167
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: ada0s1
   Mediasize: 1000202241024 (931G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0
   attrib: active
   rawtype: 165
   length: 1000202241024
   offset: 32256
   type: freebsd
   index: 1
   end: 1953520064
   start: 63
Consumers:
1. Name: ada0
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525167
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: ada1s1
   Mediasize: 1000202241024 (931G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0
   attrib: active
   rawtype: 165
   length: 1000202241024
   offset: 32256
   type: freebsd
   index: 1
   end: 1953520064
   start: 63
Consumers:
1. Name: ada1
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525167
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: ada2s1
   Mediasize: 1000202241024 (931G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0
   attrib: active
   rawtype: 165
   length: 1000202241024
   offset: 32256
   type: freebsd
   index: 1
   end: 1953520064
   start: 63
Consumers:
1. Name: ada2
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: ada4
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525167
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: ada4s1
   Mediasize: 1000202241024 (931G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0
   attrib: active
   rawtype: 165
   length: 1000202241024
   offset: 32256
   type: freebsd
   index: 1
   end: 1953520064
   start: 63
Consumers:
1. Name: ada4
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: ada5
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525167
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: ada5s1
   Mediasize: 1000202241024 (931G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0
   rawtype: 131
   length: 1000202241024
   offset: 32256
   type: linux-data
   index: 1
   end: 1953520064
   start: 63
Consumers:
1. Name: ada5
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r0w0e0
```


```
gpart show
=>        63  1953525105  ada0  MBR  (931G)
          63  1953520002     1  freebsd  [active]  (931G)
  1953520065        5103        - free -  (2.5M)

=>        63  1953525105  ada1  MBR  (931G)
          63  1953520002     1  freebsd  [active]  (931G)
  1953520065        5103        - free -  (2.5M)

=>        63  1953525105  ada2  MBR  (931G)
          63  1953520002     1  freebsd  [active]  (931G)
  1953520065        5103        - free -  (2.5M)

=>        63  1953525105  ada4  MBR  (931G)
          63  1953520002     1  freebsd  [active]  (931G)
  1953520065        5103        - free -  (2.5M)

=>        63  1953525105  ada5  MBR  (931G)
          63  1953520002     1  linux-data  (931G)
  1953520065        5103        - free -  (2.5M)
```
ada3 is missing from the gpart output although it is detected by the system, and ada5 shows a different partition type (linux-data) from the others.


```
ls -l /dev/ada3
crw-r-----  1 root  operator    0, 108 Jan 15 22:34 /dev/ada3
```



How should I proceed to be able to access the data on this zpool?

Thanks for your help,

Boris


----------



## boris_net (Jan 15, 2012)

Apologies for the noise:

`zpool import` did the trick...
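For anyone landing on this thread later: ZFS writes pool metadata into labels on each member disk, so a pool survives device renames (ad* to ada* here), and `zpool import` rediscovers it by scanning those labels. A rough sketch of the commands, using the pool name from the original post; exact output will differ on your system:

```shell
# Scan all devices for importable pools; 'vault' should be listed
# by name even though the device paths changed, because ZFS matches
# member disks by on-disk label, not by /dev path.
zpool import

# Import the pool by name; its datasets are mounted automatically.
zpool import vault

# If the pool was never exported from the old install, zpool may
# refuse with "pool was last accessed by another system"; in that
# case a forced import is needed:
# zpool import -f vault
```

Note that no partition or device-name surgery is required; the import alone re-registers the pool under the new ada* names.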


----------

