# Two drives with the same mount path?  That's not right...



## OrangeMan (Jul 15, 2010)

Running 8.0-RELEASE-p1 with ZFS version 13.  I recently swapped out the motherboard on my NAS, which appears to have broken my zpool (named 'bigpool').  As you can see below, ZFS thinks there are two drives named ad6, which makes it think something is corrupted.  Notice also that the two drives show up under the same device path — that is the problem.  Somehow, ad10 is being identified as /dev/ad6 (I think; I'm not even sure I've diagnosed this correctly).  If I unplug ad6, `zpool status` shows it as disconnected and ad10 appears in the list.  If I unplug ad10, nothing changes.  I have no idea why, and more importantly, no idea how to fix it.  I don't know of any fstab equivalent for ZFS.  Any idea what's up or how to fix this?


```
dmesg
ad4: 953868MB <Seagate ST31000340AS SD15> at ata2-master SATA300
ad6: 953869MB <SAMSUNG HD103SJ 1AJ100E4> at ata3-master SATA300
ad8: 953869MB <SAMSUNG HD103SJ 1AJ100E4> at ata4-master SATA300
ad10: 953869MB <SAMSUNG HD103SJ 1AJ100E4> at ata5-master SATA300

wut% zpool status
  pool: bigpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	bigpool     DEGRADED     0     0     0
	  raidz1    DEGRADED     0     0     0
	    ad4     ONLINE       0     0     0
	    ad6     ONLINE       0     0     0
	    ad6     FAULTED      0   259     0  corrupted data
	    ad8     ONLINE       0     0     0

errors: No known data errors

wut% zdb  
bigpool
    version=13
    name='bigpool'
    state=0
    txg=5537976
    pool_guid=4349259675267850994
    hostid=2180312168
    hostname='wut.my.domain'
    vdev_tree
        type='root'
        id=0
        guid=4349259675267850994
        children[0]
                type='raidz'
                id=0
                guid=10851819389901888036
                nparity=1
                metaslab_array=23
                metaslab_shift=35
                ashift=9
                asize=4000795590656
                is_log=0
                children[0]
                        type='disk'
                        id=0
                        guid=1169062141016002989
                        path='/dev/ad4'
                        whole_disk=0
                        DTL=117
                children[1]
                        type='disk'
                        id=1
                        guid=11695249635182739572
                        path='/dev/ad6'
                        whole_disk=0
                        DTL=116
                children[2]
                        type='disk'
                        id=2
                        guid=8413257584631332697
                        path='/dev/ad6'
                        whole_disk=0
                        DTL=115
                children[3]
                        type='disk'
                        id=3
                        guid=16582780852808625699
                        path='/dev/ad8'
                        whole_disk=0
                        DTL=114
```
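Edit: one way to check what path each drive's on-disk vdev label actually records (I haven't captured this output for the broken pool; `zdb -l` just dumps the four ZFS labels stored on the device):

```shell
# Print the vdev labels stored on each disk; the 'path=' and 'guid='
# fields show which pool member the on-disk label claims to be.
zdb -l /dev/ad6
zdb -l /dev/ad10
```

If ad10's label still says `path='/dev/ad6'`, that would explain the duplicate entry: the label records the device name from the old motherboard's controller numbering.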


----------



## phoenix (Jul 15, 2010)

What happens if you export/import the pool from single-user mode?
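Roughly (assuming the pool name 'bigpool' from your post) — exporting discards the cached device paths, and the import rescans the devices and rebuilds the paths from the on-disk labels:

```shell
# From single-user mode, so nothing is holding the pool open:
zpool export bigpool    # unmount and release the pool, dropping stale cached paths
zpool import bigpool    # rescan /dev and reattach vdevs by their on-disk labels
zpool status bigpool    # verify each drive now appears under its real device node
```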


----------



## OrangeMan (Jul 15, 2010)

It fixes the problem


----------

