# ZFS and Virtual to physical Drive Mapping on an LSI 9201-16 HBA Card



## unixgirl (Jul 5, 2012)

Hello
 I have a RAID array on a FreeBSD 9.0 system I am building, and I heard good things about pairing the LSI 9201-16 card with ZFS for raidz. The problem I have found is that, unlike other RAID cards (even when used as JBOD), the "virtual drive" ID of each drive, such as da0 or da1, does not map to the physical drive position. Normally da1 would be the 2nd physical drive in the chain, da5 the 6th, and so on, counting from 0.
 On the LSI 9201-16 I have found that da5 could be the 7th drive and da2 could be the first. I have even found it to change after a reboot. Once, in my brief testing, I rebooted and my array data vanished as the drives changed their IDs relative to the physical drives! I had a mishmash of drives, so I thought it was because I had different makes of drives for my testing, but using all the same make of drive has not helped.

Is there some setting I need to change to make sure they map correctly and/or stay mapped?
If they cannot translate virtual to physical, what tool do people use to identify a drive, so you know which one has actually failed when told da4 died?


Thanks.

  Nicole


----------



## gkontos (Jul 5, 2012)

Using labels always helps the situation. I literally label each disk with gpart(8) and then with a sticker.

In your case you might want to use 9-STABLE because it includes the native driver for your card. 9.0-RELEASE doesn't.
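A minimal sketch of the gpart labeling George describes (the label and pool names here are placeholders, not anything from his setup):

```shell
# Give each disk a GPT scheme and a human-readable partition label.
gpart create -s gpt da0
gpart add -t freebsd-zfs -l bay00 da0

# The label appears under /dev/gpt/ and stays stable across device
# renumbering, so reference it when building the pool:
zpool create tank /dev/gpt/bay00
```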

George


----------



## unixgirl (Jul 5, 2012)

Hello
 I am currently running 9.0-STABLE (FreeBSD 9.0-STABLE #0: Tue Jun 19 15:32:51 PDT 2012), so I assume I have the native driver unless it was added very recently.

Thanks for the gpart suggestion. I hate to have to resort to something like gpart just for drive mapping. It may help with the drives changing after reboots, but it makes the system much less user friendly, and having to re-gpart a drive after replacing it seems annoying as well. It's also rather trouble prone, since most cases like mine (a Chenbro 4U) come pre-labeled with the expectation that the card's drive IDs match the reported IDs. Someone not "in the know" would not understand why the drive mapping is off, would likely not know which drive to replace, and could yank a good drive instead. I think I would feel silly having labels for da0, da1, etc. that are not even in any order.

 I'm just confused by why the device assignments are so random for this card. Maybe it was this way with SCSI and I just don't remember? I have been used to actual RAID cards lately.


----------



## gkontos (Jul 6, 2012)

unixgirl said:

> Hello
> I am currently running 9.0-STABLE (FreeBSD 9.0-STABLE #0: Tue Jun 19 15:32:51 PDT 2012), so I assume I have the native driver unless it was added very recently.



Yes, you do!



			
unixgirl said:

> Thanks for the gpart suggestion. I hate to have to resort to something like gpart just for drive mapping. It may help with the drives changing after reboots, but it makes the system much less user friendly, and having to re-gpart a drive after replacing it seems annoying as well. It's also rather trouble prone, since most cases like mine (a Chenbro 4U) come pre-labeled with the expectation that the card's drive IDs match the reported IDs. Someone not "in the know" would not understand why the drive mapping is off, would likely not know which drive to replace, and could yank a good drive instead. I think I would feel silly having labels for da0, da1, etc. that are not even in any order.



Have you identified some sort of order in which the drive IDs match the card's drive IDs?
There must be a way to report the correct ID, assuming the cables are plugged in correctly. Anything in the BIOS settings?


----------



## unixgirl (Jul 6, 2012)

Interesting. I set the card to OS-only control, then removed the drives and physically reinserted them (by serial number) in da order. So far that has stuck. How odd.


----------



## Terry_Kennedy (Jul 9, 2012)

unixgirl said:

> Hello
> I hate to have to resort to something like gpart just for drive mapping. It may help with the drives changing after reboots, but it makes the system much less user friendly, and having to re-gpart a drive after replacing it seems annoying as well.


You can use glabel(8) without the rest of the GEOM stuff. I'm using this to label the drives in my ZFS pools, which have the whole drive allocated to ZFS.
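A minimal sketch of that glabel approach, with placeholder label and pool names:

```shell
# Write a GEOM label into the last sector of each bare disk.
glabel label disk00 /dev/da0
glabel label disk01 /dev/da1

# Build the pool from the stable /dev/label/ names instead of the raw
# da devices, so later renumbering of da0, da1, ... is harmless.
zpool create tank mirror /dev/label/disk00 /dev/label/disk01
```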


> Also it's rather trouble prone and annoying since most cases like mine (a Chenbro 4U) comes pre-labeled with expectations of card drive IDs matching a reported ID. Someone not "In the know" would not understand why the drive mapping is off, likely not know what drive to replace and could yank a good drive instead. I think I would feel silly having labels for da0 da1 etc not even being in any order.


I don't know if the newer LSI cards have a dedicated "identify drive" function. The LSI/3Ware controller cards I'm using can blink one of the drive bay LEDs on command. I think the older 3Ware cards simulated this by doing I/O to the drive to turn on the native activity light. 
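For what it's worth, the SAS2-generation HBAs like the 9201-16 do have a locate function through LSI's sas2ircu command-line utility, provided the backplane supports it. This is only a sketch; the controller number and Enclosure:Slot pair below are placeholders you would read out of sas2ircu's own listing first:

```shell
# List controllers, then show attached devices with their Enclosure:Slot pairs.
sas2ircu LIST
sas2ircu 0 DISPLAY

# Blink the locate LED for the disk in enclosure 2, slot 5 (placeholder
# values), then turn it off once the drive has been found.
sas2ircu 0 LOCATE 2:5 ON
sas2ircu 0 LOCATE 2:5 OFF
```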



> I'm just confused by why the device assignments are so random for this card. Maybe it was this way with SCSI and I just don't remember? I have been used to actual RAID cards lately.


The only thing I've observed here is that if some other drive is probed before the card your drives are on, the drive numbers will shift. When I added a PCIe SSD to my system, my 3Ware drives changed from da0-da15 to da1-da16. That's when I decided to glabel them.


----------



## unixgirl (Jul 10, 2012)

Thanks for the info. Maybe I'm lazy and expect too much, but it's sad when you have to jump through a bunch of hoops just to use a particular piece of hardware. That's why I loved using 3ware cards: they just worked (especially pre-LSI buyout).

 My da3 drive failed, and I had to wait a few days for my spare disks. I got a replacement disk and installed it. However, the device /dev/da3 refused to show up. I tried to rescan and reset the bus using camcontrol. Nothing.

 I thought perhaps the bay might be flaky, so I installed the drive into the previously unused bay for /dev/da7. What showed up? An online /dev/da3! What?! Since da3 was offline, the system simply called the drive in tray 7 da3.

 So I removed the drive from bay da7, put it back into bay da3, and rebooted. When I did, I noticed my array resilvering, and several of my serial numbers had swapped places (my da6 was now da5 and my da5 was now da6).

(Is there a way to always force verbose booting so I don't have to hit 7 at the console every time?)

It doesn't seem like this card is entirely usable for production on FreeBSD, which sucks. I need an inexpensive card that doesn't need a reboot to rescan drives, and its rearranging of drives is totally unacceptable. I have used a 3ware RAID card in JBOD mode and never had this problem.
I can't tell if it's a software issue or a hardware issue.

After rebooting again, and after some struggle, I now have this:

```
  pool: zfs1
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Jul  9 18:45:18 2012
        334G scanned out of 9.26T at 3.93G/s, 0h38m to go
        62.6M resilvered, 3.52% done
config:

        NAME                      STATE     READ WRITE CKSUM
        zfs1                      DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            da0                   ONLINE       0     0     0
            da1                   ONLINE       0     0     0
            da2                   ONLINE       0     0     0
            16800329280307363066  OFFLINE      0     0     0  was /dev/da3
            da3                   ONLINE       0     0     0
            da4                   ONLINE       0     0     0
            da5                   ONLINE       0     0     0
            da6                   ONLINE       0     0     3  (resilvering)
```

 So far ZFS refuses to let me get rid of 16800329280307363066 (was /dev/da3).
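For reference, the usual way out of that state is a zpool replace against the GUID shown in the status output. A sketch, assuming a genuinely free disk is available; the /dev/da7 here is only a placeholder, and it is worth re-checking `zpool status` before running anything:

```shell
# Replace the stale vdev, referenced by the GUID from `zpool status`,
# with a spare disk; ZFS resilvers onto the new device and drops the
# stale entry when the resilver completes.
zpool replace zfs1 16800329280307363066 /dev/da7
```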


----------



## kfoda (Jul 23, 2012)

unixgirl said:

> Hello
> I am currently running 9.0-STABLE (FreeBSD 9.0-STABLE #0: Tue Jun 19 15:32:51 PDT 2012), so I assume I have the native driver unless it was added very recently.
>
> Thanks for the gpart suggestion. I hate to have to resort to something like gpart just for drive mapping. It may help with the drives changing after reboots, but it makes the system much less user friendly, and having to re-gpart a drive after replacing it seems annoying as well. It's also rather trouble prone, since most cases like mine (a Chenbro 4U) come pre-labeled with the expectation that the card's drive IDs match the reported IDs. Someone not "in the know" would not understand why the drive mapping is off, would likely not know which drive to replace, and could yank a good drive instead. I think I would feel silly having labels for da0, da1, etc. that are not even in any order.
> ...



It's a bit clunky, but no one else seems to be mentioning that you can statically map the scbus/target/unit IDs that the cards/controllers present using /boot/device.hints; then the device order will always be the same. I added the following to my own setup to hardwire two 8-port LSI 1068E based cards and the embedded 6-port Intel ICH10 to always present the same da or ada device numbers for the hot-swap bays I use in the server. I then just put da0-15/ada0-5 stickers on the drive handles and away you go: they will never change, and I know which drive to swap out on a failure.

```
#
# Static device allocations for mpt0 "da" devices (to maintain bay slot order)
hint.scbus.0.at="mpt0"
hint.scbus.0.bus="0"
hint.da.0.at="scbus0"
hint.da.0.target="0"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="1"
hint.da.1.unit="0"
hint.da.2.at="scbus0"
hint.da.2.target="2"
hint.da.2.unit="0"
hint.da.3.at="scbus0"
hint.da.3.target="3"
hint.da.3.unit="0"
hint.da.4.at="scbus0"
hint.da.4.target="4"
hint.da.4.unit="0"
hint.da.5.at="scbus0"
hint.da.5.target="5"
hint.da.5.unit="0"
hint.da.6.at="scbus0"
hint.da.6.target="6"
hint.da.6.unit="0"
hint.da.7.at="scbus0"
hint.da.7.target="7"
hint.da.7.unit="0"
#
# Static device allocations for mpt1 "da" devices (to maintain bay slot order)
hint.scbus.1.at="mpt1"
hint.scbus.1.bus="0"
hint.da.8.at="scbus1"
hint.da.8.target="0"
hint.da.8.unit="0"
hint.da.9.at="scbus1"
hint.da.9.target="1"
hint.da.9.unit="0"
hint.da.10.at="scbus1"
hint.da.10.target="2"
hint.da.10.unit="0"
hint.da.11.at="scbus1"
hint.da.11.target="3"
hint.da.11.unit="0"
hint.da.12.at="scbus1"
hint.da.12.target="4"
hint.da.12.unit="0"
hint.da.13.at="scbus1"
hint.da.13.target="5"
hint.da.13.unit="0"
hint.da.14.at="scbus1"
hint.da.14.target="6"
hint.da.14.unit="0"
hint.da.15.at="scbus1"
hint.da.15.target="7"
hint.da.15.unit="0"
#
# Static device allocations for ahci "ada" devices (to maintain bay slot order)
hint.scbus.3.at="ahcich0"
hint.scbus.3.bus="0"
hint.ada.0.at="scbus3"
hint.ada.0.target="0"
hint.ada.0.unit="0"
hint.scbus.4.at="ahcich1"
hint.scbus.4.bus="0"
hint.ada.1.at="scbus4"
hint.ada.1.target="0"
hint.ada.1.unit="0"
hint.scbus.5.at="ahcich2"
hint.scbus.5.bus="0"
hint.ada.2.at="scbus5"
hint.ada.2.target="0"
hint.ada.2.unit="0"
hint.scbus.6.at="ahcich3"
hint.scbus.6.bus="0"
hint.ada.3.at="scbus6"
hint.ada.3.target="0"
hint.ada.3.unit="0"
hint.scbus.7.at="ahcich4"
hint.scbus.7.bus="0"
hint.ada.4.at="scbus7"
hint.ada.4.target="0"
hint.ada.4.unit="0"
hint.scbus.8.at="ahcich5"
hint.scbus.8.bus="0"
hint.ada.5.at="scbus8"
hint.ada.5.target="0"
hint.ada.5.unit="0"
```


Cheers,
Gavin....


----------



## unixgirl (Jul 24, 2012)

Ah ha! Perfect!  

Thank you!


----------

