# LSI 9240-4i 4K alignment



## gkontos (Aug 7, 2012)

Hi all,

We have a server with a LSI 9240-4i controller configured in JBOD with 4 SATA disks. Running FreeBSD 9.1-Beta1:

Relevant dmesg:

```
FreeBSD 9.1-BETA1 #0: Thu Jul 12 09:38:51 UTC 2012
    root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
CPU: Intel(R) Xeon(R) CPU E31230 @ 3.20GHz (3200.09-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x206a7  Family = 6  Model = 2a  Stepping = 7
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
Features2=0x1fbae3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  TSC: P-state invariant, performance statistics
real memory  = 17179869184 (16384 MB)
avail memory = 16471670784 (15708 MB)
...
mfi0: <Drake Skinny> port 0xe000-0xe0ff mem 0xf7a60000-0xf7a63fff,0xf7a00000-0xf7a3ffff irq 16 at device 0.0 on pci1
mfi0: Using MSI
mfi0: Megaraid SAS driver Ver 4.23 
...
mfi0: 321 (397672301s/0x0020/info) - Shutdown command received from host
mfi0: 322 (boot + 3s/0x0020/info) - Firmware initialization started (PCI ID 0073/1000/9241/1000)
mfi0: 323 (boot + 3s/0x0020/info) - Firmware version 2.130.354-1664
mfi0: 324 (boot + 3s/0x0020/info) - Firmware initialization started (PCI ID 0073/1000/9241/1000)
mfi0: 325 (boot + 3s/0x0020/info) - Firmware version 2.130.354-1664
mfi0: 326 (boot + 5s/0x0020/info) - Package version 20.10.1-0107
mfi0: 327 (boot + 5s/0x0020/info) - Board Revision 03A
mfi0: 328 (boot + 25s/0x0002/info) - Inserted: PD 04(e0xff/s3)
...
mfisyspd0 on mfi0
mfisyspd0: 1907729MB (3907029168 sectors) SYSPD volume
mfisyspd0:  SYSPD volume attached
mfisyspd1 on mfi0
mfisyspd1: 1907729MB (3907029168 sectors) SYSPD volume
mfisyspd1:  SYSPD volume attached
mfisyspd2 on mfi0
mfisyspd2: 1907729MB (3907029168 sectors) SYSPD volume
mfisyspd2:  SYSPD volume attached
mfisyspd3 on mfi0
mfisyspd3: 1907729MB (3907029168 sectors) SYSPD volume
mfisyspd3:  SYSPD volume attached
...
mfi0: 329 (boot + 25s/0x0002/info) - Inserted: PD 04(e0xff/s3) Info: enclPd=ffff, scsiType=0, portMap=00, sasAddr=4433221100000000,0000000000000000
mfi0: 330 (boot + 25s/0x0002/info) - Inserted: PD 05(e0xff/s1)
mfi0: 331 (boot + 25s/0x0002/info) - Inserted: PD 05(e0xff/s1) Info: enclPd=ffff, scsiType=0, portMap=02, sasAddr=4433221102000000,0000000000000000
mfi0: 332 (boot + 25s/0x0002/info) - Inserted: PD 06(e0xff/s2)
mfi0: 333 (boot + 25s/0x0002/info) - Inserted: PD 06(e0xff/s2) Info: enclPd=ffff, scsiType=0, portMap=03, sasAddr=4433221101000000,0000000000000000
mfi0: 334 (boot + 25s/0x0002/info) - Inserted: PD 07(e0xff/s0)
mfi0: 335 (boot + 25s/0x0002/info) - Inserted: PD 07(e0xff/s0) Info: enclPd=ffff, scsiType=0, portMap=01, sasAddr=4433221103000000,0000000000000000
mfi0: 336 (397672376s/0x0020/info) - Time established as 08/07/12 16:32:56; (28 seconds since power on)
```

The problem:

When trying to create a raidz pool on gpart(8) partitions with 4K alignment via gnop(8), we get the following error immediately after exporting the pool and destroying the .nop devices:


```
id: 8043746387654554958
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
	The pool may be active on another system, but can be imported using
	the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

	Pool                      FAULTED  corrupted data
	  raidz1-0                ONLINE
	    13283347160590042564  UNAVAIL  corrupted data
	    16981727992215676534  UNAVAIL  corrupted data
	    6607570030658834339   UNAVAIL  corrupted data
	    3435463242860701988   UNAVAIL  corrupted data
```

When we use glabel(8) labels for the same purpose, in combination with gnop, the pool imports fine.

Any suggestions?


----------



## Savagedlight (Aug 15, 2012)

It would be helpful if you provided the commands you used to create the GPT partitions and the zpool.


----------



## gkontos (Aug 15, 2012)

Savagedlight said:

> It would be helpful if you provided the commands you used to create the GPT partitions and the zpool.



Nothing fancy, just the usual way for 4K alignment.

`# gpart create -s gpt mfisyspd0`
`# gpart create -s gpt mfisyspd1`
`# ...`

`# gpart add -t freebsd-zfs -l disk0 -b 2048 -a 4k mfisyspd0`
`# gpart add -t freebsd-zfs -l disk1 -b 2048 -a 4k mfisyspd1`
`# ...`

`# gnop create -S 4096 /dev/gpt/disk0`
`# gnop create -S 4096 /dev/gpt/disk1`
`# ...`

`# zpool create Pool raidz1 /dev/gpt/disk0.nop /dev/gpt/disk1.nop /dev/gpt/disk2.nop /dev/gpt/disk3.nop`

`# zpool export Pool`

`# gnop destroy /dev/gpt/disk0.nop`
`# gnop destroy /dev/gpt/disk1.nop`
`# ...`

`# zpool import Pool`
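For what it's worth, the arithmetic behind those numbers can be sanity-checked in plain sh: a start of 2048 512-byte sectors lands on a 4K boundary, and the 4096-byte sector size emulated by gnop corresponds to ashift=12, the value ZFS should record from the .nop providers. The snippet below is just that arithmetic, not the disk commands themselves:

```shell
# Sanity-check the alignment arithmetic used above.
# Pure arithmetic -- does not touch any disks.

start_sectors=2048   # gpart's -b 2048
sector_size=512      # the drives' native logical sector size
offset=$((start_sectors * sector_size))

if [ $((offset % 4096)) -eq 0 ]; then
    echo "partition start is 4K-aligned (byte offset $offset)"
fi

# ashift is log2 of the emulated sector size (gnop -S 4096)
size=4096
ashift=0
while [ "$size" -gt 1 ]; do
    size=$((size / 2))
    ashift=$((ashift + 1))
done
echo "expected ashift=$ashift"
```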


----------



## gkontos (Aug 16, 2012)

More information regarding this controller from the manufacturer:



> My apologies for the wrong information provided in my previous email. I was under the impression that this OS is still supported but after checking with our developer,  FreeBSD is currently not supported with the LSI Megaraid Cards due to some issue with the driver we've provided. It will be supported in our upcoming releases which may come by the end of this year.  Please check back on our website during that time frame for the FreeBSD driver.
> Once again please accept my apologies for the inconvenience.
> 
> Thank you.
> ...



And the result after having to cancel a scrub that had been running for 14 hours:


```
pool: Pool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub canceled on Thu Aug 16 11:19:31 2012
config:

	NAME             STATE     READ WRITE CKSUM
	Pool             ONLINE       0     0     0
	  raidz1-0       ONLINE       0     0     0
	    gpt/261-3    ONLINE       0     0     0
	    gpt/262-4    ONLINE       0     0     0
	    gpt/263-5    ONLINE       0     0     0
	    gpt/264-6    ONLINE       0     0     0
	  raidz1-1       ONLINE       0     0     7
	    mfisyspd0p1  ONLINE       0     0     1
	    mfisyspd1p1  ONLINE       0     0     0
	    mfisyspd2p1  ONLINE       0     0     0
	    mfisyspd3p1  ONLINE       0     0     0
```


----------



## phoenix (Aug 18, 2012)

For the import, add *-d* to point it to the GPT labels directory:
`# zpool import -d /dev/gpt poolName`


----------



## gkontos (Aug 18, 2012)

phoenix said:

> For the import, add *-d* to point it to the GPT labels directory:
> `# zpool import -d /dev/gpt poolName`



We had tried that too, but the system froze. x(

The solution that has worked so far was to gnop the drives directly. However, even with that we still get very strange performance. No errors though.
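For anyone trying to reproduce that workaround: gnop'ing the whole drives directly, rather than the GPT partitions, would look roughly like the sketch below. This is an untested reconstruction from the thread, using the mfisyspd device names from the dmesg above:

```shell
# Hypothetical direct-gnop variant (whole devices, no GPT labels).
# gnop create -S 4096 /dev/mfisyspd0
# gnop create -S 4096 /dev/mfisyspd1
# gnop create -S 4096 /dev/mfisyspd2
# gnop create -S 4096 /dev/mfisyspd3
# zpool create Pool raidz1 /dev/mfisyspd0.nop /dev/mfisyspd1.nop /dev/mfisyspd2.nop /dev/mfisyspd3.nop
```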

I had a suggestion on the FreeBSD Hardware mailing list to flash the card with a different firmware. But at this point, given that this server will carry data, it makes more sense to use a different controller that falls under the mps(4) driver.

Thanks


----------



## Ben (Nov 15, 2013)

Do you have any experience with this issue on FreeBSD 9.2?


----------



## gkontos (Nov 16, 2013)

Ben said:

> Do you have any experience with this issue on FreeBSD 9.2?



No, we decided to ditch that card and we have only used HBAs since then.


----------



## Ben (Nov 16, 2013)

What is an HBA?


----------



## gkontos (Nov 16, 2013)

Ben said:

> What is an HBA?



Host Bus Adapter. I would show you some from LSI, but their website appears to be down for maintenance right now.


----------



## Oko (Nov 17, 2013)

gkontos said:

> No, we decided to ditch that card and we have only used HBAs since then.



The card is probably good. I have a bunch of those (LSI Logic / Symbios Logic MegaRAID SAS 2208) working with Red Hat. Excellent hardware RAID cards. It makes me think something is wrong with FreeBSD or with the complexity of ZFS. I am glad I stumbled on this post. One more reason to go with HAMMER and DragonFly.


----------



## Savagedlight (Nov 17, 2013)

Oko said:

> The card is probably good. I have a bunch of those (LSI Logic / Symbios Logic MegaRAID SAS 2208) working with Red Hat. Excellent hardware RAID cards. It makes me think something is wrong with FreeBSD or with the complexity of ZFS. I am glad I stumbled on this post. One more reason to go with HAMMER and DragonFly.



As you can read earlier in the thread, there was a problem with the driver for the card. I'd expect that to be fixed by now, considering the quote from the manufacturer states it should be fixed in a driver release scheduled for the end of 2012, about a year ago.


----------



## gkontos (Nov 18, 2013)

Oko said:

> The card is probably good. I have a bunch of those (LSI Logic / Symbios Logic MegaRAID SAS 2208) working with Red Hat. Excellent hardware RAID cards. It makes me think something is wrong with FreeBSD or with the complexity of ZFS. I am glad I stumbled on this post. One more reason to go with HAMMER and DragonFly.



Nobody says that RAID cards are bad. They are just not suitable for ZFS. You go ahead and spend big money on hardware RAID cards instead of HBAs. Just don't brag about it though.


----------



## Oko (Feb 15, 2014)

gkontos said:

> Oko said:
> 
> 
> 
> ...


You cannot use ZFS on top of genuine RAID cards. In the meantime we acquired several file servers on which I run ZFS, but instead of hardware RAID they have LSI 9207-8i 6Gb/s SAS HBAs. I still have a couple of file servers in the lab running Red Hat and using hardware RAID, in a RAID6 configuration.


----------



## S3TH76 (Dec 19, 2014)

Why can't you use ZFS with genuine RAID cards? I have an LSI 9240-4i SAS controller card and I want to install FreeBSD, but the installer says it can't find either the HDDs or the LSI MegaRAID controller at the hardware detection and partitioning step.


----------



## phoenix (Dec 19, 2014)

You can use hardware RAID controllers with ZFS ... but it's a waste of hardware to do so. And, if you configure a RAID array with the controller, then you lose a lot of the best features of ZFS (self-healing, mirror/raidz support, etc.). All you're left with is checksumming and snapshots.

If you configure the RAID controller for JBOD mode, then you're left with an extremely expensive SATA controller, and you'd be better off just buying a less expensive (but still capable) SATA controller.

If you configure the RAID controller with individual RAID arrays for each individual drive, then you add some latency into the system as both ZFS and the RAID controller are managing the disks.  And the controller may be caching writes which can lead to Bad Things happening on power failure.

Sometimes, you don't have a choice, and need to run ZFS on top of a RAID array.  You just need to be aware of the issues, and be willing to work around them or within their limits.


----------

