# AOC-USAS2-L8i zfs panics and SCSI errors in messages



## Sebulon (Oct 20, 2011)

Hi,

I'm in the process of migrating from a Sun/Oracle system to another Supermicro/FreeBSD system, doing zfs send/recv between them. Twice now, the system has panicked while not doing anything at all, and it throws a lot of SCSI/CAM-related errors during I/O-intensive operations like send/recv and resilver; zpool has also sometimes reported read/write errors on the hard drives. The best part is that the errors in messages concern just about all the hard drives at one time or another, even though they are connected with separate cables, controllers and caddies. Specs:


```
HW
1x  Supermicro X8SIL-F
2x  Supermicro AOC-USAS2-L8i
2x  Supermicro CSE-M35T-1B
1x  Intel Core i5 650 3.2GHz
4x  2GB 1333MHz DDR3 ECC UDIMM
10x SAMSUNG HD204UI (in a raidz2 zpool)
1x  OCZ Vertex 3 240GB (L2ARC)

SW
# uname -a
FreeBSD server 8.2-STABLE FreeBSD 8.2-STABLE #0: Mon Oct 10 09:12:25 UTC 2011     root@server:/usr/obj/usr/src/sys/GENERIC  amd64
# zpool get version pool1
NAME   PROPERTY  VALUE    SOURCE
pool1  version   28       default
```

I got the panic from the IPMI KVM (screenshot not reproduced here).
And an extract from /var/log/messages:

```
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): WRITE(10). CDB: 2a 0 6 13 66 f 0 0 f 0 
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): CAM status: SCSI Status Error
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): SCSI status: Check Condition
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): WRITE(6). CDB: a 0 1 b2 2 0 
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): CAM status: SCSI Status Error
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): SCSI status: Check Condition
Oct 19 17:37:19 fs2-7 kernel: (da6:mps1:0:0:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI command timeout on device handle 0x000c SMID 859
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI command timeout on device handle 0x000c SMID 495
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI command timeout on device handle 0x000c SMID 725
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI command timeout on device handle 0x000c SMID 722
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI command timeout on device handle 0x000c SMID 438
Oct 19 17:40:38 fs2-7 kernel: mps1: (1:4:0) terminated ioc 804b scsi 0 state c xfer 0
Oct 19 17:40:38 fs2-7 last message repeated 3 times
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_abort_complete: abort request on handle 0x0c SMID 859 complete
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_complete_tm_request: sending deferred task management request for handle 0x0c SMID 495
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_abort_complete: abort request on handle 0x0c SMID 495 complete
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_complete_tm_request: sending deferred task management request for handle 0x0c SMID 725
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_abort_complete: abort request on handle 0x0c SMID 725 complete
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_complete_tm_request: sending deferred task management request for handle 0x0c SMID 722
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_abort_complete: abort request on handle 0x0c SMID 722 complete
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_complete_tm_request: sending deferred task management request for handle 0x0c SMID 438
Oct 19 17:40:38 fs2-7 kernel: mps1: mpssas_abort_complete: abort request on handle 0x0c SMID 438 complete
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): WRITE(10). CDB: 2a 0 6 25 4f 75 0 0 b 0 
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): CAM status: SCSI Status Error
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI status: Check Condition
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): WRITE(10). CDB: 2a 0 2d a5 10 ca 0 0 80 0 
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): CAM status: SCSI Status Error
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI status: Check Condition
Oct 19 17:40:38 fs2-7 kernel: (da9:mps1:0:4:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Oct 19 17:45:40 fs2-7 kernel: (da1:mps0:0:1:0): SCSI command timeout on device handle 0x000a SMID 976
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): SCSI command timeout on device handle 0x000a SMID 636
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): SCSI command timeout on device handle 0x000a SMID 888
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): SCSI command timeout on device handle 0x000a SMID 983
Oct 19 17:45:41 fs2-7 kernel: mps0: (0:1:0) terminated ioc 804b scsi 0 state c xfer 0
Oct 19 17:45:41 fs2-7 last message repeated 2 times
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_abort_complete: abort request on handle 0x0a SMID 976 complete
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_complete_tm_request: sending deferred task management request for handle 0x0a SMID 636
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_abort_complete: abort request on handle 0x0a SMID 636 complete
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_complete_tm_request: sending deferred task management request for handle 0x0a SMID 888
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_abort_complete: abort request on handle 0x0a SMID 888 complete
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_complete_tm_request: sending deferred task management request for handle 0x0a SMID 983
Oct 19 17:45:41 fs2-7 kernel: mps0: mpssas_abort_complete: abort request on handle 0x0a SMID 983 complete
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): WRITE(10). CDB: 2a 0 6 40 a7 2 0 0 3 0 
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): CAM status: SCSI Status Error
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): SCSI status: Check Condition
Oct 19 17:45:41 fs2-7 kernel: (da1:mps0:0:1:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Oct 19 17:45:42 fs2-7 kernel: (da1:mps0:0:1:0): WRITE(10). CDB: 2a 0 6 40 b0 9 0 0 9 0 
Oct 19 17:45:42 fs2-7 kernel: (da1:mps0:0:1:0): CAM status: SCSI Status Error
Oct 19 17:45:42 fs2-7 kernel: (da1:mps0:0:1:0): SCSI status: Check Condition
Oct 19 17:45:42 fs2-7 kernel: (da1:mps0:0:1:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
```

What's going on?

/Sebulon


----------



## olav (Oct 20, 2011)

You should ask this question on the freebsd-scsi@freebsd.org mailing list. I believe Kenneth D. Merry (ken@FreeBSD.ORG) is the author of the mps driver.


----------



## Sebulon (Oct 20, 2011)

@olav

Excellent suggestion, thank you. I'll do that right away!

/Sebulon


----------



## Sebulon (Oct 25, 2011)

Update:

I have now experienced the exact same panic four times. It is the exact same panic every time, except that it happens on different CPUs. It always happens at exactly 03:01, when the daily periodic scripts run. I have tried shuffling over the same filesystem from the Oracle machine every time, and it has always had time to finish properly. Last time it finished and was idle for about 6 hours, working fine - I checked in at about 22:00 and looked at zpool status; it was clean. Restarting the machine and running periodic daily manually works. If I don't send any filesystems over, the machine is stable overnight, but once I've sent something over, it panics at 03:01.
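(Side note: the 03:01 timing lines up with the stock FreeBSD crontab schedule for periodic(8). On the live box this would simply be `grep periodic /etc/crontab`; the sketch below checks against a copy of the default 8.x entries instead, so it is safe to run anywhere.)

```shell
# The default FreeBSD crontab runs "periodic daily" at 03:01 - exactly
# when the panics occur. Shown here against the stock entries rather
# than the live /etc/crontab.
cat <<'EOF' | grep 'periodic daily'
1       3       *       *       *       root    periodic daily
15      4       *       *       6       root    periodic weekly
30      5       1       *       *       root    periodic monthly
EOF
```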

I am going to try shuffling over another filesystem to see if there's anything in that specific filesystem that causes the crash, or if it happens regardless of which filesystem has been received.

/Sebulon


----------



## mix_room (Oct 25, 2011)

Have you run all the periodic security scripts as well? There are a number of those that run at the same time. Maybe it's one of those.


----------



## Sebulon (Oct 25, 2011)

@mix_room

Yes, I received a security mail as well when I ran daily afterwards without issue. It could be something in either one of them.

Is there a way to follow them verbosely, to see at which step it panics perhaps?

/Sebulon


----------



## aragon (Oct 25, 2011)

It's probably one of the periodics that searches the file system:

/etc/periodic/weekly/310.locate
/etc/periodic/security/100.chksetuid

Although if they're causing a panic, the real problem is something else...


----------



## phoenix (Oct 25, 2011)

You'll want to edit /etc/locate.rc and make sure to uncomment the *PRUNEDIRS=".zfs"* line, and then add any directories you don't want searched to *PRUNEPATHS=""*.  The first one is most important as that prevents the 310.locate script from searching all of your snapshots recursively, blowing out the ARC, kmem, and other resources, thus locking up the system.  

Depending on the size of your pool, you may also want to add

```
daily_status_security_chksetuid_enable="NO"
```

to /etc/periodic.conf, as that runs a recursive find across the entire pool.

These two run at the same time and will bring a multi-TB pool system to its knees and then some, especially if anything else is accessing the pool at the same time (backups, send/recv, etc).
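A minimal sketch of the two changes, demonstrated on scratch copies so nothing under /etc is touched while experimenting (on the real system the same lines go into /etc/periodic.conf and /etc/locate.rc):

```shell
# Scratch-copy demo of the two tweaks described above.
tmp=$(mktemp -d)

# 1) Disable the daily recursive setuid audit:
printf '%s\n' 'daily_status_security_chksetuid_enable="NO"' >> "$tmp/periodic.conf"

# 2) Uncomment PRUNEDIRS so locate's updatedb skips .zfs snapshot dirs:
printf '%s\n' '#PRUNEDIRS=".zfs"' > "$tmp/locate.rc"
sed 's/^#PRUNEDIRS=/PRUNEDIRS=/' "$tmp/locate.rc" > "$tmp/locate.rc.new"

cat "$tmp/periodic.conf" "$tmp/locate.rc.new"
rm -r "$tmp"
```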


----------



## Sebulon (Oct 26, 2011)

Hey,

/etc/periodic.conf

```
daily_status_security_chksetuid_enable="NO"
```
Worked like a charm! I can now run the periodic scripts without the machine panicking, thank you!

The question about the timeouts still remains though.

/Sebulon


----------



## aragon (Oct 26, 2011)

Sebulon said:

> The question about the timeouts still remains though.


FWIW, the chksetuid periodic was causing one of my ZFS systems to lock up too, but the real cause turned out to be a failing SATA controller (to which our ZIL and L2ARC were attached).


----------



## danbi (Nov 30, 2011)

phoenix said:

> You'll want to edit /etc/locate.rc and make sure to uncomment the *PRUNEDIRS=".zfs"* line, and then add any directories you don't want searched to *PRUNEPATHS=""*.  The first one is most important as that prevents the 310.locate script from searching all of your snapshots recursively, blowing out the ARC, kmem, and other resources, thus locking up the system.



According to the comments in that file, the commented-out variables show the defaults, so .zfs directories should be ignored by default. Also, they would only be included if already mounted; otherwise they are invisible. But it never hurts to be sure.


----------



## peetaur (Dec 21, 2011)

What firmware version did you try?

I tried versions 10 and 11 IR, and version 11 IT. All failed the same way. Today my server even lost a disk before it started booting (dmesg/messages have no trace of the disk).

In this thread, the guy was told to upgrade to phase 9, which failed horribly for him, and he then tried a fixed phase 8 which worked for him (though his issue may be different). So I am considering trying that version, but I'm sure downgrading will be a hassle, unless it is as simple as erasing the new firmware before installing the old one.


----------



## frijsdijk (Dec 22, 2011)

phoenix said:

> You'll want to edit /etc/locate.rc and make sure to uncomment the *PRUNEDIRS=".zfs"* line, and then add any directories you don't want searched to *PRUNEPATHS=""*.  The first one is most important as that prevents the 310.locate script from searching all of your snapshots recursively, blowing out the ARC, kmem, and other resources, thus locking up the system.



Where did you get that var PRUNEDIRS from? It's not in my /etc/locate.rc nor can I find it in any manpage about locate (or any related manpages). I'm running 8.2-RELEASE-p4.


----------



## Sebulon (Dec 22, 2011)

@peetaur

I flashed mine to the phase 10 IT firmware. I am so sorry to hear you are still struggling, Peter, I really am. These kinds of issues are really the ones causing hair loss in us poor sysadmins. Some guys posted a reply to my original mail thread that _may_ help:


> On 11-11-07 03:56 AM, Rich wrote:
> > Observation - the LSI SAS expanders, in my experience, sometimes
> > misbehave when there are drives which respond slower than some timeout
> > to commands (as far as I've seen it's only SATA drives it does this
> ...



My problems, however, seem to have been completely solved after I got in touch with fs@freebsd.org and had my panics fixed. It turns out that once the filesystems were all replicated, the system panicked as soon as any of them were mounted and I (or the system) tried to do anything with them. That can be read about here:
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=202238+0+/usr/local/www/db/text/2011/freebsd-fs/20111218.freebsd-fs
My deepest thanks to Andriy Gapon, Martin Matuska and Pawel Jakub Dawidek for their assistance!

The SCSI timeouts haven't reappeared since I applied these loader variables:
/boot/loader.conf

```
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="1"
```
First I started a scrub and let it run for three or four days without any issues, besides going awfully slowly because of the dedup: about 5MB/s, with an estimated 300+ hours remaining. I then aborted the scrub and instead started sending about 2TB of non-deduped data into the pool over SMB at about 50MB/s, which took about a day or so. No errors. Then I ran a 1TB zfs send/recv from the machine into another BSD system at about 50MB/s, also without errors. As a last test I'm going to send that 1TB stream back again to see if there are any errors. But after doing that, I'm going to be a very happy camper, all in all.
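(For anyone copying these tunables: they are read at boot time, so it's worth a quick grep before rebooting to confirm both lines made it into loader.conf. Sketched below against a scratch copy rather than the real /boot/loader.conf, where the check would just be `grep _pending /boot/loader.conf`.)

```shell
# Sanity check that both vdev queue-depth tunables are present.
# Demonstrated on a scratch file standing in for /boot/loader.conf.
f=$(mktemp)
printf '%s\n' 'vfs.zfs.vdev.min_pending="1"' 'vfs.zfs.vdev.max_pending="1"' > "$f"
grep -c '_pending="1"$' "$f"    # both lines present -> prints 2
rm "$f"
```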

/Sebulon


----------



## phoenix (Dec 22, 2011)

frijsdijk said:

> Where did you get that var PRUNEDIRS from? It's not in my /etc/locate.rc nor can I find it in any manpage about locate (or any related manpages). I'm running 8.2-RELEASE-p4



It could be a hold-over from previous versions, as our original storage boxes started with FreeBSD 7.0 and have been upgraded to 8.2-STABLE over the years.


----------



## peetaur (Dec 23, 2011)

My primary system is without dedup. I don't know what runs to make it crash, but it does. I don't really want to purposely crash it.

On the secondary system, I enabled dedup just to test (and not happy with performance). I tried my best to crash it...

I tried running all of these at the same time (causing a load over 20 for a day):

`# find . -exec dd if="{}" of=/dev/null bs=128k \;`
`# cd /etc/periodic/daily; while true; do for f in $(ls -1); do ./$f; done; done`
`# cd /etc/periodic/security; while true; do for f in $(ls -1); do ./$f; done; done`
and filebench with 12 threads going non-stop with the randomrw test

And that is the system with the cheapo 5k RPM consumer-grade SATA disks. If one crashes, why not that one? The crashing one has 7k RPM enterprise SATA. All the rest of the hardware (board, chassis, HBA, expanders, memory, etc.) is the same. Also, both have the consumer SSDs. (Yesterday I replaced the SSDs with enterprise spinning disks to see if it still crashes.)

And thank you for your sympathy/empathy. It is a terrible experience. I was planning on putting a web server on a VM, and I can't do that until the system is stable. And it consumes so much time.

And I also tried the settings that worked for you, except I am using phase 11 instead of 10 of the IT firmware. Should I try phase 10 again?

Here is my new thread with a list of things I tried: http://forums.freebsd.org/showthread.php?t=28252

And I saw Doug's message about the SMP timeouts. But I don't want to use a 9.0 prerelease, and haven't yet found something that does the same thing in 8-STABLE. I'll email him now.


----------



## Sebulon (Dec 29, 2011)

@peetaur



> And I also tried the settings that worked for you, except I am using phase 11 instead of 10 of the IT firmware. Should I try phase 10 again?


Well, at least it couldn't hurt, right? I don't think it'll make a difference though.
Also worth mentioning is that I'm using the FreeBSD version of the mps driver, and not LSI's.

I have now completed the 1TB zfs send/recv back into the machine; it wrote at about 30MB/s for 12 hours or so without errors, so I am very pleased with the result.

/Sebulon


----------



## Sebulon (Jan 20, 2012)

Yesterday, I had another disk disappearing after this was printed into messages:

```
Jan 18 07:38:19 fs2-7 kernel: (da6:mps1:0:0:0): SCSI command timeout on device handle 0x0009 SMID 130
Jan 18 07:39:02 fs2-7 kernel: mps1: (1:0:0) terminated ioc 804b scsi 0 state c xfer 0
Jan 18 07:39:02 fs2-7 kernel: mps1: mpssas_abort_complete: abort request on handle 0x09 SMID 130 complete
Jan 18 07:39:02 fs2-7 kernel: mps1: (1:0:0) terminated ioc 804b scsi 0 state c xfer 0
Jan 18 07:39:02 fs2-7 kernel: mps1: (1:0:0) terminated ioc 804b scsi 0 state c xfer 0
Jan 18 07:39:02 fs2-7 kernel: mps1: (1:0:0) terminated ioc 804b scsi 0 state 0 xfer 0
Jan 18 07:39:02 fs2-7 kernel: mps1: (1:0:0) terminated ioc 804b scsi 0 state 0 xfer 0
Jan 18 07:39:02 fs2-7 kernel: mps1: mpssas_remove_complete on target 0x0000, IOCStatus= 0x8
Jan 18 07:39:02 fs2-7 kernel: (da6:mps1:0:0:0): lost device
Jan 18 07:39:02 fs2-7 kernel: (da6:mps1:0:0:0): Synchronize cache failed, status == 0xa, scsi status == 0x0
Jan 18 07:39:02 fs2-7 kernel: (da6:mps1:0:0:0): removing device entry
```

I have now installed a different version of the mps driver, the closed mpslsi driver provided by LSI, after a tip from peetaur that it can handle certain types of errors better than the FreeBSD version. It's working fine so far, but I will keep posting updates, since it also took quite a while before the last disk went bye-bye. Fingers crossed!

/Sebulon


----------



## Sebulon (Jan 26, 2012)

My SSD cache-drive gone fishing:

```
Jan 26 00:08:17 fs2-7 kernel: mpslsi0: mpssas_scsiio_timeout checking sc 0xffffff80003a5000 cm 0xffffff80003e8498
Jan 26 00:08:17 fs2-7 kernel: (da5:mpslsi0:0:6:0): WRITE(10). CDB: 2a 0 1 cc f1 f1 0 0 8 0 length 4096 SMID 603 command timeout cm 0xffffff80003e8498 ccb 0xffffff00076db800
Jan 26 00:08:17 fs2-7 kernel: mpslsi0: mpssas_alloc_tm freezing simq
Jan 26 00:08:17 fs2-7 kernel: mpslsi0: timedout cm 0xffffff80003e8498 allocated tm 0xffffff80003b8148
Jan 26 00:08:21 fs2-7 kernel: (da5:mpslsi0:0:6:0): WRITE(10). CDB: 2a 0 1 cc f1 f1 0 0 8 0 length 4096 SMID 603 completed timedout cm 0xffffff80003e8498 ccb 0xffffff00076db800 during recovery ioc 8048 scsi 0 state c xfer (noperiph:mpslsi0:0:6:0): SMID 1 abort TaskMID 603 status 0x4a code 0x0 count 1
Jan 26 00:08:21 fs2-7 kernel: (noperiph:mpslsi0:0:6:0): SMID 1 finished recovery after aborting TaskMID 603
Jan 26 00:08:21 fs2-7 kernel: mpslsi0: mpssas_free_tm releasing simq
Jan 26 00:08:58 fs2-7 kernel: (da5:mpslsi0:0:6:0): WRITE(10). CDB: 2a 0 1 cc f1 f1 0 0 8 0 length 4096 SMID 1000 terminated ioc 804b scsi 0 state c xfer 0
Jan 26 00:08:59 fs2-7 kernel: mpslsi0: mpssas_alloc_tm freezing simq
Jan 26 00:08:59 fs2-7 kernel: mpslsi0: mpssas_lost_target targetid 6
Jan 26 00:08:59 fs2-7 kernel: (da5:mpslsi0:0:6:0): lost device
Jan 26 00:08:59 fs2-7 kernel: mpslsi0: mpssas_remove_complete on handle 0x000e, IOCStatus= 0x0
Jan 26 00:08:59 fs2-7 kernel: mpslsi0: mpssas_free_tm releasing simq
Jan 26 00:09:04 fs2-7 kernel: (da5:mpslsi0:0:6:0): Synchronize cache failed, status == 0x39, scsi status == 0x0
Jan 26 00:09:04 fs2-7 kernel: (da5:mpslsi0:0:6:0): removing device entry
```

/Sebulon


----------



## Sebulon (Jan 27, 2012)

Updated firmware on the cache-drive today:

```
# smartctl -i /dev/da5
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-STABLE amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     SandForce Driven SSDs
Device Model:     OCZ-VERTEX3
Serial Number:    OCZ-OWU0P68L5G5108T5
LU WWN Device Id: 5 e83a97 f53e361b4
Firmware Version: 2.15
User Capacity:    240,057,409,536 bytes [240 GB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ACS-2 revision 3
Local Time is:    Fri Jan 27 11:07:16 2012 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
```
Popped it back in and it was added as a device in the system again. It's now back in the pool as cache, and now I'll just wait and see.

/Sebulon


----------



## peetaur (Apr 10, 2012)

It has been over 2 months since your last post. The suspense is killing me! Did it work?


----------



## Sebulon (Apr 11, 2012)

@peetaur

Thanks for reminding me. Yeah, it seems the FW update did the trick; it hasn't gone AWOL since. Sad to see so much buggy FW on SSDs, especially on SandForce controllers, but it's good that it gets fixed quickly.

/Sebulon


----------



## debguy (Apr 12, 2012)

Off the cuff, the kernel panic shows zfs functions on the stack. zfs isn't advertised as a stable network file sharing deal (none are). How about iscsi or rscsi or hardware RAID solutions et al?

Use something else - I wouldn't touch software-based file sharing unless someone was paying for that solution and that one only (until it's terribly stable, don't mess around unless you've got the time).

On the other hand, the (RAID controller) log says

```
Power on, reset, or bus device reset occurred
```
Probably it's right! And maybe you have a bad cable or drive, wrong terminating resistor, as it says, or a flaky RAID controller.

Your new machine may not work - don't wait over 30 days to ask for an RMA if need be, or at least call their tech staff and let them know.

SCSI hardware is complicated, and drivers for software SCSI are complicated (yet they used to be rock solid anyway) - new RAID drivers and SCSI 3? Not always better than rock-solid hardware SCSI 2.

What I mean is: you have had to hack a lib to get it to work this time, and you *think* it's fixed.

But when does it break again? Will you be able to fix it so easily next time?


----------



## phoenix (Apr 16, 2012)

Debguy:  you do realise that *all* network filesystems are software-based, right?  There's no such thing as "hardware-based file sharing".

Also, you do realise that ZFS has been around for almost a decade, right?

And, that the mps(4)-based controller in this situation is *NOT* a RAID controller, but a plain-jane SATA controller, right?

Also, you do realise that SATA is not SCSI, and that SATA and SAS (which is SCSI) are not parallel protocols, and that the cables they use do not have terminators/resistors, right?

IOW, your entire post is not relevant to this discussion.  Please read the entire thread, understand the entire thread, understand the technologies involved, understand how it all works on FreeBSD before replying.

I have yet to remember a single reply of yours that was on-topic and relevant.


----------



## Sebulon (Apr 17, 2012)

Marking thread as solved. I haven't had any issues since.

@debguy

<- What phoenix said.

/Sebulon


----------

