# NAS using ZFS with FreeBSD, run by a Solaris SysAdmin



## ctengel (Jan 2, 2012)

Pre-script: I realize I've laid out my whole situation below, so if you have time please read it and give suggestions on the whole thing, but the primary questions/concerns I still have at this point are as follows:


- Are there differences or gotchas between the Solaris ZFS implementation and FreeBSD's ZFS implementation?
- Anything noteworthy (especially if it's something I mention I plan on using below) in FreeBSD 9 that is not in 8.2?
- Ability to read ext2/3/4, FAT, NTFS?
- What's BluRay burning like on a SATA drive?
- Any issues with a CF card for a rootdisk? What's the best way to mirror? What's the minimum size?

Thanks!

I'm new to FreeBSD.  I've used Linux at home and school for 10 years, and been a Solaris SysAdmin professionally for the past 3 years.

I decided recently to build my own home NAS.  From my work with Solaris 10, I've grown fond of ZFS. I realize that many of its capabilities exist in some form in other LVMs and/or filesystems, but I can't seem to find one that's as easy or as free.

FreeBSD was not the first OS I looked at.  Ideally I'd just get some new Sun/Oracle hardware and run Solaris 11 on it, but that option is expensive.  I looked at Linux. Btrfs doesn't seem to be ready for production yet.  I can do ZFS via FUSE, but since that's the main point of this box, that seems silly. I looked at the LLNL kernel port of ZFS, but aside from the obvious CDDL/GPL licensing issues, it does not yet have a production version with a working ZPL (ZFS POSIX Layer). I looked at the OpenSolaris forks (especially OpenIndiana), but from messing with it for a bit, it doesn't seem ready for primetime.

So after that I looked at FreeBSD.  I am aware of the FreeNAS fork, but it doesn't seem like I'd gain much from that other than a WebGUI, and I don't need that.  I've heard many good things about FreeBSD in general, and out of all the free/open ZFS implementations out there, it seems to have the best.  Also I like the idea of the ports system, and have used Gentoo's Portage, which I'm told is somehow derived from ports.

I've installed FreeBSD 9 RC3 in a VirtualBox environment to play around with it, and have been reading up a lot on FreeBSD documentation.  I have not 100% decided on FreeBSD yet, and there is some hardware I still need to buy, but I just wanted to run through my general plan to see if anyone more experienced with FreeBSD could answer still-unanswered questions, provide tips, or perhaps most importantly, point out any "gotchas" that I, an inexperienced FreeBSD user, wouldn't be able to see and avoid.  Since this ideally will be a NAS with a lot of data on it, I want to get this right the first time so I don't have to reinstall the OS or redo the storage or something like that.


 I'm using existing old AMD Athlon64-based hardware, single core; the motherboard is probably an MSI KT4V.  I'm procuring enough DDR RAM to max it out at about 3 GB.  Going to throw in an old PCI graphics card just for install/maintenance purposes. I had issues with getting Linux to play nice with the on-board NIC.  If I have the same issue with FreeBSD, I have some PCI NICs, and I might jump to 1 Gbps anyway.

 For the OS root/boot disk (and probably a mirror too), I think I'm going to go with an IDE-CF adapter; that way I'm not filling up a HD bay and I can be sure the BIOS can boot off it. (vs a PCI SATA card)  I don't see any reason for these to be ZFS, but they can be.  UFS or FFS is fine with me.  If I decide to mirror them, though, I guess I need VVM and/or GEOM?  (I'm still reading up on these...) Also, any recommendations on the minimum size of the CF cards?  While I do want the OS separate from storage, I'm OK with throwing in a ZVOL for swap on the main storage array.

 Despite the fact that I am playing around with a 9 RC, I'd like to use the latest available production release (8.2) for the NAS.  I'm hoping that none of the features I plan to use (see below) are new in 9... if not, I can consider 9 or hold off until 9.0 is released.

 For the storage array, I'm probably doing a stripe of sets of two mirrored drives, of sizes 2-3 TB.  They'll be attached to two SATA controllers on two PCI cards.  (I'll consult the FreeBSD HCL before buying.)  These will all be one ZFS zpool.  Is there any point to partitioning them at all (fdisk? gpt?) or doing any kind of LVM? (VVM? GEOM?)  As I mentioned above, aside from NAS storage, I may put some swap on the pool if necessary, and perhaps install additional apps there if the CF cards are too small and ports can be set up to install there.

 The NAS data will be organized in various ZFS datasets on the one zpool, with appropriate settings set.  I am assuming all of the Solaris options for zpool and zfs are supported in FreeBSD?  (aside from, of course, some of the new v33 features introduced in Solaris 11)  Any gotchas for things that don't work as in Solaris?

 Initially, this unit will primarily only be a NAS, sharing via SMB/CIFS and NFS.  My understanding is that while ZFS can be set to export via NFS automatically, the CIFS capability hasn't yet been set up on FreeBSD, but this isn't an issue for me because I know my way around Samba.  (I'm assuming that Samba on FreeBSD has all the features of Samba on Linux.)

 One thing I'd like to do is put in a BluRay burner as a way of doing offsite backups using "zfs send."  For me, CD/DVD burning on Linux and Solaris can sometimes be a pain (less so on Linux now than it was a number of years ago).  I have never done any work with BluRays, but apparently cdrtools now supports that.  Is this known to work well in FreeBSD?
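What I have in mind is roughly the following sketch. The pool/dataset and device names are made up, and I'm assuming growisofs from the sysutils/dvd+rw-tools port, which I gather added Blu-ray support in version 7:

```
# snapshot the dataset and serialize the stream to a file
zfs snapshot tank/data@offsite-2012-01
zfs send tank/data@offsite-2012-01 | gzip > /tmp/offsite-2012-01.zfs.gz

# burn the stream file onto a BD-R as a regular file in an ISO image
# (/dev/cd0 is an example device name)
growisofs -Z /dev/cd0 -R -J /tmp/offsite-2012-01.zfs.gz
```

Restoring would just be mounting the disc and piping the file back through `gunzip | zfs receive`.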

 Another thing this may be used for in the future is as an iSCSI target.  It seems that while targets cannot be set up in ZFS itself, I can make a ZVOL and set up a target daemon from ports.  Also, in the more distant future, I may like to reach additional storage on another array directly via iSCSI, but it seems that FreeBSD has built-in iSCSI initiator functionality, so that is good.

  Assuming this works well, I would like to migrate some of the "always-on" stuff from what is now a distinct Linux box onto this new FreeBSD box, to save power consumption.  This would include an apache daemon, a bittorrent client, and that sort of thing.  Everything I would want seems to be available as ports.  Any reason why these can't all be installed onto their own dataset on the zpool with the NAS data? (as opposed to the CF card with the base OS)

  One of my goals in doing this NAS is to try to centralize data that I now have scattered over a bunch of different filesystem types, while keeping redundancy and promoting backups.  I don't need to be able to write to any of these, but can I read from ext2, ext3, ext4, FAT16, FAT32, NTFS, and reiserfs? Any compatibility issues with reading Solaris-created UFS or ZFS? Can FreeBSD understand partition tables/slices from all the other major OSs?

 I think I'd like to generally keep up with the latest FreeBSD production release, ideally either via an upgrade path, or even by swapping out CF "rootdisk" cards for clean installs.  Do ports usually have to be reinstalled after an upgrade? Maybe ZFS for the rootdisk isn't a bad idea if I can snapshot prior to upgrade.
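If I did go ZFS-on-root, the snapshot safety net I'm picturing would just be something like this (the pool name zroot is hypothetical):

```
# recursive snapshot of the root pool before touching anything
zfs snapshot -r zroot@pre-upgrade

# if the upgrade goes badly, roll the root filesystem back
zfs rollback zroot@pre-upgrade
```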
I appreciate any advice.  Thanks for your time.


----------



## vermaden (Jan 2, 2012)

> -Are there differences or gotchas between the Solaris ZFS implementation and FreeBSD's ZFS implementation?


Boot Environments are a lot harder to use on FreeBSD: http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE

Setting various ZFS settings is done via /boot/loader.conf OIDs whose names will probably differ from those used on Solaris.

Besides these, I haven't spotted any others.
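As a sketch, the knobs live in /boot/loader.conf; the tunable names below are the FreeBSD ones, and the values are only illustrative for a small-memory box, not recommendations:

```
# /boot/loader.conf -- load ZFS and cap the ARC (example values)
zfs_load="YES"
vfs.zfs.arc_max="1024M"        # limit the ARC so the rest of the system keeps some RAM
vfs.zfs.prefetch_disable="1"   # often suggested on low-memory machines
```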



> -Anything noteworthy (especially if it's something I mention I plan on using below) in FreeBSD 9 that is not in 8.2?


Sure: http://ivoras.net/freebsd/freebsd9.html



> -Ability to read ext2/3/4, FAT, NTFS?


Read, yes. Don't know about ext4; it has two modes. One is compatible with ext3, and that one should be possible to read; don't know about the ext4 mode that is incompatible with ext3.



> -What's BluRay burning like on a SATA drive?


Don't know, haven't got one.



> -Any issues with a CF card for a rootdisk? What's the best way to mirror? What's the minimum size?



I use an 8 GB CF card for my home NAS, but even 512 MB will do.

I will answer the rest of your questions tomorrow or later (if I find some more time today).


----------



## ctengel (Jan 2, 2012)

vermaden said:

> Boot Environments are a lot harder to use on FreeBSD: http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE
> 
> Setting various ZFS settings is done via /boot/loader.conf OIDs whose names will probably differ from those used on Solaris.
> 
> Besides these, I haven't spotted any others.



Hmm, interesting.  I hadn't even thought through using ZFS for my boot CF card until you mentioned it.  I use ZFS for Live Upgrade on Solaris 10 all the time; we patch all our servers twice a year, and sometimes more if necessary. (Much better than when we had UFS roots "encapsulated" by VxVM. Yuck!) I haven't actually done a Solaris 11 upgrade yet, but would do so using beadm, which the manageBE tool you mention seems to roughly take the place of.  It doesn't look like part of base; is that considered a port?

I need to read up a bit more on how exactly FreeBSD upgrades take place (from what I can tell, it seems like I'd want to go from one -RELEASE to the next -RELEASE for this box, as I don't need this box to get new stuff terribly fast, aside from security fixes, which I assume come out for any non-EOLed RELEASEs, correct?) to figure out whether I would want to bother with this.  I am kind of under the impression that a FreeBSD release base ought to just work, so I'm not terribly concerned, as long as the upgrade path is smooth; maybe make a backup of the rootdisk to the NAS zpool before upgrading just to be safe.

The idea of a ZFS root does not seem to be supported by either sysinstall (8.x) or bsdinstall (9.x) directly, so maybe as someone new to FreeBSD it's not the best idea for me. Just reading over the docs on manageBE, I'm also confused about what all this sending and receiving of the FS is about. Isn't that what ZFS clones are for?



			
vermaden said:

> Sure: http://ivoras.net/freebsd/freebsd9.html



Thanks for that; that's the sort of summary I was looking for.  Sometimes it's hard to track down exactly what's new in a new release that hasn't even been released yet!



			
vermaden said:

> Read, yes. Don't know about ext4; it has two modes. One is compatible with ext3, and that one should be possible to read; don't know about the ext4 mode that is incompatible with ext3.



Hmm.  Good to know that the others are supported.  I did a bit of poking around and came up with this: https://github.com/gerard/ext4fuse but I can't seem to find anyone using it with FreeBSD. (As it is, many Linux users avoid ext4!) My understanding is that FreeBSD has a FUSE implementation available in ports, so if I had an ext4 implementation for FUSE, that would work, right?



			
vermaden said:

> I use an 8 GB CF card for my home NAS, but even 512 MB will do.



Cool.  I don't know if I would do it that small (I had issues trying to bsdinstall 9-RC3 on a 512 MB virtual disk; 1 GB worked though!), but it's good to know that I don't necessarily need to shell out for a big one.  I think I'd put most of the ports, source, etc. on the NAS zpool, in their own dataset of course. (I'm still learning the FreeBSD directory tree, but I guess /usr and /var can safely be sent off?  Assuming I can get the installer to put them elsewhere, or install ports and so forth after.)

Just out of curiosity, do you have your CF card formatted UFS or ZFS?

Thanks for your quick initial reply and I look forward to seeing what you have to say about the rest!


----------



## phoenix (Jan 2, 2012)

Note:  all of your questions could have been answered via the search feature of the forums.  



			
ctengel said:

> Are there differences or gotchas between the Solaris ZFS implementation and FreeBSD's ZFS implementation?



Solaris 11 includes ZFSv30-something, which includes support for integrated encryption.  FreeBSD (and everyone other than Oracle) uses ZFSv28, the last fully open-sourced version of ZFS.  If you need/want to encrypt your pool on FreeBSD, then you have to use GEOM-based encryption (GELI) below the pool.  And you can't use a ZFSv29+ pool (created with Solaris 11) with FreeBSD, since it won't understand the pool format.



> Anything noteworthy (especially if it's something I mention I plan on using below) in FreeBSD 9 that is not in 8.2?



You will want to use 9.0 or 8.2-STABLE (aka RELENG_8).  There have been a lot of performance, stability, and other fixes for ZFS since the release of 8.2.  So either install the latest 9.0 RC and then upgrade to 9.0-RELEASE, or install 8.2 and then upgrade to the latest 8-STABLE.



> Ability to read ext2/3/4, FAT, NTFS?
> What's BluRay burning like on a SATA drive?



FreeBSD can read/write ext2; read/write ext3 in non-journalled (aka ext2-compat) mode, although the filesystem will show as "dirty" and need an fsck if mounted in Linux after writing to it in FreeBSD; there's no support (that I know of) for ext4.

The built-in NTFS support allows for read-only access; write is a little dangerous.  There is a port that provides better read/write support for NTFS (sysutils/fusefs-ntfs).

FAT12 through FAT32x support is provided with the OS.  There's no support (that I know of) for exFAT.



> Any issues with a CF card for a rootdisk? What's the best way to mirror? What's the minimum size?



Depending on the CF card, you may need to disable DMA for all ata-based devices.  IDE-CF adapters are really bad for this.  If at all possible, get high-quality, DMA-supporting CF cards, and a SATA-CF adapter.

We have one ZFS-based storage server using 2x 2 GB CF cards in IDE-CF adapters configured in a gmirror(8)-based mirror, and another one using 2x 4 GB CF cards in SATA-CF adapters, also configured in a gmirror(8)-based mirror.  The SATA-based one works so much nicer than the IDE-based one.
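For reference, the DMA workaround is a single loader tunable for the old ata(4) driver; this assumes the 8.x-era ata(4) stack, and note it slows down everything on that bus:

```
# /boot/loader.conf -- force PIO mode for all ata(4) devices
# (slow, but avoids hangs with cheap CF cards in IDE-CF adapters)
hw.ata.ata_dma="0"
```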



> I'm using existing old AMD Athlon64-based hardware, single core; the motherboard is probably an MSI KT4V.  I'm procuring enough DDR RAM to max it out at about 3 GB.  Going to throw in an old PCI graphics card just for install/maintenance purposes. I had issues with getting Linux to play nice with the on-board NIC.  If I have the same issue with FreeBSD, I have some PCI NICs, and I might jump to 1 Gbps anyway.



3 GB will not be enough RAM if you want to put a lot of data into the pool, and have decent performance.  If possible, install the 64-bit version of FreeBSD and max out the RAM.  The more, the merrier.  There's no such thing as "too much RAM" in a ZFS system.  



> For the storage array, I'm probably doing a stripe of sets of two mirrored drives, of sizes 2-3 TB.  They'll be attached to two SATA controllers on two PCI cards.  (I'll consult the FreeBSD HCL before buying.)  These will all be one ZFS zpool.  Is there any point to partitioning them at all (fdisk? gpt?) or doing any kind of LVM? (VVM? GEOM?)  As I mentioned above, aside from NAS storage, I may put some swap on the pool if necessary, and perhaps install additional apps there if the CF cards are too small and ports can be set up to install there.



I'd recommend using gpart(8) to create a partition starting at the 1 MB boundary, covering the rest of the disk, and label the partition with a name based on where the disk is physically located in the chassis.  Then use the /dev/gpt/<labelname> entries to create the pool.  You'll get output similar to this:

```
$ zpool status
  pool: storage
 state: ONLINE
 scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        storage          ONLINE       0     0     0
          raidz2-0       ONLINE       0     0     0
            gpt/disk-a1  ONLINE       0     0     0
            gpt/disk-a2  ONLINE       0     0     0
            gpt/disk-a3  ONLINE       0     0     0
            gpt/disk-a4  ONLINE       0     0     0
            gpt/disk-b1  ONLINE       0     0     0
          raidz2-1       ONLINE       0     0     0
            gpt/disk-b2  ONLINE       0     0     0
            gpt/disk-b3  ONLINE       0     0     0
            gpt/disk-b4  ONLINE       0     0     0
            gpt/disk-c1  ONLINE       0     0     0
            gpt/disk-c2  ONLINE       0     0     0
          raidz2-2       ONLINE       0     0     0
            gpt/disk-c3  ONLINE       0     0     0
            gpt/disk-c4  ONLINE       0     0     0
            gpt/disk-d1  ONLINE       0     0     0
            gpt/disk-d2  ONLINE       0     0     0
            gpt/disk-d3  ONLINE       0     0     0
        cache
          gpt/cache1     ONLINE       0     0     0
          gpt/cache2     ONLINE       0     0     0

errors: No known data errors
```
Then, if you run into any issues with dying disks, it'll be easy to figure out exactly which one.
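The per-disk setup is only a couple of gpart(8) commands. Device and label names here are examples, and the -a 1m alignment flag is the 9.x shortcut (on 8.2 you'd compute the starting offset with -b instead):

```
# GPT-label a disk, align the data partition at 1 MB, name it after its bay
gpart create -s gpt ada0
gpart add -t freebsd-zfs -a 1m -l disk-a1 ada0

# then build the pool from the stable /dev/gpt/* names
zpool create storage mirror gpt/disk-a1 gpt/disk-a2
```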



> The NAS data will be organized in various ZFS datasets on the one zpool, with appropriate settings set.  I am assuming all of the Solaris options for zpool and zfs are supported in FreeBSD?  (aside from, of course, some of the new v33 features introduced in Solaris 11)  Any gotchas for things that don't work as in Solaris?



FreeBSD's version of ZFS does not include integrated NFS or CIFS support.  The *sharenfs* property on FreeBSD just gets written out to /etc/zfs/exports verbatim, and that file is passed to the normal NFS daemons.  The *sharesmb* property doesn't do anything on FreeBSD; you have to install and use Samba if you want to export filesystems via SMB/CIFS.
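So in practice you still use the normal zfs(8) property syntax, but the value is just an exports(5) option string passed through verbatim (the network below is an example):

```
# options are written verbatim to /etc/zfs/exports and served by the stock NFS daemons
zfs set sharenfs="-maproot=root -network 192.168.1.0/24" storage/media

# sharesmb is a no-op on FreeBSD; install net/samba from ports for SMB/CIFS
```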

No idea about Blu-Ray support, never touched a Blu-Ray disc, let alone a drive.


----------



## ctengel (Jan 3, 2012)

phoenix said:

> Note:  all of your questions could have been answered via the search feature of the forums.


Perhaps I should have phrased my post differently.  My idea in posting here was not so much to ask specific questions, but to run my general "plan" (which makes perfect sense in my head and would surely work well on the OS's I'm more familiar with) by some people who are experienced FreeBSD users and may have pointers on how to do it in a more BSD-style way.  I guess the five questions I mentioned just resulted from the concerns that seemed most likely to cause compatibility issues with FreeBSD.



			
phoenix said:

> Solaris 11 includes ZFSv30-something, which includes support for integrated encryption.  FreeBSD (and everyone other than Oracle) uses ZFSv28, the last fully open-sourced version of ZFS.  If you need/want to encrypt your pool on FreeBSD, then you have to use GEOM-based encryption (GELI) below the pool.  And you can't use a ZFSv29+ pool (created with Solaris 11) with FreeBSD, since it won't understand the pool format.



Most of my ZFS usage so far has been on Solaris 10, which I think is older than v28, so I'm not expecting any of those new features, and I don't need whole-filesystem encryption.  Good to note, though, in case I ever want to import any Solaris-created zpools.  I need to find out if there's a way to tell Solaris 11 to use an old zpool version, but obviously that's off topic here. 



			
phoenix said:

> You will want to use 9.0 or 8.2-STABLE (aka RELENG_8).  There have been a lot of performance, stability, and other fixes for ZFS since the release of 8.2.  So either install the latest 9.0 RC and then upgrade to 9.0-RELEASE, or install 8.2 and then upgrade to the latest 8-STABLE.



In all the OS's I've used, each has a very different philosophy/approach to patching/upgrading, so it seems I still have some more to learn about how the whole release/versioning thing works for FreeBSD.  I've read the following Linux vs. FreeBSD comparison/guide, and this section addresses it: http://www.over-yonder.net/~fullermd/rants/bsd4linux/05, but I think I'm still confused.  Based on what you are saying, I guess I should initially install the latest release (or release candidate) of a particular major version (8.x or 9.x), and then from there upgrade to, and keep in sync with, -STABLE (as opposed to -RELEASE) to ensure I get all the bugfixes?  Is there a way to get just bugfixes for a particular -RELEASE?



			
phoenix said:

> FreeBSD can read/write ext2; read/write ext3 in non-journalled (aka ext2-compat) mode, although the filesystem will show as "dirty" and need an fsck if mounted in Linux after writing to it in FreeBSD; there's no support (that I know of) for ext4.
> 
> The built-in NTFS support allows for read-only access, write is a little dangerous.  There is a port that provides better read/write support for NTFS (sysutils/fusefs-ntfs).
> 
> FAT12 through FAT32x support is provided with the OS.  There's no support (that I know of) for exFAT.



After further searching it seems there is ext4 support via FUSE in ports: sysutils/fusefs-ext4fuse

Luckily, aside from UFS, FFS, ZFS, ISO/UDF (for CD/DVD/BluRay, but those would be fine with mkisofs, etc.), and some network-based filesystems, I don't believe I need write support for any filesystems, as long as I can read from the filesystems of other OS's.  And luckily, in my experience with Linux, mounting a strange filesystem read-only is usually much safer than read-write. 



			
phoenix said:

> Depending on the CF card, you may need to disable DMA for all ata-based devices.  IDE-CF adapters are really bad for this.  If at all possible, get high-quality, DMA-supporting CF cards, and a SATA-CF adapter.
> 
> We have one ZFS-based storage server using 2x 2 GB CF cards in IDE-CF adapters configured in a gmirror(8)-based mirror, and another one using 2x 4 GB CF cards in SATA-CF adapters, also configured in a gmirror(8)-based mirror.  The SATA-based one works so much nicer than the IDE-based one.



Hmm.  The CF boot cards were the only thing I needed to put on IDE/ATA (except for maybe a temporary CD-ROM to boot the FreeBSD install disk from).  The motherboard has no SATA controller, so I'm not sure if I can boot off the separate PCI-card-based SATA controllers I plan to install.  Hardware cost is a limitation for me, but if I can do SATA I'll try.  When you say that SATA is better, is the IDE one just a little slower, or does it cause issues?

I'll have to take a look at gmirror; do you have the CF cards formatted FFS?



			
phoenix said:

> 3 GB will not be enough RAM if you want to put a lot of data into the pool, and have decent performance.  If possible, install the 64-bit version of FreeBSD and max out the RAM.  The more, the merrier.  There's no such thing as "too much RAM" in a ZFS system.



Hmm... I think you have a point here.  I've been spoiled a bit by using ZFS on high-performance SPARC-based server equipment.  I know how much RAM the ZFS ARC cache can take up, and I think the lowest amount of RAM I work with on a given server is 8 GB.  I was planning on doing 64-bit FreeBSD.  The issue is that I am (or maybe now WAS) planning on using an old motherboard that simply can't hold more.

Does anyone have experience doing ZFS on similarly old hardware?  I'm looking at a 2-5 TB pool's worth of data, striped and mirrored (not RAID-Z, which I realize takes even more resources).  Like I said, dealing with a single core amd64 processor, 3GB of RAM, and plain PCI card SATA controllers.  I'm not expecting much in performance, but I was kind of hoping the bottleneck would be the Ethernet network link.

I could just try it on the old hardware and see what happens, but that's almost worse on the wallet, because I do need to buy some stuff for *this* system (such as the PCI SATA cards); if the system is just too slow/old then I may as well just get new *everything* now. (and in that case probably end up with PCIx SATA cards)



			
phoenix said:

> I'd recommend using gpart(8) to create a partition starting at the 1 MB boundary, covering the rest of the disk, and label the partition with a name based on where the disk is physically located in the chassis.  Then use the /dev/gpt/<labelname> entries to create the pool.  You'll get output similar to this:
> 
> ...
> 
> Then, if you run into any issues with dying disks, it'll be easy to figure out exactly which one.



Good point.  I hadn't thought that far ahead yet.  Maybe I should order a spare too.



			
phoenix said:

> FreeBSD's version of ZFS does not include integrated NFS or CIFS support.  The *sharenfs* property on FreeBSD just gets written out to /etc/zfs/exports verbatim, and that file is passed to the normal NFS daemons.  The *sharesmb* property doesn't do anything on FreeBSD; you have to install and use Samba if you want to export filesystems via SMB/CIFS.



Thanks for the info on how NFS works.  It seems that from what I'm reading, NFS in FreeBSD is userland-daemon based rather than kernel based.  I'm comfortable with Samba.  



			
phoenix said:

> No idea about Blu-Ray support, never touched a Blu-Ray disc, let alone a drive.



The funny thing is I've never touched a Blu-Ray disk either!  I just figured it seems to be the best inexpensive backup option at this point.  $1000+ for a tape drive is outside my budget.

Thanks for your ideas!


----------



## ctengel (Jan 3, 2012)

As a quick update, I just read in another thread that FreeBSD 9.0-RELEASE is being built now, so I will surely go with that over 8.2-RELEASE.  I still need to learn more about the best practices for staying up to date with 9.0-STABLE (or RELENG_9), but at least now I know my starting point and don't have to worry about sacrificing 9 features for 8 stability.


----------



## vermaden (Jan 3, 2012)

ctengel said:

> Hmm, interesting.  I hadn't even thought through using ZFS for my boot CF card until you mentioned it.


Personally I use UFS for / and ZFS for everything else, but ZFS will work fine on CF.

About manageBE: it's not even in Ports, mate; it's just a script/side project.



> I need to read up a bit more on how exactly FreeBSD upgrades take place (from what I can tell, it seems like I'd want to go from one -RELEASE to the next -RELEASE for this box, as I don't need this box to get new stuff terribly fast, aside from security fixes, which I assume come out for any non-EOLed RELEASEs, correct?) to figure out whether I would want to bother with this.



Read this one, mate: http://www.daemonforums.org/showthread.php?t=6296

If You want to stay at *-RELEASE, then it is even easier: just use the freebsd-update utility, which does binary updates/upgrades and fetches security patches.
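A sketch of both cases with freebsd-update(8):

```
# track security/errata fixes for the release you are already on
freebsd-update fetch
freebsd-update install

# jump to the next release when it comes out
freebsd-update -r 9.0-RELEASE upgrade
freebsd-update install    # run install again after the reboot to finish
```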



> The idea of a ZFS root does not seem to be supported by either sysinstall (8.x) or bsdinstall (9.x) directly, so maybe as someone new to FreeBSD it may not be the best idea.


The installer has sucked since 1410 A.D., if I recall correctly. If You want to install FreeBSD on ZFS, the easiest approach is to download PC-BSD and install FreeBSD from that (yes, it's possible to install PLAIN FreeBSD using the PC-BSD DVD install medium); the other method is doing it by hand, which is not that hard anyway: http://forums.freebsd.org/showthread.php?t=12082

> Thanks for that; that's the sort of summary I was looking for.  Sometimes it's hard to track down exactly what's new in a new release that hasn't even been released yet!

The official Release Notes for 9.0 haven't been published yet either; maybe You will also find some interesting info there 



> My understanding is that FreeBSD has a FUSE implementation available in ports, so if I had an ext4 implementation for FUSE, that would work, right?



In theory, yes; fuse-ntfs and fuse-sshfs work OK with fusefs from Ports.



> Cool.  I don't know if I would do it that small (I had issues trying to bsdinstall 9-RC3 on a 512 MB virtual disk; 1 GB worked though!)


It's possible with a 'manual' install (link above in that post): first You unpack the kernel, then You remove the debug symbols at /boot/kernel/*.symbols, which strips the kernel from about 300 MB down to about 50 MB. Generally, a base system containing kernel + base + man pages uses about 240 MB.




> (I'm still learning the FreeBSD directory tree, but I guess /usr and /var can safely be sent off?  Assuming I can get the installer to put them elsewhere or install ports and so forth after.)


Check hier(7); it should provide the needed info.



> Just out of curiosity, do you have your CF card formatted UFS or ZFS?


UFS.




> Thanks for your quick initial reply and I look forward to seeing what you have to say about the rest!


You're welcome, so here we go with the rest 




> I'm using existing old AMD Athlon64-based hardware, single core; the motherboard is probably an MSI KT4V. I'm procuring enough DDR RAM to max it out at about 3 GB. Going to throw in an old PCI graphics card just for install/maintenance purposes. I had issues with getting Linux to play nice with the on-board NIC. If I have the same issue with FreeBSD, I have some PCI NICs, and I might jump to 1 Gbps anyway.


This one should work fine, but better to just boot FreeBSD there and find out.



> For the OS root/boot disk (and probably a mirror too), I think I'm going to go with an IDE-CF adapter; that way I'm not filling up a HD bay and I can be sure the BIOS can boot off it. (vs a PCI SATA card) I don't see any reason for these to be ZFS, but they can be. UFS or FFS is fine with me. If I decide to mirror them though I guess I need VVM and/or GEOM? (I'm still reading up on these...)


GEOM mirror, gmirror(8), will be fine; it does RAID1 across the provided devices.
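A minimal sketch, assuming the two CF cards show up as ad0 and ad1:

```
# create the mirror and make sure the module loads at boot
gmirror label -v gm0 ad0 ad1
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# then format and use /dev/mirror/gm0 like any other disk
newfs /dev/mirror/gm0
```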



> Also, any recommendations on the minimum size of the CF cards? While I do want the OS separate from storage, I'm OK with throwing in a ZVOL for swap on the main storage array.


A reasonable minimum would be 1-2 GB; 4 GB will be plenty, 8 GB more than enough 
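The ZVOL-for-swap part of the plan is also simple; the pool/volume names are examples:

```
# carve a 2 GB volume out of the data pool and swap on it
zfs create -V 2g tank/swap
swapon /dev/zvol/tank/swap

# make it stick across reboots with an /etc/fstab line:
# /dev/zvol/tank/swap  none  swap  sw  0  0
```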



> Despite the fact that I am playing around with a 9 RC, I'd like to use the latest available production release (8.2) for the NAS. I'm hoping that none of the features I plan to use (see below) are new in 9... if not, I can consider 9 or hold off until 9.0 is released.


9.0-RELEASE will be released this Friday.



> For the storage array, I'm probably doing a stripe of sets of two mirrored drives, of sizes 2-3 TB. They'll be attached to two SATA controllers on two PCI cards. (I'll consult the FreeBSD HCL before buying.) These will all be one ZFS zpool. Is there any point to partitioning them at all (fdisk? gpt?) or doing any kind of LVM? (VVM? GEOM?)


No need, use whole disks for ZFS.
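With whole disks, the stripe-of-mirrors pool is one command; the device names are examples:

```
# two-way mirrors, striped together by listing several mirror vdevs
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# later, grow the stripe by adding another mirror pair
zpool add tank mirror ada4 ada5
```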



> I am assuming all of the Solaris options for zpool and zfs are supported in FreeBSD? (aside from, of course, some of the new v33 features introduced in Solaris 11) Any gotchas for things that don't work as in Solaris?


I do not know any other differences between Illumos ZFS and FreeBSD ZFS besides the ones I already mentioned.




> Initially, this unit will primarily only be a NAS, sharing via SMB/CIFS and NFS. My understanding is that while ZFS can be set to export via NFS automatically, the CIFS capability hasn't yet been set up on FreeBSD, but this isn't an issue for me because I know my way around Samba. (I'm assuming that Samba on FreeBSD has all the features of Samba on Linux.)


Samba works well; I also use it without problems.



> One thing I'd like to do is put in a BluRay burner as a way of doing offsite backups using "zfs send." For me, CD/DVD burning on Linux and Solaris can sometimes be a pain (less so on Linux now than it was a number of years ago). I have never done any work with BluRays, but apparently cdrtools now supports that. Is this known to work well in FreeBSD?


I haven't used BluRay yet, not only with FreeBSD but generally.



> Another thing maybe this will be used for in the future is as an iSCSI target. It seems that while targets cannot be setup in ZFS, I can make a ZVOL and setup a target daemon from ports. Also, in the more distant future, I may like to perhaps reach additional storage on another array directly via iSCSI, but it seems that FreeBSD has built in iSCSI initiator functionality so that is good.


True.



> Assuming this works well, I would like to migrate some of the "always-on" stuff from what is now a distinct Linux box onto this new FreeBSD box, to save power consumption. This would include an apache daemon, a bittorrent client, and that sort of thing. Everything I would want seems to be available as ports. Any reason why these can't all be installed onto their own dataset on the zpool with the NAS data? (as opposed to the CF card with the base OS)


None against; it will work fine.



> can I read from ext2, ext3, ext4, FAT16, FAT32, NTFS, and reiserfs?


Dunno about ext4, as I mentioned earlier. FAT16 should work, but I have not seen such a thing in a long time. I also dunno about reading reiserfs; there was an effort to port at least read functionality, but I have not tried it either.



> Any compatibility issues with reading Solaris-created UFS or ZFS?


If the FreeBSD ZFS version is equal or higher, then You should be able to use that ZFS; UFS probably won't work.



> Can FreeBSD understand partition tables/slices from all the other major OSs?


Generally yes, it supports primary/logical/extended partitions.



> Do ports usually have to be reinstalled after an upgrade?


Definitely not. For example, when You upgrade from 8.1-RELEASE to 8.2-RELEASE (or even 8.2-STABLE), the installed packages will work as before; the only exception is packages that have kernel modules (virtualbox/fuse), which may need to be reinstalled.



> Maybe zfs for rootdisk isn't a bad idea if I can snapshot prior to upgrade.


Many seem to create a small mirrored ZFS pool for the root and another 'big' pool for all the rest, and that way it will also work; it should be a good idea if You want to use manageBE.


----------



## bsus (Jan 3, 2012)

You should know that ZFS on root will provide more flexibility, but it still provides less I/O performance; snapshots are also possible with UFS2. I would also recommend UFS2 because the CF card is already a little slow, and adding ZFS might make for a slow system.


----------



## hopla (Jan 3, 2012)

phoenix said:
			
		

> I'd recommend using gpart(8) to create a partition starting at the 1 MB boundary, covering the rest of the disk, and label the partition with a name based on where the disk is physically located in the chassis. Then use the /dev/gpt/<labelname> entries to create the pool. You'll get output similar to this:





			
				vermaden said:
			
		

> No need, use whole disks for ZFS.



Ok, so who is right?

A year ago everyone said 'use whole disks' and I would have agreed. But now I'm not so sure anymore. Partitioning/labeling avoids problems with slightly smaller replacement drives, helps identify drives in the array and - if I'm not mistaken - partitioning with a 1MB offset will also avoid problems with the large sector sizes found in Advanced Format drives and SSDs (can someone confirm this, or is the gnop trick still required?).

Where are the disadvantages?
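For reference, the partition-and-label approach being debated would look roughly like this. A sketch only; the device name `ada0` and the label `bay1` are assumptions:

```shell
# Sketch: GPT partition aligned to 1 MB, labelled after its drive bay.
gpart create -s gpt ada0
gpart add -t freebsd-zfs -a 1m -l bay1 ada0

# The labelled partition then appears under /dev/gpt/ and the pool is
# built from the labels rather than raw device names, e.g.:
#   zpool create tank mirror /dev/gpt/bay1 /dev/gpt/bay2
```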


----------



## bsus (Jan 3, 2012)

> can someone confirm this, or is the gnop trick still required


If you mean the gnop trick to use Advanced Format drives: yes, this is still required. But it could be that FreeBSD-9.0 will "fix" this.


----------



## hopla (Jan 3, 2012)

bsus said:
			
		

> If you mean the gnop trick to use the advanced format, yes this is still required. But it could be that FreeBSD-9.0 will "fix" this



Hmm, then I'm not understanding the difference *between* the '*ashift=12*' that ZFS uses when it detects a 4k drive (either because it really is one and advertises as one, or because we force it to using gnop) *and* just *offsetting your partition 1MB* (which is a multiple of 512 and 4k, so it should mathematically start on a 4k boundary, right?).
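The arithmetic behind the 1MB offset can be checked in any POSIX shell; 1 MiB divides evenly by both sector sizes, so a partition starting there is aligned either way:

```shell
# 1 MiB is a whole multiple of both 512-byte and 4096-byte sectors.
offset=$((1024 * 1024))
echo "remainder for 512B sectors:  $((offset % 512))"
echo "remainder for 4096B sectors: $((offset % 4096))"
# both remainders are 0, i.e. the 1 MiB boundary is aligned for either
```

Alignment alone, though, is a separate question from what block size ZFS itself uses internally, which is where ashift comes in.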


----------



## bsus (Jan 3, 2012)

> and just offsetting your partition 1MB (which is a multiple of 512 and 4k, so it should mathematically start on a 4k boundary right?).


This should work too, but I haven't tried it yet.
The thread starter could try both and report back.


----------



## vermaden (Jan 3, 2012)

hopla said:
			
		

> Ok, so who is right?
> 
> A year ago everyone said 'use whole disks' and I would have agreed. But now I'm not so sure anymore. Partitioning/labeling avoids problems with slightly smaller replacements drives, helps identify drives in the array and - if I'm not mistaken - partitioning with a 1MB offset will also avoid problems with the large sector sizes found in Advanced Format drives and SSDs (can someone confirm this, or is the gnop trick still required?).
> 
> Where are the disadvantages?



I also 'label' all my disks with glabel(8), but I have not used 4K sector disks, just 'classic' 512B sector disks. The GPT partition + 1MB boundary can probably be used to avoid the 'does not fit' problem of the 4K drives, but I do not have any experience with 4k drives, so ask *phoenix* for details.


----------



## phoenix (Jan 3, 2012)

hopla said:
			
		

> Hmm, then I'm not understanding the difference *between* the '*ashift=12*' that ZFS uses when it detects a 4k drive (either because it really is one and advertises as one, or because we force it to using gnop) *and* just *offsetting your partition 1MB* (which is a multiple of 512 and 4k, so it should mathematically start on a 4k boundary, right?).



ashift sets the minimum block size used by ZFS.  ashift=9 sets the minimum to 512 Bytes.  ashift=12 sets the minimum size to 4096 Bytes.

You cannot add a true 4KB disk to an ashift=9 vdev.  But you can add a 512B disk to an ashift=12 vdev.

Hence, to future-proof your pool, you should create your vdevs using 4KB sectors (whether via hardware or gnop).  Performance on 512B drives will not be affected.

Aligning things via a partition at 1 MB just makes things nicer, and guarantees that everything is lined up for 512B and 4096B accesses.  And provides future-proofing for any other sector sizes coming down the pipe (and works for SSDs as well, which have all kinds of funky write page sizes).

Thus, combining the two gives you the best of all worlds, and future-proofs the pool.
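The gnop trick mentioned above, sketched out (the `/dev/gpt/bay*` names and pool name are assumptions): wrap one provider in a gnop(8) device that advertises 4096-byte sectors, create the pool through it, then drop the gnop, since ashift is fixed at vdev creation time.

```shell
# Force ashift=12 at pool creation via a temporary 4K gnop provider.
gnop create -S 4096 /dev/gpt/bay1
zpool create tank mirror /dev/gpt/bay1.nop /dev/gpt/bay2

# The .nop device is only needed at creation time:
zpool export tank
gnop destroy /dev/gpt/bay1.nop
zpool import tank

# zdb tank | grep ashift    # should now report ashift: 12
```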


----------



## phoenix (Jan 3, 2012)

ctengel said:
			
		

> I need to find out if there's a way to tell Solaris 11 to use an old zpool version, but obviously that's off topic here.



You can specify the pool version when you create it.  See the zpool(8) man page for the details.



> Is there a way to get just bugfixes for a particular -RELEASE?



Yes.  If you want to do binary upgrades, then use freebsd-update(1).  It will keep you on -RELEASE and provide you with security updates.

If you want to do source upgrades, then you set your csup(1) tag to *RELENG_X_Y* (where X is the major version, and Y is the minor version), and go through a csup/buildworld cycle.

See the Handbook for details.
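The binary route boils down to a couple of freebsd-update invocations, roughly:

```shell
# Track security fixes for the -RELEASE you are on:
freebsd-update fetch
freebsd-update install

# Move to a newer release (e.g. 8.2-RELEASE -> 9.0-RELEASE):
freebsd-update -r 9.0-RELEASE upgrade
freebsd-update install    # run again after rebooting, as prompted
```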



> Hmm.  The CF boot cards were the only thing I needed to put on IDE/ATA.  (except for maybe a temporary CD-ROM to boot the FreeBSD install disk from)  The motherboard has no SATA controller, so I'm not sure if I can boot off the separate PCI card based SATA controllers I plan to install.  Hardware cost is a limitation for me but if I can do SATA I'll try.  When you say that the SATA is better, is the IDE one just a little slower, or does it cause issues?



The IDE adapter is slow, to say the least.  And it doesn't work with DMA, even though the CF cards show DMA support.  And they occasionally give errors when doing heavy writes (like during an installworld).

The SATA adapter doesn't have any of those issues.

It might be the adapters we are using, though.



> I'll have to take a look at gmirror; do you have the CF cards formatted FFS?


Yes.  Personally, I don't trust ZFS-on-root yet, as the diagnostic/repair tools aren't quite there yet.  If anything goes wrong with the ZFS pool, then you have to resort to LiveCDs and whatnot to fix.  With / on UFS, you can always get to single-user mode to do repairs/diagnostics.  And it provides a very nice separation between "OS" (on UFS) and "data" (on ZFS).

Plus, ZFS would be kind of "heavy" for a pair of 2-ish GB CF disks.  



> Does anyone have experience doing ZFS on similarly old hardware?  I'm looking at a 2-5 TB pool's worth of data, striped and mirrored (not RAID-Z, which I realize takes even more resources).  Like I said, dealing with a single core amd64 processor, 3GB of RAM, and plain PCI card SATA controllers.  I'm not expecting much in performance, but I was kind of hoping the bottleneck would be the Ethernet network link.



You can run ZFS on systems with as little as 1 GB of RAM without too many issues, but performance will not be that great, as the ARC will be tiny (and you have to do a lot of manual tuning).  My home server only has 2 GB of RAM and runs 32-bit FreeBSD, but the pool is only 1.0 TB in size (4x 500 GB SATA disks in 2x mirror vdevs).  3 GB will work, but the more you can add, the better things will run.

Just do *not* enable dedupe on any filesystems with that little RAM.    Horrible things will happen.  



> Thanks for the info on how NFS works.  It seems that from what I'm reading, NFS in FreeBSD is userland-daemon based rather than kernel based.  I'm comfortable with Samba.



FreeBSD has an in-kernel NFS daemon.  It's just not integrated into ZFS.


----------



## hopla (Jan 3, 2012)

phoenix said:
			
		

> ashift sets the minimum block size used by ZFS.  ashift=9 sets the minimum to 512 Bytes.  ashift=12 sets the minimum size to 4096 Bytes.
> 
> You cannot add a true 4KB disk to an ashift=9 vdev.  But you can add a 512B disk to an ashift=12 vdev.
> 
> ...



After reading this I still didn't quite understand it, so I went out to _investigate_!

The key to this is that ZFS uses *variable* block sizes! Hence where the '*minimum* block size' comes in to play and why it's so important. Btw: the 'recordsize' setting on ZFS filesystems refers to the *maximum* blocksize that the FS can use. So if you set recordsize=8k, actual used blocksizes will vary between 4k (with ashift=12) and 8k (with power-of-2 steps I guess).

(Btw2: the ashift value means a power of 2, 2^9 = 512, 2^12=4096)
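In other words, ashift is just an exponent, which a shell one-liner confirms:

```shell
# block size = 2^ashift bytes
echo $((1 << 9))     # 512  (ashift=9)
echo $((1 << 12))    # 4096 (ashift=12)
```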

When using the whole disk, that's all you have to do. Things will be correct. However, *if* you use partitions, you also have to make sure they are aligned properly!  No use in fixing your minimum block size if your first block starts off in the middle of an actual disk sector.

With regards to old hardware: we are running a 2TB zpool on a SuperMicro Core 2 Duo, 4GB RAM, attached to a DAS (Dell MD1000) via an LSI PCI-X (that's X, not Express, unfortunately) SAS card (in IT mode). It has 6 x 750GB disks in 3 mirror vdevs. It does daily backups of our other servers, with snapshots going back a year already. And it's running without a sweat. In fact it also runs 2 Postgres instances (on the zpool, on separate filesystems with recordsize=8k): one replicating another Postgres over the internet and one for our monitoring tool (which also runs on that same server).
The filesystems the backups land on (a separate one per backed-up server) also have compression enabled (lzjb; gzip is terrible), which doesn't seem to add any extra load. In fact, as others will tell you, it even speeds up writes considerably!

I think the key very much is in using striped mirrors (which also give you better options for expanding the pool; we only need to replace 2 drives and our pool is already bigger) and not the fancy RAIDZ options. I did some tests with RAIDZ when building the server and it wasn't as fast as the mirrors. RAM usage I cannot say, since I didn't really fill the pool when testing.
We get sequential reads up to 200MB/s if I'm not mistaken, which adequately fills a 1Gb link.

One last piece of advice: I've read everywhere that it's best to use 64bit FreeBSD for ZFS, if you can, even if you will never have more than 4GB of RAM. Since ZFS does a lot of 64 and 128 bit calculations.


----------



## monkeyboy (Jan 3, 2012)

Just one data point here, concerning NTFS via fuse, on at least FreeBSD 8.2. I do not consider it ready for prime time. In my hands it did not function reliably for large-scale usage. For occasional file r/w, it was fine. But it did not tolerate the copying of GB-sized file sets very well. To get it to be reliable at all, I had to slow it down by turning on verbose mode.


----------



## bbzz (Jan 4, 2012)

phoenix said:
			
		

> Hence, to future-proof your pool, you should create your vdevs using 4KB sectors (whether via hardware or gnop)...



Or, if you are going to encrypt whole disk with *geli* then *-s4096* switch does the trick (encrypts in 4k blocks). So if you encrypt you only need this.
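Sketched out (the device name `ada0` and pool name are assumptions; geli(8) prompts for the passphrase):

```shell
# geli with 4096-byte sectors; the resulting .eli provider presents
# 4K sectors to ZFS, so no gnop is needed on top of it.
geli init -s 4096 /dev/ada0
geli attach /dev/ada0

# build the pool on the encrypted provider:
#   zpool create tank /dev/ada0.eli
```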



			
				phoenix said:
			
		

> You can run ZFS on systems with as little as 1 GB of RAM without too many issues...



I use ZFS on pretty much all my machines. One of them is an Asus Eee laptop with 1GB RAM and an Atom processor. I run ZFS on it and it has never crashed, yet.

The only departure from default configuration is in loader.conf:


```
vm.kmem_size="512M" 
vm.kmem_size_max="512M"
vfs.zfs.arc_max="160M"
vfs.zfs.vdev.cache.size="5M"
vfs.zfs.prefetch_disable="0"
```


----------



## ctengel (Jan 4, 2012)

Thanks to all, glad I started something interesting! (I'll respond to the ZFS partitioning (or lack thereof) and hardware issues in a separate post).

*ROOT FILESYSTEM, SNAPSHOTS, BOOTENV*



			
				vermaden said:
			
		

> Personally I use UFS for / and ZFS for everything else, but ZFS will work fine on CF.
> 
> About manageBE, it's not even in the Ports, mate, it's just some script/side project.





			
				bsus said:
			
		

> You should know that ZFS on root will provide more flexibility, but it still provides less I/O performance; snapshots are also possible with UFS2. I would also recommend UFS2 because the CF card is already a little slow, and adding ZFS might make for a slow system.





			
				phoenix said:
			
		

> Yes. Personally, I don't trust ZFS-on-root yet, as the diagnostic/repair tools aren't quite there yet. If anything goes wrong with the ZFS pool, then you have to resort to LiveCDs and whatnot to fix. With / on UFS, you can always get to single-user mode to do repairs/diagnostics. And it provides a very nice separation between "OS" (on UFS) and "data" (on ZFS).
> 
> Plus, ZFS would be kind of "heavy" for a pair of 2-ish GB CF disks.



I'm leaning towards UFS2 (which is what people mean when they say UFS or FFS around here, right?) for the CF cards/root filesystem and boot filesystem.

manageBE is a cool idea, but like I mentioned before I have my reservations for now. 

I was not aware of UFS2's snapshotting capabilities.  I'll have to read up on that.  I'm guessing it works differently from ZFS's COW-based snapshots. In any event the UFS2 snapshotting could be handy in an upgrade scenario I guess!

*HIERARCHY BREAKUP*



			
				vermaden said:
			
		

> Check man hier, should provide needed info.



Indeed it does; so based on my reading of that manpage, my setup would look like:


/ - own partition on CF cards
/bin - part of /
/boot - first partition on CF cards
/cdrom - just a mountpoint in /
/compat - just a symlink in /
/dev - just a mountpoint in /
/dist - just a mountpoint in /
/etc - part of / or separate on CF
/lib - part of /
/libexec - part of /
/media - directory of mountpoints in /
/mnt - just a mountpoint in /
/proc - just a mountpoint in /
/rescue - part of / (if I'm reading rescue(8) correctly)
/root - I guess this could go either way, but most of what I do would be on a nonprivileged account, so this wouldn't be big (can fit on CF card) and maybe good place for any scripts I cook up for dealing with array maintenance (so obviously better on CF card)
/sbin - part of /
/tmp - I've read a bit about tmpfs not playing nice with ZFS on FreeBSD; if I can't get it to work in that sort of way some other way, then probably best to be on the storage zpool and just get cleared out by init scripts or cron job; obviously no need for snapshots 
/usr - haven't decided yet
/usr/local - definitely on storage zpool to conserve space on small CF card
/var - probably on storage zpool (or maybe better off on card?  I'd want to see logs if storage zpool not importing, right?  I don't know.)

One thing I am a bit confused about: /home or /usr/home or /export/home or /opt/home is not mentioned in hier(7)...

*LVM*



			
				vermaden said:
			
		

> GEOM Mirror will be fine gmirror, it does RAID1 of provided devices.



OK that sounds good.  I'll probably use that for the CF cards if I do go ahead and get two of them.  I was getting a bit confused reading the handbook with talk of both GEOM and VVM; wasn't sure which was considered the best to use these days.
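A gmirror(8) setup for the pair of CF cards would look roughly like this; the device names `ad0`/`ad2` are assumptions:

```shell
# RAID1 of two CF cards via gmirror; the result appears as
# /dev/mirror/gm0 and is used like a plain disk.
gmirror label -v gm0 /dev/ad0 /dev/ad2
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# partition and newfs /dev/mirror/gm0 afterwards as usual
```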

*ZPOOLS*



			
				vermaden said:
			
		

> Many seem to create a small mirrored ZFS pool for the root and another 'big' pool for all the rest, and that way it will also work; it should be a good idea if You want to use manageBE.



Yep that's standard practice for my company with Solaris 10; we always have the "rpool" and the "apool."  In this case the CF card(s) would be my rpool (although not ZFS in my case) and the pseudo-RAID10 (stripe of mirrored pairs) disk array makes up the apool.

*CF*



			
				vermaden said:
			
		

> A reasonable minimum would be 1-2GB; 4GB will be plenty, 8GB more than enough



I'm thinking I will probably go for a 2 GB one, or maybe a mirrored pair of two (would hate to go through all this effort to make my data safe and redundant but not be able to get to it because a CF card went bad!)



			
				phoenix said:
			
		

> It might be the adapters we are using, though.



Do you know what makes/models of adapters and cards you are using? (so I avoid the IDE one if I go that way, and try to get the SATA one if I go that way!)

*UPGRADES/RELEASES*



			
				vermaden said:
			
		

> Read this one, mate: http://www.daemonforums.org/showthread.php?t=6296
> 
> If You want to stay at *-RELEASE, then it would be even easier, just use freebsd-update utility that does binary updates/upgrades and fetches security patches.





			
				phoenix said:
			
		

> Yes. If you want to do binary upgrades, then use freebsd-update(1). It will keep you on -RELEASE and provide you with security updates.
> 
> If you want to do source upgrades, then you set your csup(1) tag to RELENG_X_Y (where X is the major version, and Y is the minor version), and go through a csup/buildworld cycle.
> 
> See the Handbook for details.



To be honest I'm still wrapping my head around it, but from what I'm reading I'm kind of leaning towards sticking with RELEASE for base, and then STABLE for the ports/packages.  Is this possible?

Is there an advantage performance wise in building everything from source? (obviously aside from the resources necessary to compile!)  I seem to remember a bit of one on Gentoo but over time it seemed less significant.  What's the consensus on FreeBSD?  And if there is no performance benefit, do people just do it to stay more up-to-date?

*MANUAL INSTALL*



			
				vermaden said:
			
		

> The installer sucks since 1410 A.D. if I recall correctly, if You want to install FreeBSD on ZFS, easiest approach is to download PC-BSD and install FreeBSD (yes its possible to install PLAIN FreeBSD using PC-BSD DVD install medium), other method is doing it by hand which is not that hard anyway: http://forums.freebsd.org/showthread.php?t=12082
> ...
> It's possible to do a 'manual' install (link above in that post): first You unpack the kernel, then You remove the debug symbols at /boot/kernel/*.symbols, which strips the kernel from about 300MB to about 50MB. Generally the base system containing kernel + base + man pages uses about 240MB.



Good to know this is possible and thanks for the guide.  I've done Gentoo Linux with no installer (this is the normal way) a number of times, but even so while it would be a good learning experience for me, I'm still leaning towards going with bsdinstall since this will be my first install. (BTW, do you consider bsdinstall to also suck, or were you just referring to sysinstall?  I don't really have any feelings on either! Just trying to understand.)

*UPGRADES AND PORTS*



			
				vermaden said:
			
		

> Definitely not. For example, when You upgrade from 8.1-RELEASE to 8.2-RELEASE (or even 8.2-STABLE), the installed packages will work as before; the only exception is packages that have kernel modules (virtualbox/fuse), which may need to be reinstalled.


That is good to know.  I'm still getting used to the base vs. ports paradigm (as opposed to the "everything is a deb/ebuild/rpm" mentality), but it's good to know that updating system libraries generally doesn't break everything else.

*TESTING*



			
				vermaden said:
			
		

> This one should work fine, but better just boot FreeBSD there and find out.



I'm thinking very soon (maybe when 9.0 is released) I'll install FreeBSD on the hardware I have (as opposed to a VirtualBox VM) to get a better idea of how things will run.  Need the controller cards and drives and so forth to really get an idea of NAS performance, but even when I have those it would probably make sense that I play with it for at least a week or two before trusting any data to it; in case I decide to go with a different filesystem or layout.


*SUPPORT FOR OTHER FILESYSTEMS*



			
				vermaden said:
			
		

> Dunno about ext4, as I mentioned earlier. FAT16 should work, but I have not seen such a thing in a long time. I also dunno about reading reiserfs; there was an effort to port at least read functionality, but I have not tried it either.



FYI, I think reiserfs(5) is a kernel module that does it.



			
				monkeyboy said:
			
		

> Just one data point here, concerning NTFS via fuse, on at least FreeBSD 8.2. I do not consider it ready for prime time. In my hands it did not function reliably for large scale usage. Occasional file r/w, it was fine. But it did not tolerate the copying of GB sized file sets very well. To get it to be reliable at all, I had to slow it down by turning on verbose mode.



Hmm...Interesting.  I've used ntfs-3g (a FUSE driver) on Linux for read/write without issue for some time now.  Any reason why it would be not as good on FreeBSD?

*ZPOOL VERSION*



			
				phoenix said:
			
		

> You can specify the pool version when you create it. See the zpool(8) man page for the details.



Very interesting that I have not seen that until now; perhaps that feature wasn't always there.  In Solaris 10-land we have often jumped through hoops to do that... Thanks for pointing it out!

Thanks again!


----------



## vermaden (Jan 4, 2012)

> [*]/dist - just a mountpoint in /




```
% ls -l /dist
ls: /dist: No such file or directory
```
(it's not needed)



> [*]/media - directory of mountpoints in /




```
% ls -l /media
ls: /media: No such file or directory
```
(it's not needed)



> [*]/proc - just a mountpoint in /




```
% ls -l /proc
ls: /proc: No such file or directory
```
(it's not needed)



> [*]/var - probably on storage zpool (or maybe better off on card?  I'd want to see logs if storage zpool not importing, right?  I don't know.)


The only part that makes sense on / is /var/db/pkg (the list of installed packages). I once kept /var on ZFS and /var/db/pkg on a separate UFS partition on the CF card; a big PITA generally. If You decide to go with /var on ZFS, keep the whole /var there.



> One thing I am a bit confused about: /home or /usr/home or /export/home or /opt/home is not mentioned in hier(7)...


By default it's /usr/home with /home -> /usr/home, but that can be modified in many ways: for example a separate dataset with mountpoint set to /home, or a separate dataset under /usr with its own mountpoint, or part of /usr using the default symlink ... it's generally up to You; there are no bad choices here. It can also be /opt/home or /export/home if You want it that way.
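For example, the separate-dataset variant, sketched with an assumed pool name `tank`:

```shell
# A dataset on the storage pool mounted directly at /home:
zfs create -o mountpoint=/home tank/home

# optionally one dataset per user (username is hypothetical):
zfs create tank/home/ctengel
```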



> OK that sounds good.  I'll probably use that for the CF cards if I do go ahead and get two of them.  I was getting a bit confused reading the handbook with talk of both GEOM and VVM; wasn't sure which was considered the best to use these days.


If I recall correctly VVM still works, but GEOM/gmirror is currently probably the most used solution for a mirror on /.



> To be honest I'm still wrapping my head around it, but from what I'm reading I'm kind of leaning towards sticking with RELEASE for base, and then STABLE for the ports/packages.  Is this possible?


Yes, many people, especially on desktops, follow that route, but RELEASE gets only security fixes; there are no bug fixes for RELEASE. Also, packages that use kernel modules can break as You drift away from the RELEASE date, but compiling those several from Ports does not hurt that much.



> Is there an advantage performance wise in building everything from source?


TL;DR --> generally none, You only get occasional stability issues 

Default FreeBSD CFLAGS are: -O2 -pipe -fno-strict-aliasing, which provide more than enough optimization.



> I'm still leaning towards going with bsdinstall since this will be my first install. (BTW, do you consider bsdinstall to also suck, or were you just referring to sysinstall?  I don't really have any feelings on either! Just trying to understand.)


The *sysinstall* sucks because it has several bugs that are hard to reproduce and fix; it also has its limitations, like not being able to install using GPT, ZFS, Geli, Gmirror, GStripe, etc. The new *bsdinstall* (and even the developers mention that) is a temporary solution, and yes, it also sucks, only less than *sysinstall*; it also is not able to install FreeBSD using ZFS, Geli, Gmirror, GStripe.

I personally consider crippled every installer that is not able to install FreeBSD in the most useful/powerful ways, which of course are ZFS/Gmirror/Geli/GPT.

In short, if You know what You are doing, do it the 'manual' way; after several tries at virtualbox You will feel at home. If You do not want to use the 'manual' way, use the PC-BSD installer to install FreeBSD.



> FYI, I think reiserfs(5) is a kernel module that does it.


Good to know. I last read about its development but haven't tracked since whether it works or not.



> Hmm...Interesting.  I've used ntfs-3g (a FUSE driver) on Linux for read/write without issue for some time now.  Any reason why it would be not as good on FreeBSD?


Fuse not fully ported? (or there are bugs?)

For similar issue _Truecrypt_ is not ported to FreeBSD (no fully functional FUSE interface).


----------



## phoenix (Jan 4, 2012)

ctengel said:
			
		

> / - own partition on CF cards
> /bin - part of /
> /boot - first partition on CF cards


No.  /boot is just a directory on FreeBSD, not a separate filesystem.  Leave it as part of / or else you'll have to do a lot of hoop-jumping to make it work.



> /etc - part of / or separate on CF


Never, ever, ever make /etc a separate filesystem.  Leave it as part of /.



> /root - I guess this could go either way, but most of what I do would be on a nonprivleged account, so this wouldn't be big (can fit on CF card) and maybe good place for any scripts I cook up for dealing with array maintenance (so obviously better on CF card)


Again, do not separate this from /.



> /usr - haven't decided yet


This is a personal choice.  Personally, I leave /usr as part of /, so that the entire OS is together in one filesystem, separate from the ZFS pool.  Makes it much easier to work in single-user mode.

I tend to put /usr/local, /usr/ports onto ZFS, though.  And, sometimes, even /usr/src and /usr/obj.



> /var - probably on storage zpool (or maybe better off on card?  I'd want to see logs if storage zpool not importing, right?  I don't know.)


Again, personal choice.  I tend to leave /var on UFS, but put /var/log onto the pool.



> One thing I am a bit confused about: /home or /usr/home or /export/home or /opt/home is not mentioned in hier(7)...


Again, personal choice.  The default setup is /home -> /usr/home.  I tend to create a separate filesystem for /home to make it the same as every other Unix/Linux system.



> Do you know what makes/models of adapters and cards you are using? (so I avoid the IDE one if I go that way, and try to get the SATA one if I go that way!)


I'll have a look when I'm at work tomorrow.



> Good to know this is possible and thanks for the guide.  I've done Gentoo Linux with no installer (this is the normal way) a number of times, but even so while it would be a good learning experience for me, I'm still leaning towards going with bsdinstall since this will be my first install. (BTW, do you consider bsdinstall to also suck, or were you just referring to sysinstall?  I don't really have any feelings on either! Just trying to understand.)



sysinstall is a not-completely-horrible installer, but it's very limited (only supports MBR partitioning, only supports UFS, doesn't support any GEOM stuff).  But it also has a tonne of other features that are absolute crap (like all the post-install configuration crap).

bsdinstall is a much nicer installer, supports GPT, ZFS, GEOM, etc.  And, it's a LiveCD, so you can drop to a shell and have access to a full-fledged FreeBSD system to do anything.  So, while the TUI may not support all the GEOM features, you can always drop to a shell, configure things manually, then continue on with the TUI.  Want a gmirror-based / setup?  bsdinstall can do that.  Want a ZFS-based setup?  bsdinstall can do that.  Want some fancy GELI-encrypted setup?  bsdinstall can do that.

And the best part is that bsdinstall is *just* an installer.  There's no broken post-install configuration crap in there.


----------



## phoenix (Jan 4, 2012)

vermaden said:
			
		

> The *sysinstall* sucks because it has several bugs that are hard to reproduce and fix; it also has its limitations, like not being able to install using GPT, ZFS, Geli, Gmirror, GStripe, etc. The new *bsdinstall* (and even the developers mention that) is a temporary solution, and yes, it also sucks, only less than *sysinstall*; it also is not able to install FreeBSD using ZFS, Geli, Gmirror, GStripe.
> 
> I personally consider crippled every installer that is not able to install FreeBSD in the most useful/powerful ways, which of course are ZFS/Gmirror/Geli/GPT.



You can do all that with bsdinstall.  It's not exposed in the TUI, but at the disk partitioning step, you can drop to a shell, set it all up manually, and then carry on with the install.  Anything you can do with a live FreeBSD system, you can do with bsdinstall.

That's the power of bsdinstall.


----------



## vermaden (Jan 4, 2012)

phoenix said:
			
		

> You can do all that with bsdinstall.  It's not exposed in the TUI, but at the disk partitioning step, you can drop to a shell, set it all up manually, and then carry on with the install.  Anything you can do with a live FreeBSD system, you can do with bsdinstall.
> 
> That's the power of bsdinstall.


It's not the power of *bsdinstall*, it's the power of the 'manual' setup You do yourself; *bsdinstall* offers only GPT partitioning over *sysinstall*'s features, nothing more.


----------



## ctengel (Jan 5, 2012)

*PARTITIONING*



			
				vermaden said:
			
		

> No need, use whole disks for ZFS.





			
				bsus said:
			
		

> If you mean the gnop trick to use the advanced format, yes this is still required. But it could be that FreeBSD-9.0 will "fix" this





			
				vermaden said:
			
		

> I also 'label' all my disks with glabel but I did not used the 4K sector disks, just 'classic' 512B sector disks, the GPT partition + 1MB boundary may probably be used to avoid 'not fit problem' of the 4K drives, but I do not have any experience with 4k drives, so ask phoenix for details.





			
				phoenix said:
			
		

> ashift sets the minimum block size used by ZFS. ashift=9 sets the minimum to 512 Bytes. ashift=12 sets the minimum size to 4096 Bytes.
> 
> You cannot add a true 4KB disk to an ashift=9 vdev. But you can add a 512B disk to an ashift=12 vdev.
> 
> ...





			
				hopla said:
			
		

> The key to this is that ZFS uses variable block sizes! Hence where the 'minimum block size' comes in to play and why it's so important. Btw: the 'recordsize' setting on ZFS filesystems refers to the maximum blocksize that the FS can use. So if you set recordsize=8k, actual used blocksizes will vary between 4k (with ashift=12) and 8k (with power-of-2 steps I guess).
> ...
> When using the whole disk, that's all you have to do. Things will be correct. However, if you use partitions, you also have to make sure they are aligned properly! There's no use in fixing your minimum block size if your first block starts off in the middle of an actual disk sector.





			
				bbzz said:
			
		

> Or, if you are going to encrypt the whole disk with geli, then the -s4096 switch does the trick (encrypts in 4k blocks). So if you encrypt, you only need this.



I am not doing encryption at either the block or filesystem level; this system will be slow enough as is, and there won't be much "sensitive" data. (As in, all of the data is valuable to me, but little of it would be of value to an attacker.)

I need to learn more about 512B vs 4096B block sizes on disks, what impact that has, and what size I'm likely to get.  I can see why starting a partition at 1MB is better than 512B (planning for the future), but I'm not sure why not just start at 4KB (a multiple of both 512 and 4096).  And if I understand correctly, if I forgo partitioning altogether, it won't matter, because the data will start at 0B (also a multiple of 512 and 4096)

I also think I need to read up a bit more on glabel, gnop, GEOM, GPT, and all that fun stuff... 
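Since the gnop trick and 1MB alignment come up above, here is a hedged sketch of how they combine (the device ada1 and pool name tank are assumptions, and this destroys anything on the disk):


```
# 1MB-aligned GPT partition (1MB is a multiple of both 512B and 4KB):
gpart create -s gpt ada1
gpart add -t freebsd-zfs -a 1m -l tank0 ada1

# The gnop trick: a temporary 4K-sector provider so zpool create
# records ashift=12 instead of ashift=9
gnop create -S 4096 gpt/tank0
zpool create tank gpt/tank0.nop
zpool export tank
gnop destroy gpt/tank0.nop
zpool import tank            # ashift stays 12 for the life of the vdev
zdb -C tank | grep ashift    # verify
```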

*OLD/SLOW HARDWARE*



			
				phoenix said:
			
		

> You can run ZFS on systems with as little as 1 GB of RAM without too many issues, but performance will not be that great, as the ARC will be tiny (and you have to do a lot of manual tuning). My home server only has 2 GB of RAM and runs 32-bit FreeBSD, but the pool is only 1.0 TB in size (4x 500 GB SATA disks in 2x mirror vdevs). 3 GB will work, but the more you can add, the better things will run.
> 
> Just do not enable dedupe on any filesystems with that little RAM.   Horrible things will happen.





			
				hopla said:
			
		

> With regards to old hardware: we are running a 2TB zpool on a SuperMicro Core 2 Duo, 4GB RAM, attached to a DAS (Dell MD1000) via an LSI PCI-X (that's X, not Express, unfortunately) SAS card (in IT mode). Has 6 x 750GB disks in 3 mirror vdevs. It does daily backups of our other servers, with snapshots going back a year already. And it's running without a sweat. In fact it also runs 2 Postgres instances (on the zpool, on separate filesystems with recordsize=8k): one replicating another Postgres over the internet and one for our monitoring tool (which also runs on that same server).
> The filesystems where the backups are happening (a separate one per backed-up server) also have compression enabled, which doesn't seem to add any extra load (lzjb; gzip is terrible). In fact, as others will tell you, it even speeds up writes considerably!
> 
> I think the key very much is in using striped mirrors (which also give you better options for expanding the pool, we only need to replace 2 drives and our pool is already bigger) and not the fancy RAIDZ options. I did some tests with RAIDZ when building the server and it wasn't as fast as the mirrors. RAM usage I can not say, since I didn't really fill the pool when testing.
> ...





			
				bbzz said:
			
		

> I use ZFS on pretty much all my machines. One of them is an Asus EEE laptop with 1GB RAM and an Atom processor. I run ZFS on it and it has never crashed, yet.
> 
> The only departure from default configuration is in loader.conf:



So definitely doing 64 bit, definitely realize I may have to tune some settings.  Disappointing that I probably can't do dedup, but thanks for the lzjb tip.  Originally wanted to do raidz, but actually just for sheer ease-of-management/expansion, let alone performance, decided to do the RAID10 thing, probably initially as straight RAID1.

Hardware has become, I think, my biggest remaining concern; I'm reading wildly different things about what is necessary.  In my personal experience, in the situations where I've done it with not much RAM (1-2GB) on Solaris/SPARC, I was only doing 250-300GB of data at a time. (And even so, on boxes with 3TB of storage or so, it's not uncommon at all to see the ARC climb to 50% of 32GB of RAM!)

In this situation, to start I'm looking at 2x 2-3 TB drives mirrored, with the option to add (stripe) additional mirrored pairs, for a max of 8-10 TB of data; but by the time it gets that big, it's quite possible I will have decided on new hardware anyway, so let's say max ~5 TB of data across 4 drives in a RAID10-like setup (a stripe of two mirrored pairs).
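That growth path can be sketched with zpool commands (the device labels are assumptions):


```
# start with a single mirrored pair...
zpool create tank mirror gpt/disk0 gpt/disk1

# ...later, stripe in a second mirrored pair to grow the pool
zpool add tank mirror gpt/disk2 gpt/disk3

zpool status tank   # now shows two mirror vdevs; writes stripe across both
```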

In terms of the other hardware, I'm thinking one or two PCI-attached SATA controllers (if two, then each mirror split between the two cards). I'll max the motherboard out at 3GB of RAM; the processor is a single-core AMD Athlon64 (I forget the clock speed), but I think RAM is really the weakest link in this ZFS setup.  I'd do either 100Mbps or 1Gbps Ethernet; if 100Mbps, I think I'd want THAT to be the weakest link in a typical ZFS read via NFS/CIFS scenario.

BTW, The reason this is a dilemma, and not just a simple case of "let me just try it on what I have, and if it's not fast enough, I'll upgrade" is that the existing hardware requires some investment to bring up to speed (notably the PCI SATA controllers, maxing out the DDR1 RAM, and IDE-CF adapter), and I wouldn't want to bother with that if I'm just going to have to immediately get new mobo, multi-core CPU, DDR3, PCIx SATA cards, SATA-CF adapter etc. (and not able to use the PCI SATA, DDR1, IDE-CF) On the other hand, if I can get 2 years use of the old stuff before I need to upgrade to better hardware, I wouldn't mind it.

Anyway I'll keep on looking for some more definitive info on ZFS hardware reqs; if anyone else has experience running 2-5 TB arrays on less than 4 GB of RAM I'd love to hear about it.
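For what it's worth, the kind of manual tuning people mention for low-RAM boxes usually lives in /boot/loader.conf; an illustrative fragment (the values here are placeholders to experiment with, not recommendations):


```
# illustrative low-RAM ZFS tuning -- adjust to taste
vfs.zfs.arc_max="512M"          # cap the ARC so the rest of the system keeps some RAM
vfs.zfs.prefetch_disable="1"    # prefetch is often disabled on small-RAM boxes
```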

And thanks vermaden and phoenix for the advice on filesystem layout and installers (or lack thereof :\ ) and so forth.


----------



## ctengel (Jan 5, 2012)

I've given some more thought to the ZFS hardware requirements issue, and after all the back-and-forth of reading inconsistent advice about how much RAM is needed, I basically realized three things:


I could be wrong, but despite the "1 GB of RAM per 1 TB of data" rule I've heard about, I think the load is more relevant than the data.  In my case, we are talking usually at most 3 simultaneous users, usually very light.  There will be some big backups at times, but those can be staggered and done overnight, when I won't really care if they're slow or take a while.  This NAS is intended primarily to be a central place to store big stuff that doesn't need to be accessed that much.  Anything accessed more often would also be on a user's hard drive. (Almost like a cache!)  The only exception to not caring about slowness would be maybe down the road I'd want to stream HD video to a DLNA-aware device or something like that, but that is not essential immediately.
With that being said, all the Solaris servers I'm thinking of with heavy RAM usage and so forth are running very heavy, FS-I/O-intensive loads. The Solaris system I have with 2 GB of RAM and only 300 GB of ZFS storage performs just fine for me, and I don't expect the total FS I/O on this NAS to be any higher.
Sun/Oracle basically says the minimum RAM is only 768 MB, recommended 1 GB; they also explicitly say that mirrored drives should definitely be on separate controllers, so I'll be sure to do that.  Now these minimums are intended for single-user workstations.  Heavy server loads are much higher, but I think that while my total storage requirement is pretty big (2-5 TB), my load in terms of I/O is not (and the heaviest stuff I don't care how long it takes), and therefore I should be fine with 3 GB of RAM.  All of this assumes Solaris though. (If I try asking an Oracle engineer about ZFS performance on FreeBSD, I can't see that conversation going well.)  I realize FreeBSD has most/all of the features of Solaris ZFS v28; the question I now have is whether it was ported in such a way that memory usage would be similar.

*FILESYSTEM LAYOUT*

Just a note, I guess I didn't say it explicitly before, but I'm envisioning that the CF cards will have everything needed on them for at least running the full base OS (though maybe not building it), and certainly the relevant ZFS tools; the zpool would then have (in separate filesystems, of course) any additional applications/ports/etc., and obviously user data.



			
				vermaden said:
			
		

> _/proc_
> 
> (its not needed)



Not needed or not needed and also useless?  If useless, then I think that's another thing different about FreeBSD I'll have to learn!



			
				vermaden said:
			
		

> The only part that makes sense on / is /var/db/pkg (list of installed packages), I once kept /var on ZFS and /var/db/pkg on separate UFS partition on CF card, a big PITA generally, if You decide to go with /var on ZFS, keep whole /var there



Thanks for the tip



			
				vermaden said:
			
		

> By defaults its /usr/home and /home -> /usr/home, but that can be modified in many ways, for example a separate dataset with mountpoint set to /home or a separate dataset under /usr with its mountpoint, or part of /usr with using the default symlink ... its generally up to You, there are no bad choices here, it can also be /opt/home or /export/home if You want it that way.



I realize it's arbitrary, but I just think it's interesting that it's not mentioned in the man page as part of the standard.  In any event, it will definitely be on the zpool.
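As a concrete example of the mountpoint flexibility vermaden describes (the pool name tank is an assumption):


```
zfs create -o mountpoint=/home tank/home
zfs create tank/home/ctengel     # per-user child dataset, inherits /home/ctengel
```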



			
				phoenix said:
			
		

> No. /boot is just a directory on FreeBSD, not a separate filesystem. Leave it as part of / or else you'll have to do a lot of hoop-jumping to make it work.



Good to know. Thanks!

*INSTALLATION*



			
				vermaden said:
			
		

> In short, if You know what You are doing, do it the 'manual' way, after several tries @ virtualbox You will feel as at home. If You do not want to use the 'manual' way, use the PC-BSD installer to install FreeBSD.



Doing it a few times manually in VirtualBox is probably a good idea if I decide the installer can't do what I need it to.  Maybe a stupid question, but if the PC-BSD installer is better and CAN install vanilla FreeBSD, then why not just use it as the standard installer?



			
				phoenix said:
			
		

> bsdinstall is a much nicer installer, supports GPT, ZFS, GEOM, etc. And, it's a LiveCD, so you can drop to a shell and have access to a full-fledged FreeBSD system to do anything. So, while the TUI may not support all the GEOM features, you can always drop to a shell, configure things manually, then continue on with the TUI. Want a gmirror-based / setup? bsdinstall can do that. Want a ZFS-based setup? bsdinstall can do that. Want some fancy GELI-encrypted setup? bsdinstall can do that.
> 
> And the best part is that bsdinstall is just an installer. There's no broken post-install configuration crap in there.



I think the idea of being able to drop to a shell during install is nice.  I'll have to play with that in VirtualBox also. (The 9.0-RC3 install I did on VirtualBox that I'm playing with now was with bsdinstall and basically all defaults onto a single virtual disk... just to get a feel for the OS.)

*FUSE*



			
				vermaden said:
			
		

> Fuse not fully ported? (or there are bugs?)



Good to know.  For some reason I guess I assumed FreeBSD had a fully working FUSE setup.  I'll have to do some testing. Luckily I only need read-only access on most of the weird filesystems.

*CF*



			
				phoenix said:
			
		

> I'll have a look when I'm at work tomorrow.



Thanks.  That will be good info for me to have


----------



## vermaden (Jan 5, 2012)

About ZFS and RAM ...

I am using ZFS on my home storage box with an Intel T8100 CPU and a 965GM Mini-ITX motherboard, along with 1GB of RAM. I have 2 x 2TB Seagate Low Power drives put together in a ZFS mirror for storage purposes, everything under control of 64-bit FreeBSD 8.2-STABLE (amd64). I share that 2TB ZFS pool over SAMBA/NFS protocols to the local LAN/WLAN and even use that box as a server (converting various video formats using FFMPEG and so on) ... and everything is stable as a rock. You definitely do not need a lot of RAM to use ZFS with FreeBSD; I also do not have any 'manual' limits set in /boot/loader.conf, only modules loading:


```
$ cat /boot/loader.conf
ahci_load=YES
zfs_load=YES
aio_load=YES
coretemp_load=YES
```

... and ...


```
$ uptime
 2:39PM  up 215 days,  5:37, 4 users, load averages: 0.07, 0.03, 0.01
```

But it's true that the more RAM you have, the more ZFS shines


----------



## phoenix (Jan 5, 2012)

In the IDE-CF adapter, we are using 2 GB Transcend CF disks:

```
ad0: DMA limited to UDMA33, device found non-ATA66 cable
ad0: 1911MB <TRANSCEND 20080128> at ata0-master UDMA33 
ad1: DMA limited to UDMA33, device found non-ATA66 cable
ad1: 1911MB <TRANSCEND 20080128> at ata0-slave UDMA33
```
Not sure what model of IDE-CF adapter is in the box.  I'd have to open it to find out, and that's a little hard to do with this box.  I believe it's a StarTech, though.

Hrm, looking through /boot/loader.conf on this system, it appears that DMA is now working.  Guess something changed between FreeBSD 7.0 (what was originally installed) and FreeBSD 8.2-STABLE (what's currently running).  It's still limited to UDMA33 speeds, though.

In the SATA-CF adapter, we are using 4 GB Kingston Elite Pro CF cards:

```
ad4: 3847MB <ELITE PRO CF CARD 4GB Ver2.21K> at ata2-master PIO4 SATA 1.5Gb/s
ad6: 3847MB <ELITE PRO CF CARD 4GB Ver2.21K> at ata3-master PIO4 SATA 1.5Gb/s
```

These are plugged into StarTech SATA2CF adapters.  Looking at /boot/loader.conf on this server, DMA is disabled via:


```
hw.ata.ata_dma="0"
```

which is the opposite of what I thought.

These two servers using CF disks are being retired (one has already been retired; the other is in its last month of usage).  The new storage servers use SSDs.  Much nicer to work with.


----------



## Sylhouette (Jan 5, 2012)

One more significant difference between the Solaris and FreeBSD implementations is that you can add spares, but those are not hot spares like on Solaris.
Human intervention is needed to replace a faulted drive.

FreeBSD accepts the spare without any comment, so if you come from Solaris it looks the same, but the inner workings do not match:
the spare is effectively a cold spare.

Regards,
Johan Hendriks


----------



## ctengel (Jan 5, 2012)

vermaden said:
			
		

> I share that 2TB ZFS pool over SAMBA/NFS protocols to the local LAN/WLAN and even use that box as a server (converting various video formats using FFMPEG and so) ... and everything is stable as rock


Wow, that's great! Like I think I mentioned, video streaming is about the most I'd be doing in terms of heavy-load-where-I-would-care-how-long-it-takes, so that is good to hear.


			
				phoenix said:
			
		

> Hrm, looking through /boot/loader.conf  on this system, it appears that DMA is now working. Guess something changed between FreeBSD 7.0 (what was originally installed) and FreeBSD 8.2-STABLE (what's currently running). It's still limited to UDMA33 speeds, though.


That is also good news.


phoenix said:
			
		

> These two servers using CF disks are being retired (one has already been retired, the other is in its last month of usage). The new storage servers use SSDs. Much nicer to work with.

SSDs were actually what I originally wanted for the root disk, thinking I could put some ZFS cache on there too (I've actually never done a separate cache vdev before, but that's just because, again, most of the servers I deal with have a TON of RAM), but then I realized how much they cost and that I'd probably be better off buying two CF cards.


			
				Sylhouette said:
			
		

> One more significant difference between the solaris and FreeBSD implementation is that you can add spares, but those are not hot spares like solaris.
> Human intervention is needed to replace a faulted drive.


Hmm, that stinks... I wonder why that is. I wouldn't think it would take much logic: if an array is degraded AND there is a spare available, replace the faulted drive with the spare. Seems simple enough to me! (In reality I'm sure it's slightly more complex, but still...) Maybe I should write a cron job that polls the zpool every minute to see if it's OK, and if not, does the replacement. 
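A rough sketch of what such a cron-driven watchdog could look like (the pool name, and the assumption that `zpool status` output can be parsed this way, are mine; treat it as a starting point, not a bulletproof tool):


```
#!/bin/sh
# Hypothetical hot-spare watchdog: run from cron; if the pool is
# degraded and a spare is AVAIL, start the replacement ourselves.
POOL="tank"

first_faulted() {
    zpool status "$POOL" | awk '$2 == "FAULTED" || $2 == "UNAVAIL" { print $1; exit }'
}

first_spare() {
    # only match AVAIL devices listed after the "spares" header
    zpool status "$POOL" | awk 'insp && $2 == "AVAIL" { print $1; exit } $1 == "spares" { insp = 1 }'
}

if command -v zpool >/dev/null 2>&1 &&
   [ "$(zpool list -H -o health "$POOL")" != "ONLINE" ]; then
    bad=$(first_faulted)
    spare=$(first_spare)
    [ -n "$bad" ] && [ -n "$spare" ] && zpool replace "$POOL" "$bad" "$spare"
fi
```

Dropped into /etc/crontab as, say, a once-a-minute root job, this approximates a hot spare, though a devd-triggered script would react faster than a one-minute poll.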

In other news, although I'm always full of questions, I think it's about time I stop screwing around in a virtual environment and just take the plunge.  I'm now fairly confident (thanks to all of your feedback) that my old hardware will do the trick! So hopefully tonight I will be ordering the necessary parts to bring it up to speed, and obviously the drives!


----------



## phoenix (Jan 5, 2012)

ctengel said:
			
		

> In other news, although I'm always full of questions, I think it's about time I stop screwing around in a virtual environment and just take the plunge.  I'm now fairly confident (thanks to all of your feedback) that my old hardware will do the trick! So hopefully tonight I will be ordering the necessary parts to bring it up to speed, and obviously the drives!



ZFS itself doesn't handle hot-spares, even in Solaris.  All ZFS does is send notifications of dead drives to the OS.  What the OS does with that notification ...

On Solaris, you have FMD.  That's what watches for the dead drive notifications, then initiates the "zpool replace" using the configured spare drive.

On FreeBSD, we have devd(8), which gets the notification of the dead drive.  But we have nothing in place to actually initiate the "zpool replace".  At least, nothing official.  There are various shell scripts floating around that can be plugged into devd.conf(5) to do this, but they're not exactly bulletproof.


----------



## Sylhouette (Jan 10, 2012)

> There are various shell scripts floating around



Do you have some links to these scripts?
I can not find them on the net.
Thanks.

regards
Johan


----------



## phoenix (Jan 10, 2012)

Search the freebsd-fs, freebsd-stable, and freebsd-current mailing lists for threads on "zfs hot spare".


----------



## ctengel (Jan 30, 2012)

So after getting sidetracked by something completely unrelated, I'm back to work on this, and it turns out, despite the fact I swore up and down that "Yes, I have a 64-bit processor," I do not.  (No wonder my motherboard supported so little RAM!) So now I'm back to square one again, and I'm thinking I really do need to buy a new motherboard/CPU/RAM.  (While I've definitely had some helpful advice from this thread that maybe I don't need that much RAM, the consensus seems to be that for ZFS, 64-bit is a must.  Although if anyone has experience with 32-bit, I might be convinced otherwise...)  And this also probably means I'll be using a PCI-e SATA card (maybe only one needed!), a SATA-CF adapter (instead of IDE), etc.


----------



## throAU (Jan 30, 2012)

It may be worth checking that 64 bit isn't simply disabled in your BIOS.  Look for any options concerning 64 bit support or "long mode" in the BIOS.

Worth a shot if you haven't looked in there already, a lot of 64 bit hardware shipped with 64 bit mode disabled in the BIOS by default.


----------



## phoenix (Jan 30, 2012)

If you want to do any kind of heavy lifting with ZFS (lots of NFS shares, lots of Samba shares, several TB of disk space, compression, dedupe, L2ARC, etc) then you will want to use 64-bit FreeBSD, as 4 GB of RAM just won't be enough.  

If you only have a couple TB of disk space, light compression, no dedupe, only a few clients accessing the pool, then you can get away with 32-bit FreeBSD.  I use 32-bit FreeBSD at home with only 2 GB of RAM, but there's only 1 TB of disk space in the pool (2x 500 GB mirror vdevs), 4 GB L2ARC, no dedupe, lzjb compression, and only 3 clients accessing the pool at any one time.  Every few weeks I have to reboot the box as it runs out of RAM or mbufs or something and locks up.  I'm contemplating migrating it to 64-bit FreeBSD just to get a larger kmem space.


----------



## ctengel (Feb 1, 2012)

Definitely not just the BIOS.  I just was going over my purchase records and realized it was an AMD Athlon XP.  I believe "Thoroughbred" class.  For some reason I could have sworn it was 64 bit, but I believe it is 32 bit.

I am not really planning on much heavy lifting, but am targeting about 5 TB of storage total in mirrored pairs.  I might be able to get a 64 bit system soon with about 8-16 GB RAM.


----------



## ctengel (Feb 27, 2012)

After all that it looks like I will be getting new hardware:

AMD FX-4100 Zambezi 3.6GHz (3.8GHz Turbo) Socket AM3+ 95W Quad-Core
ASUS M5A97 AM3+ AMD 970 SATA 6Gb/s USB 3.0 ATX 
CORSAIR XMS3 8GB (2 x 4GB) DDR3 1333
SYBA SD-ADA40001 SATA II To Compact Flash
ASUS 8400GS-512MD3-SL GeForce 8400 GS 512MB 32-bit DDR3 PCI Express 2.0 x16

Turns out it was about $100 USD more to build with all new hardware.  I thought I must have been missing something, but it turns out DDR3 is a lot cheaper than DDR1, and I was able to save on the need for SATA controller cards.

I will be getting a much faster system, and I don't think performance will be an issue at all.  What I am trying to determine for sure is whether the AMD SB950 SATA controller works with FreeBSD, whether the UEFI BIOS can boot FreeBSD, and whether using this SATA-CF adapter is as straightforward as an IDE-CF adapter.

Thanks once again for everyone's input on the old setup; I'm really looking forward to becoming a regular FreeBSD user!


----------



## olav (Feb 28, 2012)

I've been using rat-slow Kingston 4GB USB drives mirrored with gmirror for over a year now for my ZFS server. No problems at all.


----------

