# RAID 0



## rpowell47 (May 11, 2018)

I need to be talked to as if I'm a third grader! I have copied, read, tried, and searched the BSD Handbook and the internet, and looked at FreeBSD books and manuals published by other authors, and have not found a simple, easy, step-by-step procedure for installing a fresh copy of FreeBSD 11.1 onto a set of four identical hard drives in a RAID 0 format. Since I'm in my 7th decade of life, I'm not worried about the notion of redundancy of data.

Any time or support in offering places to look will be greatly appreciated.


----------



## SirDice (May 11, 2018)

The "howto" section is not for asking how to do things. Thread moved to storage.

What exactly are you having problems with?

gconcat(8), zpool(8), gvinum(8), graid(8).


----------



## VladiBG (May 11, 2018)

*RAID 0 does not provide any redundancy. This means that if one disk in the array fails, all of the data on the disks is lost.*

Do you have a RAID controller?

If not, you can use gstripe(8): set it up from the shell before the installation, then exit and continue the installation.
https://www.freebsd.org/doc/handbook/geom-striping.html
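For what it's worth, that shell step might look roughly like this (a sketch only; the device names ada0-ada3 and the label st0 are assumptions, not taken from the thread):

```shell
# From the installer's Shell option, before partitioning.
# Load the stripe module and create a stripe across all four disks:
gstripe load
gstripe label -v st0 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3

# The new device appears as /dev/stripe/st0; type `exit` to return
# to the installer and choose it as the installation target.
```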


----------



## VladiBG (May 11, 2018)

Before the installation starts you have three options (Live CD / Shell / Install). Go to Shell, set up the RAID 0 there, and type `exit` to continue the installation process. During the installation you will then see the created stripe and can install the OS on it.

If you have an onboard Intel chipset, do the same, but instead of gstripe you can use graid to create the RAID 0 volume. Even better would be to set it up from your BIOS before booting.
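A hedged sketch of the graid(8) variant (the metadata format, array name, and device names here are assumptions):

```shell
# Load the module and create a RAID 0 array using Intel metadata:
graid load
graid label Intel data RAID0 ada0 ada1 ada2 ada3

# The resulting volume shows up under /dev/raid/r0.
```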

Edit:
Using RAID 0 is a bad idea. You may want to think twice and set up RAID 1+0 or RAID 5 instead.


----------



## zirias@ (May 11, 2018)

If you have a decent amount of memory, use ZFS. It includes a "volume manager" with configurations equivalent to any RAID level. Don't use a striped setup (aka RAID 0) unless you absolutely need every bit of hard disk space and performance at the same time -- you sacrifice data safety, as already mentioned by VladiBG. (With four disks, a stripe effectively _quadruples_ your risk of total data loss.)


----------



## ralphbsz (May 11, 2018)

The only information I can find about the RAID on that motherboard is that it is from Nvidia (a graphics card company!), and this is how it is described at the Gigabyte web site: "This NVIDIA RAID function makes the RAID even more accessible by introducing the innovative windows-based facility."  So probably it is not real RAID, but requires a windows driver.

Personally, I would forget about using the motherboard RAID, and use software RAID within FreeBSD instead.

Having said that: With 4 disk drives, using RAID 0 is a pretty bad idea.  Why?  Because the reliability of your RAID array will be 4x worse than the reliability of any individual drive.  Think about it this way: If you put a single file system on the 4 disks, then the file system will be seriously damaged (probably beyond repair) if any of the 4 drives fail.  Therefore the probability or rate of file system damage will be 4x higher than the probability or rate of failure of each drive.

My suggestion is: Since you have 4 drives, a single-fault tolerant RAID setup is both reasonably reliable (not great), and gives you reasonable capacity: You get 3 drives' worth of storage space, and you can tolerate any one drive (or sector) failing.  I would do this within ZFS, by setting up a storage pool that contains the 4 drives, and uses RAID-Z, which is a single-fault tolerant parity based RAID system.  With ZFS you also get checksums, so errors that are not detected by the drives will be found and worked around using the redundant data.
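As a sketch, creating such a pool could look like this (the pool name "tank" and the device names are assumptions):

```shell
# Single-parity RAID-Z across four whole disks:
zpool create tank raidz ada0 ada1 ada2 ada3

# Verify the layout:
zpool status tank
```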

Setting up ZFS is actually quite easy.

UPDATE: Just saw that you posted something else a moment ago: you don't actually need the capacity, but are interested in performance. In that case, a better option would be a 4-disk RAID 1+0 solution. That means every bit of data is written to two disks, so this gives you two disks' worth of capacity. You'll have to find some ZFS documentation to see how to set this up; the obvious way (just putting all 4 disks into the same mirrored pool) would give you 4-way RAID 1, where every bit of data is written to all four disks -- lower performance, ridiculously good reliability, and only one disk's worth of capacity. Setting up a 2+2-disk RAID 1+0 may be a two-step process in ZFS.
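One hedged way to express a 2+2 RAID 1+0 layout in ZFS is a single pool built from two mirror vdevs, which ZFS stripes across automatically (pool and device names are assumptions):

```shell
# Two 2-way mirrors; writes are striped across the two vdevs:
zpool create tank mirror ada0 ada1 mirror ada2 ada3
```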


----------



## zirias@ (May 11, 2018)

Well, a good compromise could be RAID 5 (or, if you use ZFS, RAID-Z), giving you the capacity and performance of three "striped" disks and using one disk's worth of space for redundancy. That's what I'm actually using with 4 disks. Beware, however, that with ever larger disks, RAID 5/RAID-Z isn't considered "fail-safe" any more -- recovering from an error requires rewriting a whole replacement disk, and as disks get larger this takes quite some time, increasing the chance that a second disk fails while restoring, which would again mean total data loss. Nevertheless, it's far better than plain RAID 0 -- in fact, with RAID 0 on 4 disks, you increase your risk of total data loss considerably: any single disk failure makes the data on the remaining disks useless.

If you're aiming for really good fail-safety, use some mirrored mode instead (sacrificing half of your total capacity).


----------



## VladiBG (May 11, 2018)

I would connect all four drives directly to the south-bridge Nforce4 chipset ports (marked s_ata0_sb - s_ata4_sb), not to the other four SATA ports on the Sil3114, which are connected through the PCI bus: the ports on the Nforce4 chipset provide a theoretical maximum of 300 MB/s, while the others are limited to 150 MB/s. Then, using ZFS, I would set them up as RAID-Z (giving n-1 disks' worth of capacity).


----------



## VladiBG (May 11, 2018)

If you are going to use ZFS, you have to set "NV IDE/SATA RAID function" to [Disabled] in the BIOS.


----------



## robroy (May 11, 2018)

rpowell47, beyond the excellent Handbook, I've found Michael W. Lucas's _FreeBSD Mastery: Storage Essentials_, _FreeBSD Mastery: ZFS_, and (with Allan Jude) _FreeBSD Mastery: Advanced ZFS_ very helpful and eye-opening.


----------



## Phishfry (May 11, 2018)

VladiBG's instructions look spot on. You can use the installer to make gstripe and gmirror arrays.
http://www.wonkity.com/~wblock/docs/html/gmirror.html
I think I added this to /boot/loader.conf on the FreeBSD memstick installer:
`geom_mirror_load="YES"`
That way it has RAID 1 support right off the bat. Purely optional, but it does allow file operations on the array in Live CD mode without loading the module by hand.


----------



## VladiBG (May 11, 2018)

geom_raid is included in the GENERIC kernel, so there's no need to load it manually via loader.conf.


----------



## zirias@ (May 11, 2018)

rpowell47 said:


> After reading Chpt. 19, I'm not sure about ZFS. I would like to stay with UFS and go ahead with RAID 1+0; but again I'm branching off into another area. From what I understand, RAID 1+0 (or RAID 10) is one of the most stable setups using UFS RAID.


Well, "RAID10" is completely mirrored, and any such mode gives you the best safety -- you can achieve something similar with ZFS as well. RAID-Z (or, without ZFS, RAID-5) is a nice compromise, you have to judge for yourself. Just don't use RAID-0.

As for ZFS, if you're not somewhat short of RAM, go for it. It has so many nice features, you'll probably love them. One of the best for me is the ability to clone (copy-on-write) datasets from snapshots in virtually no time. This is great for jails (and also virtual machines) -- for example, ports-mgmt/poudriere makes good use of it when building packages in a clean environment. I can only recommend trying it out.
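A minimal sketch of that snapshot/clone workflow (the dataset names are assumptions):

```shell
# Take a snapshot of a pristine dataset, then clone it.
# The clone is copy-on-write: it appears almost instantly and only
# consumes space as it diverges from the snapshot.
zfs snapshot zroot/jails/base@clean
zfs clone zroot/jails/base@clean zroot/jails/web1
```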


----------



## VladiBG (May 12, 2018)

There's no RAID configuration in the attached file; it won't work.

How much memory do you have on the computer?


----------



## VladiBG (May 12, 2018)

There's no exact procedure that you can follow. You must understand what each command does, not just blindly copy some guide you found on the internet.


I would suggest using the built-in RAID on your motherboard. The only downside is that if your motherboard dies in the future, you will need to find another motherboard with the same RAID controller to be able to read the RAID volume on another computer.
To do this, first enter your BIOS at boot time by pressing DEL, then navigate to Integrated Peripherals and set "NV IDE/SATA RAID function" to [Enabled]. Save the BIOS settings and exit. At the next reboot, enter the NVIDIA RAID Utility by pressing F10, then define a new array with RAID mode "Stripe Mirroring" (RAID 10) and assign the four disks to the array. This will delete any data on the disks, so be sure the disks are empty or hold nothing of value. Press F7 to save (finish), then ESC to quit.
Then you can boot your FreeBSD CD or USB stick and proceed with the installation. During the installation, when you select where to install the OS, be sure to select the GPT partition scheme and the /dev/raid/rXX disk.


----------



## ralphbsz (May 13, 2018)

What do people think about trying ZFS with 2.5GByte of RAM?

On one hand, one always hears that ZFS is an absolute memory hog and won't work, or will be unreliable, with small amounts. On the other hand, my server at home has been running for ~5 years, admittedly with a light workload, on 3 GByte of RAM without any problems whatsoever.


----------



## chrbr (May 13, 2018)

ralphbsz said:


> What do people think about trying ZFS with 2.5GByte of RAM?


Here I run ZFS on a mirror of two disks with 2 GByte of RAM without any problems.


----------



## ralphbsz (May 18, 2018)

You should really cut and paste the output of "zpool status" here.  That will tell us much more.  The output above only tells us that you have file systems, not whether they are RAIDed.


----------



## VladiBG (May 18, 2018)

Having four disks in RAID 1 is a little bit overkill.
As for your question:

> is in fact running

Yes, it's running. You are using RAID 1 (a mirror) on top of 4 disks.


----------



## SirDice (May 18, 2018)

Yeah, why have 3 copies of the same disk? 

This is a much more efficient configuration:

```
root@hosaka:~ # zpool status stor10k
  pool: stor10k
 state: ONLINE
  scan: scrub repaired 0 in 0h16m with 0 errors on Mon Mar 26 01:14:11 2018
config:

        NAME        STATE     READ WRITE CKSUM
        stor10k     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da0     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

errors: No known data errors
```


----------



## VladiBG (May 18, 2018)

You can do this on the fly, no need to reinstall. It's good to have a backup in case anything goes wrong.

In short: detach ada2p3 and ada3p3 from the current mirror-0, then add a new mirror-1 to zroot using the same two partitions. Because the disks are already part of the installation's mirror, you don't have to create a new GPT on them or copy the bootcode. The only downside is that you end up with swap partitions spread across both mirror sets.

```
zpool detach zroot ada2p3
zpool detach zroot ada3p3
zpool add zroot mirror ada2p3 ada3p3
```

Edit:
This may be wrong, because the existing data on mirror-0 will not be rebalanced over mirror-1. You may want to check `zpool list -v` to see the data allocation.

Edit 2:
On second thought, it would be better to reinstall: there's no data rebalancing (reallocation) after expanding the pool with the method above.


----------

