# Software RAID problem



## FreeDomBSD (Feb 22, 2012)

I could use some help in figuring out what steps I need to take to solve a problem:
I want to install FreeBSD 9 with a softraid mirrored option, but I want to make sure that I do not lose the information on the existing two drives that were mirrored via softraid while adding two more drives to the softraid array.

1. about four years ago my friend set up a FreeBSD7 box with software raid that mirrored 2 drives. 
2. OS was on the separate drive
3. There were three disks total (1 OS, 2 identical for mirrored softRAID)
4. OS drive died two years ago and the system was just sitting there doing nothing
5. at some point it got an upgrade of two more disks that I couldn't figure out how to add to the RAID
6. so there are total of 5 disks now (1 new OS drive; 2 old softraid mirrored drives; 2 new drives that need to be a part of the softraid array)



----------



## dave (Feb 22, 2012)

I assume your friend used gmirror to set up the mirror?  If you add two more drives to a mirror, you will have four identical copies, but no extra space.  You want to use ZFS on your new machine.  Look at the difference between RAID1 and RAID5.  So, you will want to install FreeBSD, set up a ZFS raid-z array with the new drives, then access the mirrored drives and copy the data to the new zfs array.  Then, once you have copied all your data from the old array to the new, then you can wipe the old drives and add them to the new array to increase the total available storage.
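A minimal sketch of that sequence, assuming the old mirror really is a gmirror named gm0 and using made-up device names (ada2/ada3 for the new drives, ada0/ada1 for the old ones) -- adjust everything to what your system actually shows:

```
# Sketch only -- pool name "tank", gmirror name "gm0" and ada* names are guesses.
# 1. Create the new pool on the new drives:
zpool create tank raidz ada2 ada3
# 2. Bring up the old gmirror (read-only) and copy the data over:
gmirror load
mount -o ro /dev/mirror/gm0s1a /media
cp -Rp /media/. /tank/
# 3. After verifying the copy, retire the old mirror and add the freed
#    drives to the pool as an additional vdev for more space:
umount /media
gmirror stop gm0
gmirror clear ada0 ada1
zpool add tank mirror ada0 ada1
```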


----------



## Sebulon (Feb 22, 2012)

@dave

Yeah, but how would you boot?

@FreeDomBSD

My suggestion would be this: Install FreeBSD onto the new system drive, create a zfs mirror with the two new data drives, have both the old gmirror and your new zfs mirror mounted, and copy the data over from the old to the new. Then you can destroy the old mirror and add them as another mirror-set to your zpool. You then wind up with a RAID10, and also the combined storage of both your old mirror and the new. Next time you're tight on storage, you either buy two more drives and add them as yet another mirror-set into the pool, or replace the drives from the old mirror with two new ones.
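In command form, the mirror-of-mirrors growth step looks roughly like this (a sketch; the pool name "tank" and the gpt labels are placeholders):

```
# New pool: one mirrored vdev from the two new data drives
zpool create tank mirror gpt/diskD gpt/diskE
# ...copy the data over from the old gmirror here...
# Then add the two old drives as a second mirror vdev; ZFS stripes
# across the vdevs, giving the RAID10-style layout described above:
zpool add tank mirror gpt/diskB gpt/diskC
```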

/Sebulon


----------



## Sebulon (Feb 22, 2012)

I stand corrected. Apparently, you can boot from a raidz:
http://forums.freebsd.org/showthread.php?t=29900

@FreeDomBSD

If the new system drive and the other two data drives are of the same size, do what dave says. You will then have more storage, better security (because the boot system will also be protected), and the performance of a striped RAID5 + RAID1.

/Sebulon


----------



## FreeDomBSD (Feb 22, 2012)

Well, my main concern is the integrity of the data on the old mirror (let's give these drives a name, shall we? How about drive B and drive C and they make up the BC mirror).

Anyway, I have no idea how the softraid was set up. And I never tried to mount the disks separately out of the BSD box, for fear of destroying the data. My friend also doesn't remember how the softraid was set up. Is it possible to destroy the data on the BC mirror if I assume that the mirror was set up using the gmirror command, but in reality it was set up with some other command?

I have 5 drives total:

OS drive - drive "A"
BC mirror - drives "B" and "C"
New drive 1 - drive "D"
New drive 2 - drive "E"

I plan to boot from drive A.

Thanks for all your responses.


----------



## kpa (Feb 22, 2012)

Post the outputs from the following commands:

`# gmirror status`

`# zpool status`

`# gpart show`


----------



## FreeDomBSD (Feb 22, 2012)

As I've mentioned, the current OS drive is faulty and is getting replaced.


----------



## FreeDomBSD (Feb 22, 2012)

I plan to reinstall latest version of FreeBSD on the new drive A.


----------



## dave (Feb 22, 2012)

Sebulon said:

> @dave
> Yeah, but how would you boot?



You are installing FreeBSD onto, and booting from your "1 new OS drive".  Leave all other drives unplugged until you have installed FreeBSD on that first disk.

But even before you do that - do you know what kind of mirror the old drives are running?  You better figure that out.  Boot the FreeBSD 9 LiveCD and see if you can access them by starting gmirror.


----------



## FreeDomBSD (Feb 23, 2012)

Excellent idea! I'll post results tonight. Although I'm not sure how a live CD would know my mirror configuration that is saved as a setting on the OS drive.


----------



## dave (Feb 23, 2012)

If the drives are indeed mirrored using gmirror, then there is metadata on the drives themselves that describes the mirror.  Thus, once you boot the livecd with them connected, you should be able to


```
# gmirror load
```

and then


```
gmirror status
```

should show you something like


```
Name    Status  Components
mirror/gm0  COMPLETE  da0
                      da1
```

If you don't get similar results, then your softraid may be some other type.


----------



## FreeDomBSD (Feb 23, 2012)

Makes perfect sense now.

Thanks for the detailed explanation.


----------



## Sebulon (Feb 23, 2012)

@dave



> You are installing FreeBSD onto, and booting from your "1 new OS drive". Leave all other drives unplugged until you have installed FreeBSD on that first disk.


That math is limping. Drive A = OS, BC = old mirror and DE = new mirror. You need minimum 3x drives to create raidz.

@FreeDomBSD

By coincidence, I've been writing a HOWTO today on installing 9.0-RELEASE with ZFS boot and root that you can follow to get your new system up and running. This setup assumes the disks B, C, D and E are of the same size, and if you follow this guide you will be booting off of a mirrored USB zpool, while having the rest on a raidz zpool made up of disks B, C, D and E that is expandable as much as you like.


*DISCLAIMER*

... nuff' said, you know the drill. I will show you a path. You decide if you want to walk it.

Start by inserting both USB sticks and having the old BC mirror and DE drives connected, and boot up using a FreeBSD-9.0-RELEASE CD.


```
Welcome to FreeBSD! Would you
like to begin an installation
or use the live CD?

< Install >
```


```
Would you like to set a
non-default key mapping
for your keyboard

< Yes >

- Swedish ISO-8859-1

< OK >
```
(You'll want to choose your own mapping here.)


```
Please choose a hostname for this machine.

If you are running on a managed network, please
ask your network administrator for an appropriate
name.

- foobar

< OK >
```


```
Choose optional system components to install:

[ * ]  doc
[   ]  games
[   ]  lib32
[   ]  ports
[   ]  src

< OK >
```


```
Would you like to use the guided
partitioning tool (recommended for
beginners) or to set up partitions
manually (experts)? You can also open a
shell and set up partitions entirely by
hand.

< Shell >
```



> Use this shell to set up partitions for the new system. When finished, mount the system at /mnt and place an fstab file for the new system at /tmp/bsdinstall_etc/fstab. Then type 'exit'. You can also enter the partition editor at any time by entering 'bsdinstall partedit'.


(I'm going to assume that your drives are going to be called adaX from here on.)

*THESE INSTRUCTIONS HAVE BEEN UPDATED. READ LATER POST FOR IN-DETAIL INSTRUCTIONS.*


```
Please select a password for the system management account (root):
Changing local password for root
New Password: ************
Retype New Password: ************
```


```
Please select a network interface to configure:

em0		Intel® PRO/1000 Legacy Network Connection 1.0.3

< OK >
```
(Your card might not be an em specifically)


```
Would you like to
configure IPv4 for this
interface?

< Yes >
```


```
Would you like to use
DHCP to configure this
interface?

< No >
```
(You may of course choose DHCP if that is available)


```
Static Network Interface Configuration

IP Address	XXX.XXX.XXX.XXX
Subnet Mask	XXX.XXX.XXX.XXX
Default Router	XXX.XXX.XXX.XXX
```


```
Would you like to
configure IPv6 for this
interface?

< No >
```


```
Resolver Configuration

Search		foo.bar
IPv4 DNS #1	XXX.XXX.XXX.XXX
IPv4 DNS #2	XXX.XXX.XXX.XXX

< OK >
```
("Search" means that if you have entered a domain name, you can e.g. type "ping host" and the resolver auto-appends foo.bar after "host" for you.)


```
Is this machine's CMOS clock set to UTC? If it is set to local time,
or you don't know, please choose NO here!

< No >
- 8 Europe
-- 45 Sweden
```


```
Does the abbreviation 'CET' look reasonable?

< Yes >
```
(Correct as appropriate)


```
Choose the services you would like to be started at
boot:

[ * ]  sshd
[ * ]  moused
[ * ]  ntpd
[ * ]  powerd

< OK >
```
(As you see fit)


```
Would you like to enable crash dumps?
If you start having problems with the
system it can help the FreeBSD
developers debug the problem. But the
crash dumps can take up a lot of disk
space in /var.

< No >
```
(Not possible with ZFS)


```
Would you like to add
users to the installed
system now?

< Yes >
```
(It's bad practice doing everything as root)


```
Exit

< OK >
```

Done!

Some things to get you started. Personally, I've read it all like, a gazillion times. Things like these are good to have as a reference:

First off, from the good book
Then the wiki
And ultimately, the admin guide. Keep it under your pillow=)

/Sebulon


----------



## dave (Feb 23, 2012)

Sebulon said:

> @dave
> That math is limping. Drive A = OS, BC = old mirror and DE = new mirror. You need minimum 3x drives to create raidz.



Actually, it is possible to create a raidz array with only two drives, no problem.  I have done it.  Not ideal, but it is only for the time it takes to migrate the data.  Then, add the old drives, and simply export and import the pool.


----------



## Sebulon (Feb 23, 2012)

dave said:

> Actually, it is possible to create a raidz array with only two drives, no problem.  I have done it.  Not ideal, but it is only for the time it takes to migrate the data.



Yes, it is possible. However, due to the way ZFS is made, there are obvious reasons why you shouldn't:
http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance#mirroredperformance

Regarding write performance;



> When writing to mirrored vdevs, ZFS will write in parallel to all of the mirror disks at once and return from the write operation when all of the individual disks have finished writing. This means that for writes, each mirrored vdev will have the same performance as its slowest disk.





> When writing to RAID-Z vdevs, each filesystem block is split up into its own stripe across (potentially) all devices of the RAID-Z vdev. This means that each write I/O will have to wait until all disks in the RAID-Z vdev are finished writing.



(So for write performance, two drives mirror or raidz are about equal, BUT.)

Regarding read performance;



> When reading from mirrored vdevs, ZFS will read blocks off the mirror's individual disks in a round-robin fashion, thereby increasing both IOPS and bandwidth performance: You'll get the combined aggregate IOPS and bandwidth performance of all disks.





> When reading from RAID-Z vdevs ... the process is essentially reversed (no round robin shortcut like in the mirroring case)



That means that with two drives, a mirror vdev gives you the read performance from two drives, while a raidz vdev gives you the read performance of just one.




			
dave said:

> Then, add the old drives, and simply export and import the pool.


This statement is very fuzzy. It gives the impression that it is possible to grow a raidz vdev, which is false. By chance, I found a mail from Mr. Jeff Bonwick, one of the original developers of ZFS, explaining it very well:
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-July/003544.html


> ZFS uses dynamic striping.  So rather than growing an existing RAID-Z
> group, you just add another one.  That is, suppose you create the pool
> like this:
> 
> ...



This was written in 2006 but is still true today. To achieve growing or reordering, ZFS needs a feature called "Block Pointer Rewrite (BPR)", and I spent some time at the goog', trying to find anything official about it, to no avail, unfortunately. Just a lot of people wishing for it.

Perhaps I misunderstood you, dave. And in that case, I would like you to explain step by step what you meant. It is very important when working with ZFS to keep redundancy, or you will end up like this:
Zpool - Incorrecly added a drive to my raidz configuration


I designed my HOWTO for the OP after: 1) redundancy, 2) stability, 3) storage and 4) performance, in that order. In a perfect scenario the pool would have a spare disk (RAID5 + RAID1 + 1) and also a separate secondary system to make *backups* onto. Redundancy is *not* a backup.

/Sebulon


----------



## dave (Feb 24, 2012)

Sebulon said:

> Yes, it is possible. However, due to the way ZFS is made, there are obvious reasons why you shouldn't:
> http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance#mirroredperformance
> 
> Regarding write performance;
> ...



I was only suggesting it as a temporary setup, in order to migrate the data from the old drives.  Read what I wrote: "Not ideal, but it is only for the time it takes to migrate the data."



			
Sebulon said:

> This statement is very fuzzy. It gives the impression that it is possible to grow a raidz vdev, which is false. By chance, I found a mail from Mr. Jeff Bonwick, one of the original developers of ZFS, explaining it very well:
> http://mail.opensolaris.org/pipermail/zfs-discuss/2006-July/003544.html
> 
> 
> ...



OK, but I have done this successfully:


```
zfs add [poolname] [new device(s)]
zfs export [poolname]
zfs import [poolname]
zfs list  <-- will show expanded capacity
```

I used physically identical drives to do that.

Setting up 2 zfs mirrors would be fine, too.

Your howto is great, but I think it is perhaps too complicated for the OP.  He has a dedicated disk for the OS, and doesn't need to have root on ZFS.  Remember, the OP is not an expert.


----------



## Sebulon (Feb 24, 2012)

dave said:

> I was only suggesting it as a temporary setup, in order to migrate the data from the old drives.  Read what I wrote: "Not ideal, but it is only for the time it takes to migrate the data."



You never choose two drives in a raidz over two drives in a mirror, unless you actually want bad performance. In which case, knock yourself out.



			
dave said:

> zfs add [poolname] [new device(s)]
> zfs export [poolname]
> zfs import [poolname]
> zfs list  <-- will show expanded capacity


It worries me that you, just like our fellow from the thread Zpool - Incorrecly added a drive to my raidz configuration, have added devices into a pool without any redundancy (fault tolerance). If any of the added drives dies, your whole zpool is toast. And also, that syntax is wrong. You use zpool(8), not zfs(8). The correct way to expand a pool is:

```
zpool set autoexpand=on [poolname]
zpool add [poolname] [mirror/raidz(1,2,3)] [new devices]
zpool list  <-- will show expanded capacity
```
Saves you the trouble of exporting/importing the pool.




			
FreeDomBSD said:

> 4. OS drive died two years ago and the system was just *sitting there doing nothing*


This is the biggest problem with having a separate disk that the system boots off of. Using my suggestion, all of the drives would be bootable, so it doesn't matter which drive goes boom. It also yields better sequential performance, and the same random performance.

And I understand that it can feel more complicated, but it is a superior setup, in every way. That's why I made detailed instructions for the OP to follow.

/Sebulon


----------



## Sebulon (Feb 24, 2012)

@FreeDomBSD

Here is something I learned by simulating the setup on a VMware virtual machine. I found a logical bug:


```
# zpool status pool1
  pool: pool1
 state: ONLINE
  scan: resilvered 12.1M in 0h0m with 0 errors on Fri Feb 24 06:28:43 2012
config:

	NAME           STATE     READ WRITE CKSUM
	pool1          ONLINE       0     0     0
	  raidz1-0     ONLINE       0     0     0
	    gpt/disk1  ONLINE       0     0     0
	    gpt/disk2  ONLINE       0     0     0
	    gpt/disk3  ONLINE       0     0     0

errors: No known data errors
# zpool get bootfs pool1
NAME   PROPERTY  VALUE       SOURCE
pool1  bootfs    pool1/root  local
# zpool add -f pool1 mirror gpt/disk{4,5}
cannot add to 'pool1': root pool can not have multiple vdevs or separate logs
```

You cheap bastards x( But I can understand why that is. Too bad.

However:

```
# zpool get bootfs rpool
NAME   PROPERTY  VALUE   SOURCE
rpool  bootfs    -       default
# zpool add rpool mirror gpt/disk{4,5}
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 180K in 0h0m with 0 errors on Fri Feb 24 14:11:54 2012
config:

	NAME           STATE     READ WRITE CKSUM
	rpool          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    gpt/disk6  ONLINE       0     0     0
	    gpt/disk7  ONLINE       0     0     0
	  mirror-1     ONLINE       0     0     0
	    gpt/disk4  ONLINE       0     0     0
	    gpt/disk5  ONLINE       0     0     0

errors: No known data errors
```

So you cannot create a *bootable* RAID5 + RAID1, as I envisioned it. I am going to revise the plan for you, so that it works and still uses all of your disks, just a little differently.

/Sebulon


----------



## FreeDomBSD (Feb 25, 2012)

Hi I'm sorry for the lengthy delay.
Let me explain why it has been so:

I couldn't find a live version of FreeBSD to boot from, so I had to figure out how to "burn" the .img to a USB drive; I'll try it in a few hours.

ZFS requires at least 1 GB of RAM, and the current box was more or less "designed" to run on antiquated hardware (P4 / 512 MB SDRAM) and is non-upgradable (768 MB max RAM available).

Here is what I'm going to do in the next 48 hours:

Check the RAID status as per Dave's instructions and report back; upgrade the hardware so it can run DDR2 memory and wait for the updated instruction set.

My question to Sebulon is as follows: with your instructions, will I have no use for a dedicated OS drive?

Dave and Sebulon: I am ecstatic at the help both of you are providing. You guys are pretty awesome. I understood that Dave's solution was the temporary one from the get-go and that Sebulon's is the optimized long-term one for the final build. I'm a newbie at BSD and *nixes (as far as command sets and such), but very technical and can read/learn fairly easily. It is hard to jump in the middle as I'm doing now, but other than that I assimilate information quite well. I appreciate the hand-holding a lot!


----------



## dave (Feb 25, 2012)

FreeBSD 9 CD/DVD ISOs and the USB IMGs all have LiveCD built in, just boot any one and choose Live CD at the first prompt.  Even the "bootonly" ISOs.  That will give you a prompt so you can try accessing the old (gmirror?) drives.


----------



## FreeDomBSD (Feb 25, 2012)

Hardware woes. Sorry this is taking so long. Thanks for the tip btw I didn't know they were all live.


----------



## FreeDomBSD (Feb 26, 2012)

I can't boot into FreeBSD 9.

:/

I'll be looking for help in another thread

-

Finally booted into the live CD. At the login prompt I typed root. At # I typed *gmirror*. At the next # I typed *gmirror status*. Nothing happened. All drives are detected by FreeBSD and the BIOS.

--

Great news! I was able to bring back up the old OS drive and mount the file system via single user mode. It's not happy though. More details soon.

--

Looks like I have GEOM RAID. Does that sound right? It kicks me back out to single user mode after working for a bit on a file system and encountering some problems:



```
THE FOLLOWING FILE SYSTEMS HAD AN UNEXPECTED INCONSISTENCY:
ufs: /dev/ad0s1g (/home), ufs: /dev/ad0s1e (/tmp), ufs: /dev/ad0s1f (/usr), ufs: /dev/ad0s1d (/var)
Unknown error; help!
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
Dec 31 18:42:24 init: /bin/sh on /etc/rc terminated abnormally, going to single user mode
Enter full pathname of shell or RETURN for /bin/sh:
```


```
#gmirror status
Name Status Components
mirror/raid DEGRADED ad6
mirror/raid1 DEGRADED ad8
```
Running

`# fsck -y`

Still the same. 

--

It tries to boot from ad0s1a and fails, so I have to manually boot it into ad2s1a. After *fsck -y* the "can't stat" errors are still present. I don't know how to tell FreeBSD to use ad2 instead of ad0. It also thinks my RAID mirror is broken.

[ Merged a bunch of rambling posts. -- Mod. ]


----------



## DutchDaemon (Feb 26, 2012)

@FreeDomBSD, you need to start formatting your posts now. And please stop this 'live blogging', this isn't an IRC channel.


----------



## FreeDomBSD (Feb 26, 2012)

Yessir!


----------



## Sebulon (Feb 26, 2012)

FreeDomBSD said:

> My question to Sebulon is as follows: with your instructions, will I have no use for a dedicated OS drive?



Correct. You will have no use for a dedicated OS drive.

You will be using the old mirrored drives for a zfs mirror on /, /usr, /usr/local and /var, and the three new drives in a zfs raidz on /usr/home, as per the instructions. You will also be able to grow the home-pool when it is filled. Feel free to improvise the layout as you see fit. Some people also have different filesystems for /usr/ports, /var/db, etc., but in my own experience, the first ones have been enough for me, for what I do.

I have now simulated my revised setup and will update my previous post shortly. I will however state one more time that the setup requires disks A,D and E to be of the same size, and you never replied as to how big they are. Please, do share that.

Before, you said that the drives were organized like this:
OS drive - drive "A"
BC mirror - drives "B" and "C"
New drive 1 - drive "D"
New drive 2 - drive "E"

They need to be reorganized for the server to be able to boot properly, like this:
BC mirror - drive "B"
BC mirror - drive "C"
OS drive - drive "A"
New drive 1 - drive "D"
New drive 2 - drive "E"

Correct this before trying to install using my instructions, or the drive names won't match and you'll end up trashing everything.

BTW, are you going to be using this server as a workstation or a NAS?

/Sebulon


----------



## FreeDomBSD (Feb 26, 2012)

NAS.

Drives are as follow:

A: 250GB
BCDE: 1TB/each

If we are doing away with the dedicated OS drive, I wouldn't mind getting rid of A. It's the only PATA drive in the system and I also have no idea how to rearrange the drives, so getting rid of A seems the easiest.

I think I should bring up the old mirror first. I don't want to lose data on the BC mirror.


----------



## FreeDomBSD (Feb 26, 2012)

I also failed to mention that it seems the DE drives are indeed part of a mirror:

I may have forgotten this over the years, but I may have added the DE drives to another mirror, because it seems that I have RAID and RAID1 mirrors.

I remember working on adding the drives, but I don't remember succeeding, if at all.


----------



## FreeDomBSD (Feb 27, 2012)

Given the obvious issue of having one terabyte drive fewer than we need to perform Sebulon's solution, I have some thoughts on how to achieve a similar goal:

1. Verify full data integrity of the BC mirror
2. Use CDE drives for ZFS mirror
3. Copy B's data to the ZFS mirror

Will this work?


----------



## FreeDomBSD (Feb 27, 2012)

I am pretty sure that the DE drives are empty, but I need to bring this system fully up to verify this. Can someone help me with the remapping of ad2s1a from the old ad0s1a, please?


----------



## Sebulon (Feb 27, 2012)

@FreeDomBSD

If we use B and C to boot, that will give 1TB of storage just for booting the machine and compiling ports or storing packages. That will only sum up to a few GBs, and the rest of the 1TB would essentially be wasted space sooner or later, since you cannot grow the pool you are booting from. Then, D and E would become a separate mirror, 1TB big, mounted on /usr/home.

Getting rid of drive A would be the best choice. But using B and C just for booting would be a waste of storage space. The best advice I can give you is to buy two cheap 4GB USB drives that you can boot with and have / on, then create a big pool using B, C, D and E, giving you 3TB of storage that you mount /usr, /usr/local, /usr/home and /var on.

In short:

Using B and C for boot and /, /usr, /usr/local and /var, and D and E for /usr/home gives you essentially only 1TB of storage and the ability to grow /usr/home.

Using two USB-drives for boot and /, and B,C,D and E for /usr, /usr/local, /usr/home and /var gives you 3TB storage and the ability to grow everything except /.

The only downside to having / on the USB drives is that doing make installworld or freebsd-update install is going to SUCK, since USB drives are so IOPS constrained. It's just going to take longer, that's all. But this is how we set up our servers and we have never had any problems with this kind of setup.

/Sebulon


----------



## dave (Feb 27, 2012)

FreeDomBSD said:

> I am pretty sure that DE drives are empty, but I need to bring this system fully up to verify this.
> Can someone help me with the remapping of ad2s1a from the old ad0s1a please?



To re-map the boot drive to a different device, reboot the machine into Single User Mode, then edit your /etc/fstab using vi(1).  This is the file that contains the info about which partitions are mounted at boot.

Be sure to make a note of what's currently in your fstab file in case you need to change it back, or just copy it to a backup.  Also, if you are a beginner, vi can be a little tricky; you may want to open the vi(1) man page on a nearby screen.  The mount command below switches / to read-write so you can make changes to the file you are editing, because Single User Mode is read-only by default.


```
mount -uw /
cp /etc/fstab /etc/fstab.old
vi /etc/fstab
```
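As an illustration, using the slice names that appeared in the earlier boot messages, the edit amounts to renaming the device in each line (the mountpoints and pass numbers below are illustrative; keep whatever your actual fstab shows):

```
# old entry                             # changed to
/dev/ad0s1a  /      ufs  rw  1  1  ->   /dev/ad2s1a  /      ufs  rw  1  1
/dev/ad0s1d  /var   ufs  rw  2  2  ->   /dev/ad2s1d  /var   ufs  rw  2  2
/dev/ad0s1e  /tmp   ufs  rw  2  2  ->   /dev/ad2s1e  /tmp   ufs  rw  2  2
/dev/ad0s1f  /usr   ufs  rw  2  2  ->   /dev/ad2s1f  /usr   ufs  rw  2  2
/dev/ad0s1g  /home  ufs  rw  2  2  ->   /dev/ad2s1g  /home  ufs  rw  2  2
```

Don't forget the swap line, if there is one.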


----------



## wblock@ (Feb 27, 2012)

ee(1) is part of the base system and has on-screen instructions, so I generally recommend it for people who don't know vi(1).


----------



## dave (Feb 27, 2012)

wblock@ said:

> ee(1) is part of the base system and has on-screen instructions, so I generally recommend it for people who don't know vi(1).



Didn't know you could use ee(1) in single user mode.  Good to know, thanks. :stud


----------



## FreeDomBSD (Feb 28, 2012)

Thanks Dave and wblock!

I'm actually familiar with vi, but didn't know the mount command.

I will check out ee because I never heard of it before.


----------



## FreeDomBSD (Feb 29, 2012)

I'm ready to go with the USB stick solution Sebulon proposed (I'll be using an identical pair of 8GB USB sticks), unless anyone else has anything to add?

I will try to bring up the BC & DE RAIDs. Wish me luck!


----------



## Sebulon (Feb 29, 2012)

*IN-DETAIL INSTRUCTION SET:*

Locate and identify your drives (example):

```
# camcontrol devlist
<ATA SOMETHING 0001>         at scbus0 target 0 lun 0 (ada0,pass0) (Disk B?)
<ATA SOMETHING 0001>         at scbus0 target 1 lun 0 (ada1,pass1) (Disk C?)
<ATA SOMETHING 0001>         at scbus0 target 2 lun 0 (ada2,pass2) (Disk D?)
<ATA SOMETHING 0001>         at scbus0 target 3 lun 0 (ada3,pass3) (Disk E?)
<USB SOMETHING 1.0>          at scbus0 target 4 lun 0 (pass4,da0) (USB A?)
<USB SOMETHING 1.0>          at scbus0 target 5 lun 0 (pass5,da1) (USB B?)
```

If you want to know more about one of them, you can use e.g.:
`# diskinfo -v da0`

Load up gmirror, identify the partitions and slices, check the filesystem/s, mount and then split the mirror to free up a drive to use for your new data-pool:
`# gmirror load`

```
# ls -1 /dev/mirror
gm0
gm0s1
gm0s1a
```
`# fsck -t ufs /dev/mirror/gm0s1a`
Repeat for each slice, if more than just the "a".
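If `ls -1 /dev/mirror` showed more slices than just the "a", a small shell loop covers them all (a sketch; adjust the glob to the slices that were actually listed):

```
for p in /dev/mirror/gm0s1[a-h]; do
    fsck -t ufs "$p"
done
```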
`# gmirror remove gm0 /dev/ada1`
`# gpart delete -i 1 ada1`
`# gpart destroy ada1`

Now to check if there's any partitioning on D or E (the sizes and sectors are different from yours, it's just an example):

```
# gpart show ada2
=>     34  8388541  ada2  BSD  (4.0G)
       34  8388541    1  freebsd-ufs  (4G)
# gpart show ada3
=>     34  8388541  ada3  BSD  (4.0G)
       34  8388541    1  freebsd-ufs  (4G)
```
And if any, clean them:
`# gpart delete -i 1 ada2`
`# gpart delete -i 1 ada3`
`# gpart destroy ada2`
`# gpart destroy ada3`

Create a new partitioning scheme and set the partition start and end aligned to 4k:
`# gpart create -s gpt ada1`
`# gpart create -s gpt ada2`
`# gpart create -s gpt ada3`
`# gpart add -t freebsd-zfs -l diskC -b 2048 -a 4k ada1`
`# gpart add -t freebsd-zfs -l diskD -b 2048 -a 4k ada2`
`# gpart add -t freebsd-zfs -l diskE -b 2048 -a 4k ada3`

This is a tricky part. For silly reasons, you need to create "fake" drives to create your pool with. You shouldn't have to do this really, but it's the only workaround I've found for the gnop-trick to work and the labels to show up. The recipe for dd is to seek forward to the total size of your drives, minus one MB for the partition start and end boundaries. Use diskinfo to see the size of one of your drives in bytes:

```
# diskinfo -v ada0
	512         	# sectorsize
	1000199467008	# mediasize in bytes (1T)
	1953514584  	# mediasize in sectors

... (I didn't have any 1TB drives available, so these numbers are based on a 2TB drive divided by 2)

# echo "1000199467008 / 1024000 - 1" | bc
976756
```

`# dd if=/dev/zero of=/tmp/tmpdsk2 bs=1m seek=976756 count=1`
`# dd if=/dev/zero of=/tmp/tmpdsk3 bs=1m seek=976756 count=1`
`# dd if=/dev/zero of=/tmp/tmpdsk4 bs=1m seek=976756 count=1`
`# dd if=/dev/zero of=/tmp/tmpdsk5 bs=1m seek=976756 count=1`

`# mdconfig -a -t vnode -f /tmp/tmpdsk2 md2`
`# mdconfig -a -t vnode -f /tmp/tmpdsk3 md3`
`# mdconfig -a -t vnode -f /tmp/tmpdsk4 md4`
`# mdconfig -a -t vnode -f /tmp/tmpdsk5 md5`

Do the files need to be called the same as the md's? No, but it's a lot easier to remember what was what.

`# gnop create -S 4096 md2`
`# zpool create -O mountpoint=none -o autoexpand=on -o cachefile=/var/tmp/zpool.cache pool2 raidz md2.nop md{3,4,5}`
`# zpool export pool2`
`# gnop destroy md2.nop`
`# zpool import -o cachefile=/var/tmp/zpool.cache pool2`

So the data-pool is now imported again with the "fake" drives. By using the gnop-trick, we tell ZFS that the smallest write it can send is 4k. Now we begin to replace the fake ones with the real, minus drive B, which we will have to deal with after we've copied the data out from it.
`# zpool offline pool2 md2`
`# mdconfig -d -u 2`
`# rm /tmp/tmpdsk2`

`# zpool offline pool2 md3`
`# mdconfig -d -u 3`
`# rm /tmp/tmpdsk3`
`# zpool replace pool2 md3 gpt/diskC`

`# zpool offline pool2 md4`
`# mdconfig -d -u 4`
`# rm /tmp/tmpdsk4`
`# zpool replace pool2 md4 gpt/diskD`

`# zpool offline pool2 md5`
`# mdconfig -d -u 5`
`# rm /tmp/tmpdsk5`
`# zpool replace pool2 md5 gpt/diskE`

Using the same method to create the boot-pool:
`# gpart create -s gpt da0`
`# gpart create -s gpt da1`

`# gpart add -t freebsd-zfs -l usbA -b 2048 -a 4k da0`
`# gpart add -t freebsd-zfs -l usbB -b 2048 -a 4k da1`

`# dd if=/dev/zero of=/tmp/tmpdsk2 bs=1m seek=8191 count=1`
`# mdconfig -a -t vnode -f /tmp/tmpdsk2 md2`
`# gnop create -S 4096 md2`
`# zpool create -O mountpoint=none -o cachefile=/var/tmp/zpool.cache pool1 md2.nop`
`# zpool export pool1`
`# gnop destroy md2.nop`
`# zpool import -o cachefile=/var/tmp/zpool.cache pool1`
`# zpool attach pool1 md2 gpt/usbB`
`# zpool offline pool1 md2`
`# mdconfig -d -u 2`
`# rm /tmp/tmpdsk2`
`# zpool replace pool1 md2 gpt/usbA`

`# zfs create -o mountpoint=legacy -o compress=on pool1/root`
`# zfs create -o mountpoint=legacy -o compress=on pool2/root`
`# zfs create pool2/root/usr`
`# zfs create pool2/root/usr/local`
`# zfs create pool2/root/usr/home`
`# zfs create pool2/root/var`
`# zfs create -o compress=on -s -V 512m pool2/swap`
Swap has historically been sized at twice your amount of RAM.
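Following that rule of thumb, a quick sanity check of the zvol size (the RAM amount here is hypothetical; adjust to your machine):

```shell
# Rule of thumb: swap = 2 x RAM. With a hypothetical 2 GB of RAM:
ram_mb=2048
swap_mb=$((ram_mb * 2))
# This is the size you would pass to: zfs create -V ${swap_mb}m pool2/swap
echo "${swap_mb}m"
```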

`# zpool set bootfs=pool1/root pool1`
`# mount -t zfs pool1/root /mnt`
`# mkdir /mnt/tmp`
`# mkdir /mnt/usr`
`# mkdir /mnt/var`
`# mount -t zfs pool2/root/usr /mnt/usr`
`# mount -t zfs pool2/root/var /mnt/var`
`# mkdir /mnt/usr/home`
`# mkdir /mnt/usr/local`
`# mount -t zfs pool2/root/usr/home /mnt/usr/home`
`# mount -t zfs pool2/root/usr/local /mnt/usr/local`

Edit the configuration files needed for the system to boot up and mount everything in place:

```
[CMD="#"]ee /tmp/bsdinstall_etc/fstab[/CMD]
/dev/zvol/pool2/swap    none            swap    sw      0       0
pool1/root              /               zfs     rw      0       0
tmpfs                   /tmp            tmpfs   rw      0       0
pool2/root/usr          /usr            zfs     rw      0       0
pool2/root/usr/home     /usr/home       zfs     rw      0       0
pool2/root/usr/local    /usr/local      zfs     rw      0       0
pool2/root/var          /var            zfs     rw      0       0
[CMD="#"]ee /mnt/boot/loader.conf[/CMD]
autoboot_delay="5"
zfs_load="YES"
vfs.root.mountfrom="zfs:pool1/root"
```

Now we start copying everything out from the old BC-mirror:
`# mkdir -p /mnt/usr/home/BC-mirror/gm0s1a`
`# mount -t ufs /dev/mirror/gm0s1a /media`
`# cd /media`
`# find -xs . | cpio -pdmv /mnt/usr/home/BC-mirror/gm0s1a`
`# cd /`
`# umount /media`
Repeat for each slice, if more than just the "a".

Crash the old mirror, wipe the old partitioning, create the new partitioning and complete the data pool:
`# gmirror stop gm0`
`# gmirror unload gm0`
`# gpart delete -i 1 ada0`
`# gpart destroy ada0`
`# gpart create -s gpt ada0`
`# gpart add -t freebsd-zfs -l diskB -b 2048 -a 4k ada0`
`# zpool replace pool2 md2 gpt/diskB`

Resilvering this disk can take a while depending on how much you copied in from the old BC mirror. Watch the progress with:
`# zpool status pool2`
`# zpool iostat pool2 1`

When done, wrap it up with:
`# mkdir -p /mnt/boot/zfs`
`# cp /var/tmp/zpool.cache /mnt/boot/zfs/`
`# exit`

I have tried to describe the process in as much detail as possible, and in doing so there's also a greater risk of typos. I have re-read this a couple of times to minimize that risk, and I have also gone through this exact process in a VMware virtual machine, except with a smaller disk size (4GB), and it worked from start to finish. I have taken as many things as I can into consideration, but there may still be things that differ when you attempt this, device names for example. Either rearrange how the drives are connected in your chassis to match the instructions, or correct the commands as appropriate.

May the Schwartz be with you, always.

/Sebulon


----------



## FreeDomBSD (Mar 1, 2012)

dave said:
			
		

> To re-map the boot drive to a different device, reboot the machine into Single User Mode, then edit your /etc/fstab using vi(1).  This is the file that contains the info about what partitions are mounted at boot.
> 
> Be sure to make a note of what's currently in your fstab file, in case you need to change it back, or just copy it to a backup.  Also, if you are a beginner, vi can be a little tricky.  You may want to open the vi(1) man page on a nearby screen.  The mount command switches the filesystem to read-write so you can make changes to the file you are editing, because Single User Mode is read-only by default.
> 
> ...




```
vi: not found
```

And 


```
ee: not found
```


----------



## FreeDomBSD (Mar 1, 2012)

```
Trying to mount root from ufs:/dev/ad0s1a

manual root filesystem specification:
   <fstype>:<device>   Mount <device> using filesystem <fstype>
                         eg. ufs:da0s1a
?                      List valid disk boot devices
<empty line>           Abort manual input

mountroot> ufs:ad2s1a
Trying to mount root from ufs:ad2s1a
Loading configuration files.
No suitable dump device was found.
Entropy harvesting: interupts ethernet point_to_point kickstart.
swapon: /dev/ad0s1b: No such file  or directory
Starting file system checks:
/dev/ad2s1a: FILE SYSTEM CLEAN: SKIPPING CHECKS
/dev/ad2s1a: clean 3171 free (3171 frags, 0 blocks, 0.3% fragmentation)
Can't stat /dev/mirror/raid: No such file or directory
Can't stat /dev/mirror/raid: No such file or directory
Unknown error; help!
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
Jan 4 15:16:04 init: /binsh on /etc/rc terminated abnormaly, going to single user mode
Enter full pathname of shell or RETURN for /bin/sh:
#
```


```
# mount -uw /
# cp /etc/fstab
vi: not found
#
```


----------



## wblock@ (Mar 1, 2012)

vi(1) and ee(1) are on /usr, which is not mounted.  Instead of `mount -uw /`, use
`# mount -a`


----------



## FreeDomBSD (Mar 1, 2012)

```
#mount -a
mount: /dev/ad0s1g : No such file or directory
#
```

The reason I am attempting to edit fstab is to change all ad0 references to ad2.

Thanks for your help wblock.


----------



## wblock@ (Mar 1, 2012)

Sorry, until fstab is fixed, you will have to do the manual equivalent of `mount -a`.

```
mount -uw /
mount /dev/ad0s1d /var
mount /dev/ad0s1e /tmp
mount /dev/ad0s1g /usr
```

Adjust those based on what is in the current fstab.  Use labels and it won't be a problem ever again.
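As a sketch of the label approach suggested here (label names are made up; note that glabel(8) stores its metadata in the partition's last sector, so label partitions while they are unmounted):

```
# glabel label var /dev/ad0s1d        (run once per unmounted partition)
# fstab then refers to the label, so ad0 vs ad2 no longer matters:
/dev/label/var    /var    ufs    rw    2    2
/dev/label/usr    /usr    ufs    rw    2    2
```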


----------



## FreeDomBSD (Mar 3, 2012)

Thanks for the link, Mr. Block!

OT: I just realized that that's not the first time I've been to your site! I read the heck out of your All About Bikes in 2011.
Internet becoming a smaller world?


----------



## wblock@ (Mar 3, 2012)

The stuff on my "home page" is pretty much untouched from 1998.  All the FreeBSD stuff is newer: http://www.wonkity.com/~wblock/docs/index.html


----------



## FreeDomBSD (Mar 5, 2012)

Hi Sebulon,

I encountered some problems following your instructions and am unsure if I should follow further without resolving them first.



```
#camcontrol devlist
<ST31000340NS SN04> at scbus0 target 0 lun 0 (ada0,pass0)
<ST31000340NS SN04> at scbus1 target 0 lun 0 (ada1,pass1)
<SAMSUNG HD103J 1AJ10001> at scbus2 target 0 lun 0 (ada2,pass2)
<SAMSUNG HD103J 1AJ10001> at scbus3 target 0 lun 0 (ada3,pass3)
<Kingston DataTraveler 2.0 1.00> at scbus8 target 0 lun 0 (pass4,da0)
<  1.20> at scbus9 target 0 lun 0 (pass5,da1)
<  1.20> at scbus10 target 0 lun 0 (pass6,da2)
```

* ada0&ada1 --> Drives B&C in unknown order
* ada2&ada3 --> Drives D&E in unknown order
* da0 --> USB drive with FreeBSD 9
* da1&da2 --> pair of 8GB USB drives (drives F&G, shall we?)


```
# gmirror load
GEOM_MIRROR: Cannot add disk ada3 to raid1 (error=17).
GEOM_MIRROR: Component ada1 (device raid) broken, skipping.
GEOM_MIRROR: Device mirror/raid launched (1/2).
#GEOM_MIRROR: Force device raid1 start due to timeout.
GEOM_MIRROR: Device mirror/raid1 launched (1/2).
GEOM_MIRROR: integrity check failed (mirror/raid1, BSD)
```


----------



## dave (Mar 5, 2012)

From earlier in this thread:


			
				FreeDomBSD said:
			
		

> ```
> #gmirror status
> Name Status Components
> mirror/raid DEGRADED ad6
> ...



And recently:


			
				FreeDomBSD said:
			
		

> ```
> # gmirror load
> GEOM_MIRROR: Cannot add disk ada3 to raid1 (error=17).
> GEOM_MIRROR: Component ada1 (device raid) broken, skipping.
> ...



It looks like you might have two separate mirrors there, each missing its other half.  What happens if you mount them and see what data is there?


```
# mkdir /mnt/raid
# mount /dev/mirror/raid /mnt/raid
# ls -lah /mnt/raid

# mkdir /mnt/raid1 
# mount /dev/mirror/raid1 /mnt/raid1
# ls -lah /mnt/raid1
```


----------



## FreeDomBSD (Mar 6, 2012)

After the first command it announces that it is a read-only file system:


```
# mkdir /mnt/raid
mkdir: /mnt/raid: Read-only file system
```

Keep in mind that this is while installing FreeBSD 9 (not my old installation of FreeBSD).


----------



## FreeDomBSD (Mar 6, 2012)

wblock@ said:
			
		

> Sorry, until fstab is fixed, you will have to do the manual equivalent of mount -a.
> 
> ```
> mount -uw /
> ...



After mounting /var, /tmp and /usr on my old install I was still unable to use ee or vi (same error message as before).


----------



## dave (Mar 6, 2012)

FreeDomBSD said:
			
		

> After the first command it announces that it is a read-only file system:
> 
> 
> ```
> ...



Hmmm, OK.  I thought you had FreeBSD 9 running on your OS drive.  If not, you could boot from the Live CD, access those mirrors, and at least figure out what's what.


----------



## peetaur (Mar 6, 2012)

@Sebulon, for the one vdev limit issue, I've found that it is simple enough to slice up your disks to put your root slices on the same disks as your big pool that uses all disks. For example with 2 mirror vdevs:

1st data vdev disks:
disk 1: boot, root, and data slices
disk 2: boot, root, and data slices

2nd data vdev disks:
disk 3: data slice
disk 4: data slice

Then put a 2-way mirror on the first 2, and make a pool (using -f since you have different size disks) with the 4 data slices. That way you have as many vdevs as you want for data, but a single one for root (and then you can put /usr/src or whatever else on data if you run out of space on the root mirror).

`# zpool create zroot mirror gpt/root1 gpt/root2`
`# zpool create -f data mirror gpt/data1d1 gpt/data1d2 mirror gpt/data2d1 gpt/data2d2`

Optional for a small performance boost. I do this to all my slow disks (USB sticks), and non-root pools.
`# zfs set atime=off data`


The only problems I see with the above suggestion:
* slightly more complex (e.g. what happens when disk 1 fails and is replaced? Does the admin know it is part of 2 pools plus bootcode? Will a hotspare replace script handle this? data with raidz instead, etc.)
* performance would be hurt if you put stress on the root slices (maybe /tmp); in my setups the root disks see nearly nothing, so I assume this is a non-issue in most cases

[And on a fun and dangerous side note, from reading mailing lists, it seems if you change the bootfs property to trick it into allowing 2 vdevs, then set bootfs back, and (maybe not necessary: ) set copies=<number of vdevs+1> (for whichever datasets are needed for boot, /boot only?), booting seems to work]


----------



## FreeDomBSD (Mar 7, 2012)

Dave, 

I tried to run the commands you gave me in post #46


```
# gmirror load
GEOM_MIRROR: Cannot add disk ada3 to raid1 (error=17).
GEOM_MIRROR: Component ada1 (device raid) broken, skipping.
GEOM_MIRROR: Device mirror/raid launched (1/2).
#GEOM_MIRROR: Force device raid1 start due to timeout.
GEOM_MIRROR: Device mirror/raid1 launched (1/2).
GEOM_MIRROR: integrity check failed (mirror/raid1, BSD)
```


```
# mkdir /mnt/raid
mkdir: /mnt/raid: Read-only file system
# mount /dev/mirror/raid /mnt/raid
mount: /mnt/raid: No such file or directory
```

I didn't bother attempting to run the rest of the command sets.


----------



## peter200285 (Mar 7, 2012)

I am also facing similar problem but with different specifications so *I* want to create a new topic for this, am *I* allowed to do so? *A*sking this as *I* am a new member here. Please guide me people.



			
				FreeDomBSD said:
			
		

> I could use some help in figuring out what steps I need to take to solve a problem:
> I want to install FreeBSD9 with a softraid mirrored option, but I want to make sure that I do not loose the information of the existing two drives that were mirrored via softraid while adding two more drives to the softraid array.
> 
> 1. about four years ago my friend set up a FreeBSD7 box with software raid that mirrored 2 drives.
> ...


----------

