# After an install and adding devices, I'm unable to boot.



## eydaimon (Oct 7, 2014)

To avoid any accidents with my valuable data, I removed two
devices (geli/zfs) from the system.

I did a fresh install of FreeBSD on my drive to get the geli/zfs
configuration I desired.

Once the install was finished, I booted from the drive to make sure
everything worked. And so it did.

I then added the two devices again and booted. This time I was dropped to a
`mountroot>` prompt and was unable to boot.

How can I solve this? People on #freebsd mentioned using glabel labels, but
I'm unclear on how this is done, and because I'm using geli/zfs no one is
entirely sure how it applies.

Help appreciated.


----------



## asteriskRoss (Oct 7, 2014)

Are the devices you added part of the ZFS pool containing your root filesystem?  If so, are you attaching them on boot correctly (by setting the -b flag on the GELI devices)?  If the devices you added have separate ZFS pools, have you tried putting a line in your /boot/loader.conf to tell the boot loader where to find the root filesystem?  Something like:

```
vfs.root.mountfrom="zfs:yourrootzfspool"
```
If neither of these helps, could you explain your configuration in more detail?  Posting the output of `gpart show`, `zpool status` and your GELI configuration from /boot/loader.conf and /etc/rc.conf would be useful.
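For reference, the -b flag mentioned above can also be set on an already-initialised GELI provider with `geli configure`. A sketch, assuming the provider is on ada0p3 (the device name is only an example; substitute your own):

```
# Mark an existing GELI provider so it is attached at boot time (-b flag)
geli configure -b ada0p3

# Verify: the Flags line of the output should now include BOOT
geli list ada0p3.eli
```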


----------



## eydaimon (Oct 7, 2014)

asteriskRoss said:

> Are the devices you added part of the ZFS pool containing your root filesystem?



No.

asteriskRoss said:

> If the devices you added have separate ZFS pools, have you tried putting a line in your /boot/loader.conf to tell the boot loader where to find the root filesystem?  Something like:
> 
> ```
> vfs.root.mountfrom="zfs:yourrootzfspool"
> ```



When I tried removing the added devices later and booted again, I got an error saying that a filesystem could not be found. The error indicated that it was using the following setting:

```
vfs.root.mountfrom="zfs:bootpool"
```

According to some people on #freebsd (as near as I can tell), the device info was embedded in zpool.cache, and the solution would be to use gpart label to label the device during the install.
But that makes it sound as if a completely fresh install is necessary for this to work, which sounds odd. I'm hoping it can be corrected without that process.

asteriskRoss said:

> If neither of these helps, could you explain your configuration in more detail?  Posting the output of `gpart show`, `zpool status` and your GELI configuration from /boot/loader.conf and /etc/rc.conf would be useful.



I'm unable to boot the drive as it is, but this was a vanilla FreeBSD 10.1-BETA install with a single drive (ada0) present. I hope that helps.

Thank you much


----------



## asteriskRoss (Oct 7, 2014)

Unfortunately I'm not familiar with the default layout of the 10.1-BETA installer and have only ever configured my GELI/ZFS machines by hand.  To investigate and fix, you should be able to boot from the FreeBSD installation media, then select Live CD and log in as root with no password.  You can then manually attach the GELI device(s) (see geli attach syntax in the geli(8) man page), and import the zpools by hand, setting an alternate root with the -R option (see zpool import in the zpool(8) man page).
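The rescue steps above might look something like this from the Live CD shell. This is only a sketch: the device name ada0p3, the keyfile path and the pool name bootpool (taken from your earlier mountroot error) are assumptions you will need to adjust to your actual layout:

```
# Attach the encrypted provider by hand; geli will prompt for the passphrase.
geli attach /dev/ada0p3

# If the installer also created a keyfile, pass it with -k instead:
# geli attach -k /path/to/keyfile /dev/ada0p3

# Import the pool under an alternate root so it does not
# interfere with the live system, then inspect the datasets.
zpool import -R /mnt bootpool
zfs list
```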

Things to check first are:

- the GELI device holding your boot pool is configured to be attached on boot (configured with the -b option; see the geli(8) man page)
- the configuration for GELI devices in /boot/loader.conf is correct (correct devices and keys, if used)
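For the second point, a /boot/loader.conf for a GELI-on-ZFS setup typically looks something like the fragment below. The device name ada0p3 and keyfile path are illustrative, and the mountfrom value is the one from your earlier error message; your installer-generated file may differ:

```
# /boot/loader.conf -- illustrative GELI/ZFS boot configuration
aesni_load="YES"
geom_eli_load="YES"
zfs_load="YES"

# Preload a GELI keyfile for ada0p3, if the install used one
geli_ada0p3_keyfile0_load="YES"
geli_ada0p3_keyfile0_type="ada0p3:geli_keyfile0"
geli_ada0p3_keyfile0_name="/boot/encryption.key"

vfs.root.mountfrom="zfs:bootpool"
```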
Particularly since you're using a beta release, it's also possible it's a bug.  Did you search for known issues on FreeBSD bugzilla?


----------



## eydaimon (Oct 7, 2014)

asteriskRoss said:

> Particularly since you're using a beta release, it's also possible it's a bug.  Did you search for known issues on FreeBSD bugzilla?



I had not searched for a bug.  Do you think perhaps this is related? https://bugs.freebsd.org/bugzilla/show_ ... ?id=174310

I'm going to try your suggestion. However, I think a piece of the puzzle to make it work would be to use gpart label to label the drive and then import it using the label. Thoughts on that?


----------



## asteriskRoss (Oct 7, 2014)

I've had success attaching other ZFS pools after the root filesystem is mounted, using GPT labels and associated configuration in /etc/rc.conf.  To attach the GELI device containing the root ZFS pool I have always had to reference the device directly (something like /dev/ada0p3) rather than via the label to get it working properly.  It's possible there has been a change to GELI for the 10.1 release.

I don't see how labels would help, but I wasn't involved in your conversation on IRC.  Since your system isn't working now, there is no harm in trying it out.  The bug you found could be related, though your system was booting fine before you added the other devices.  If you are able to post the configuration details I suggested, we can at least eliminate the simpler possible causes.
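For the non-root pools, the rc.conf approach I mean looks roughly like this. The labels gpt/data1 and gpt/data2 and the keyfile paths are made up for the example; rc.d/geli attaches each device listed in geli_devices at boot, after which ZFS can import the pools on top of the .eli providers:

```
# /etc/rc.conf -- attach secondary GELI devices at boot (illustrative names)
geli_devices="gpt/data1 gpt/data2"
geli_gpt_data1_flags="-k /root/keys/data1.key"
geli_gpt_data2_flags="-k /root/keys/data2.key"

zfs_enable="YES"
```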


----------



## eydaimon (Oct 7, 2014)

Looks like they're already labeled :/


----------



## eydaimon (Oct 7, 2014)

So I ended up just reinstalling and trying again. I reinstalled with just one device in the system, then added the other devices later. And voilà, this time it works.

That's quite concerning, but hey, at least now I know I can add drives without things getting corrupted.


By the way, adding a device with geli causes a geli attach to be attempted on every single device. Is there a way to limit that?


----------



## asteriskRoss (Oct 8, 2014)

I'm pleased your configuration is working, even if we didn't get to the bottom of what was causing the issue.

eydaimon said:

> By the way, adding a device with geli causes a geli attach to be attempted on every single device. Is there a way to limit that?


I haven't seen that behaviour. Do you mean when initialising or attaching?  What command did you run and what did you see to indicate every device was being accessed?


----------



## eydaimon (Oct 8, 2014)

This happened during booting but apparently only at first boot because I've not seen it since.


----------

