# Unable to boot (root on zfs)



## Hidendra (Sep 27, 2013)

This machine (9.1-RELEASE-p4) is made up of three drives in RAIDZ. I was removing an unused drive (not mounted or used anywhere) and accidentally left one of the three pool drives unplugged. The problem started on the next boot and kept happening even after I plugged that drive back in, so something is still missing.

When booting I get past the first boot phase and into booting the OS itself. Everything looks fine up until this point:


```
Mounting local file systems:.
internal error: failed to initialize ZFS library
Setting hostname: dev
```

I can boot into single user mode (which drops in just about directly before that point) just fine, and everything works, including ZFS; I can see the drives (already mounted when I go into single user mode). The third drive I plugged back in resilvered 500 KB with no other errors.

From there it tries (operative word: tries) to start everything. For example, devd fails:


```
ps: cannot mmap corefile
Starting devd
devd: Can't open devctl device /dev/devctl: No such file or directory
```

and sshd also fails to start:


```
PRNG is not seeded
```

Other services start just fine, for example Nginx.

Once it finishes booting I am not able to do anything directly at the console; it is frozen at:


```
Thu Sep 26 23:23:29 UTC 2013
<cursor>
```

<cursor> being the cursor box

I've looked at just about everything I could. I am using a locally compiled kernel, but it has always worked, and it is plain GENERIC with no changes (I do see zfs.ko, random.ko, etc. all in the kernel folder).

What I've tried:

- updating to the latest svn for 9.1-p7; the result is the same
- putting the unused drive (just an old drive formatted with NTFS) back in
- reinstalling the bootcode on all three drives
- booting from each of the three drives
- recopying the cachefile (using the mfsBSD ISO) to the pool
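For the cachefile step, roughly what I ran from the mfsBSD live environment (pool name from this thread; the temporary path is illustrative):

```shell
# Import the pool under an altroot so nothing clobbers the live
# environment, writing a fresh cache file as part of the import:
zpool import -f -o cachefile=/tmp/zpool.cache -R /mnt tank

# Copy the fresh cache file into the pool's own /boot/zfs so the
# next boot sees it:
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache

# Cleanly export before rebooting:
zpool export tank
```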
Before this started happening, I do not recall doing anything that could have caused it.

Any thoughts would be greatly appreciated.


----------



## Hidendra (Oct 8, 2013)

I forgot to post back and just wanted to follow up after I figured out the issue. I don't think anyone else will easily run into the same problem (I've never facepalmed so hard), but I wanted to follow up in case someone does.

A couple of days before I happened to reboot and found this starting, I had taken a backup of a live machine I was moving elsewhere (it also had ZFS on root). Instead of dumping the backup to a file, I received the entire filesystem into its own dataset, since I was modifying it a bit, i.e.:


```
zfs send -R tank@backup | ssh dev zfs recv -dF tank/backups/xxx
```

Naturally, this preserves properties, including _mountpoint_. Since the source was also ZFS on root, the _mountpoint_ for tank/backups/xxx/root/ so happened to be _/_. I'm sure you can see where this is going; it really messed with _tank/root_. (This backup, BTW, was running 9.1-RELEASE-p6.)
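You can spot this kind of collision by listing mountpoints recursively (dataset names from this post):

```shell
# A -R send carries properties along with it, so check what the
# received backup datasets think their mountpoints are:
zfs get -r -o name,value,source mountpoint tank/backups

# Any dataset whose mountpoint is "/" will shadow the real root
# filesystem the next time it gets mounted during boot.
```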

After removing the mountpoint property on the backup dataset, everything is back to normal with no further issues.
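For reference, the fix amounts to something like this (dataset name from this post):

```shell
# Stop the backup copy of root from claiming "/" at mount time:
zfs set mountpoint=none tank/backups/xxx/root
```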


----------



## Sverre Eldøy (Dec 13, 2015)

You have no idea. I did the same thing. The machine had been running "forever" and all of a sudden, after a power failure, the exact behaviour you described started. I have scratched my head so hard for so long. I could easily boot into single user mode and mount the root pool; I just didn't understand what happened once I booted "back to normal".



Hidendra said:


> After removing the mountpoint property for the backup volume everything is back to normal again and not having any issues.



Although I looked at the ZFS setup for quite some time, I did not notice that the backup from the other server had the same mountpoint. I played around with devd, thinking I had screwed something up there, looked for possible hardware failures, and even suspected a kernel bug. So thank you for saving my sanity.


----------



## kpa (Dec 13, 2015)

Never import another pool onto your system without setting the altroot property (with the -R option to `zpool import`) to, for example, /mnt if there's even the slightest chance that the other pool has mountpoints that conflict with the running system. That was one of the hard lessons I learned with ZFS when I played with it.
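A sketch of that defensive import (the pool name here is illustrative):

```shell
# Import a foreign pool under an altroot so all of its mountpoints
# are interpreted relative to /mnt instead of clobbering the live
# system's directories:
zpool import -R /mnt otherpool

# A dataset whose recorded mountpoint is "/" now mounts at /mnt
# rather than over the running root filesystem.
```

The altroot is not persistent; it applies only for that import, so it is a cheap safety net every time you touch someone else's pool.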


----------

