# Restoring corrupted ZFS



## wisdown (Jun 12, 2015)

Hey guys,

After seeing my system freeze while compiling the package:

www/webkit-gtk3

I was ready to test alternatives. My first guess was that having a slice for log and another slice for cache on the same HDD was the problem, so to rule that out, I did:


```
zpool remove zroot /dev/ada0p2
zpool remove zroot /dev/ada0p3
```

ada0p2 = log (2 GB)
ada0p3 = cache (8 GB)

Then, following some "tuning guides", I unfortunately set failmode=continue on the pool, since I had read somewhere this would help with hiccups on busy servers. (This install is not for a server, but for my personal use on an old laptop.)
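failmode is a pool property, so it can be inspected and reverted at any time. A hedged sketch of checking it and restoring the safer default (`wait`), again assuming the pool name from this thread:

```shell
# Show the current setting (one of: wait | continue | panic).
zpool get failmode zroot

# Restore the default: block I/O and wait for the device to recover
# instead of returning errors to applications.
zpool set failmode=wait zroot
```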

Then, to make things worse, I did a reboot to test:


```
reboot
```

And before the reboot I had read something about blocks being lost because of a reboot...

Then, after typing the password to mount the encrypted ZFS, I received this error:


```
panic: solaris assert: nvlist_lookup_uint64(configs, ZPOOL_CONFIG_POOL_TXG, &txg) == 0, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c, line: 3967
cpuid = 1
KDB: stack backtrace:
#0 0xffffffff80963000 at kdb_backtrace+0x60
#1 0xffffffff80928125 at panic+0x155
#2 0xffffffff81bbf1fd at assfail+0x1d
#3 0xffffffff819bbc53 at spa_import_rootpool+0x73
#4 0xffffffff81a1082d at zfs_mount+0x34d
#5 0xffffffff809c0659 at vfs_donmount+0xde9
#6 0xffffffff809c320d at kernel_mount+0x3d
#7 0xffffffff809c5cdc at parse_mount+0x62c
#8 0xffffffff809c404c at vfs_mountroot+0x9ac
#9 0xffffffff808d7533 at start_init+0x53
#10 0xffffffff808f8b6a at fork_exit+0x9a
#11 0xffffffff80d0b67e at fork_trampoline+0xe
```

So my guess about what happened is one of the following:

The failmode=continue setting broke the sync on reboot, or in other words "worked as expected", and I lost something that was needed to mount the partition...

Or, after removing the cache and log devices, the system was still processing and syncing data (like when running a scrub), but unfortunately I didn't check for that or wait for it to finish...

Is there any chance to revert this?

I have tried booting into live CD mode, but I can't mount or even see the .eli files.
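From a live CD the GELI provider has to be attached before the pool can even be seen by an import. A rough sketch of that step, where the partition and keyfile paths are examples only, not taken from this thread:

```shell
# Attach the encrypted provider; on success this creates /dev/ada0p4.eli.
# The partition and keyfile paths below are placeholders.
geli attach -k /path/to/encryption.key /dev/ada0p4

# With the .eli device present, the pool should now show up here.
zpool import
```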


----------



## wisdown (Jun 12, 2015)

I'm reformatting the system; that should be faster.

Anyway, I hope this bad experience helps someone else avoid the steps I took.


----------



## junovitch@ (Jun 15, 2015)

If it panicked just importing the pool, then the go-to recommendation of doing a force import and attempting to roll back to the last good transaction group may have still caused a panic.  It's hard to say for sure.
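For anyone hitting the same panic, that recovery attempt usually looks something like the sketch below. This is not a guarantee: `-F` rewinds the pool to an earlier transaction group and can discard the last few seconds of writes.

```shell
# Dry run first: -n reports whether a rewind import would succeed
# without actually modifying the pool.
zpool import -F -n zroot

# Then force-import read-only under an alternate root, rolling back
# to the last good transaction group if necessary.
zpool import -F -f -o readonly=on -R /mnt zroot
```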

One comment: you don't have to add a log device or a cache device.  They are helpful in certain workloads, but just because a feature is there doesn't mean it needs to be used.  Having both the log device and the cache device on the same physical device as the pool would likely have made performance worse in your case.


----------

