# Gmirror + Geli + Dump restore



## xy16644 (Jul 22, 2012)

I want to encrypt the mirrored drives in my server using geli. I currently use gmirror to mirror them. I have read some of the geli articles on this forum (thanks bbzz!), but most articles/how-tos are for a single drive, so here is how I did it for a mirrored system.

Anyway, here is what I have done. I have two 320GB drives in this test machine. Here are the steps:

```
glabel label MirrorDisk0 /dev/ada0

glabel label MirrorDisk1 /dev/ada1

gmirror load

gmirror label -v RootMirror0 /dev/label/MirrorDisk0 /dev/label/MirrorDisk1

gpart create -s MBR mirror/RootMirror0

gpart add -t freebsd -a 4k -s 768m mirror/RootMirror0    # creates mirror/RootMirror0s1

gpart add -t freebsd -a 4k mirror/RootMirror0            # creates mirror/RootMirror0s2

gpart create -s BSD mirror/RootMirror0s1

gpart create -s BSD mirror/RootMirror0s2

gpart add -t freebsd-ufs  -a 4k mirror/RootMirror0s1

gpart add -t freebsd-ufs -a 4k mirror/RootMirror0s2

gpart bootcode -b /boot/mbr mirror/RootMirror0

gpart bootcode -b /boot/boot mirror/RootMirror0s1

gpart set -a active -i 1 mirror/RootMirror0

glabel label -v encrypt mirror/RootMirror0s2

geli init -b -s4096 -l256 /dev/label/encrypt

kldload geom_eli

geli attach /dev/label/encrypt

gpart create -s bsd /dev/label/encrypt.eli

gpart add -t freebsd-ufs -s 293g /dev/label/encrypt.eli 

gpart add -t freebsd-swap /dev/label/encrypt.eli 

newfs mirror/RootMirror0s1a

newfs -j /dev/label/encrypt.elia

umount /tmp

mount -w /dev/da0a /tmp

mount -t ntfs /dev/da1s1 /media

mount /dev/label/encrypt.elia /mnt

cd /mnt

restore -rf /media/restore.dump

/mnt/boot/loader.conf:

geom_eli_load="YES"
vfs.root.mountfrom="ufs:/dev/label/encrypt.elia"

/etc/fstab:

/dev/label/encrypt.elia /     ufs    rw               1 1
/dev/label/encrypt.elib none  swap   sw               0 0

mount /dev/mirror/RootMirror0s1a /tmp
cp -Rvp /mnt/boot /tmp/
```

When I rebooted I was prompted for the passphrase and the system booted ;-) 

I see you get three attempts at entering your passphrase... what happens after that?

I would appreciate any feedback on the steps outlined above, as this is my first geli mirror system (restored using dump).

Thank you!

PS: What's the correct way of *completely* wiping ALL info from a hard drive? Not just data and partitions, but gmirror/gpart/boot code/etc.?


----------



## xy16644 (Jul 22, 2012)

Is there a "special" way of backing up an encrypted system using geli? I tried my usual:

```
dump -b64 -0uaLf /tmp/backup.dump /dev/label/encrypt.elia
```

and it didn't seem to like that. Is there another command or way I need to use to back up my system now that it's encrypted with geli?


----------



## kpa (Jul 22, 2012)

dump(8) has no way of understanding encrypted data; it expects a standard UFS filesystem as the source. Use the mountpoint of the filesystem as the argument:

`# dump -b64 -0uaLf /tmp/backup.dump /`


----------



## xy16644 (Jul 22, 2012)

kpa said:

> dump(8) has no way of understanding encrypted data, it expects a standard UFS filesystem as the source. Use the mountpoint of the filesystem as the argument:
> 
> `# dump -b64 -0uaLf /tmp/backup.dump /`



Thanks, I tried that. There seems to be a lot of activity when I hit Enter to run the command, and then after a minute or two there's no more activity on the hard drive. Normally dump takes about 15 minutes to back up root. Even after waiting that long there's no activity, and it doesn't take me back to the command prompt. Any other ideas? Am I doing something wrong?

Also, if I Ctrl-Z out of the dump command, the system is VERY slow and sluggish, to the point of me having to power off the machine.


----------



## kpa (Jul 22, 2012)

Press CTRL-T to see what it's doing; chances are that your system is affected by the UFS SU+J bug that is still present in FreeBSD 9 (even in 9-STABLE). If CTRL-T shows mksnap_ffs(8) and you're not able to stop it, it's a sure sign of the problem. The only remedy at the moment is to run
`# tunefs -j disable /dev/devnode` on the filesystem in a livecd/memstick environment, where /dev/devnode is the device node that matches the filesystem.


----------



## xy16644 (Jul 22, 2012)

Wow, my first experience with a FreeBSD bug! Thank you! When I tried CTRL-T it did indeed show mksnap_ffs. I tried:

`# tunefs -j disable /dev/label/encrypt.elia`

but it says:

```
tunefs: Failed to write journal inode: Operation not permitted
```


----------



## kpa (Jul 22, 2012)

Boot with a livecd/memstick and load up geli so the unencrypted root filesystem can be changed, but don't mount it.


----------



## xy16644 (Jul 22, 2012)

kpa said:

> Boot with a livecd/memstick and load up geli so the unencrypted root filesystem can be changed, but don't mount it.



I've booted off a Live CD and tried:

```
gmirror load
kldload geom_eli
geli attach /dev/label/encrypt
mount /dev/label/encrypt.elia /mnt
tunefs -j disable /mnt
```

But I'm still getting the "Operation not permitted" error. Any ideas?


----------



## kpa (Jul 22, 2012)

Leave out the mount(8) command; a mounted filesystem cannot be changed by tunefs(8).


----------



## xy16644 (Jul 22, 2012)

That did the trick:

```
gmirror load
kldload geom_eli
geli attach /dev/label/encrypt
tunefs -j disable /dev/label/encrypt.elia
```

I have rebooted and dump is now running! Will this bug be fixed in FreeBSD 9.1? Seems like quite a bad one...

I noticed my dump backup time went from about 11 minutes (pre-geli) to about 30 minutes (post-geli). Is this the encryption slowing things down? Not complaining, just wanting to make sure this is how it'll be with geli enabled.


----------



## wblock@ (Jul 22, 2012)

xy16644 said:

> I have rebooted and dump is now running! Will this bug be fixed in FreeBSD 9.1? Seems like quite a bad one...



Some work has been committed.  Don't know if it will be fixed completely in time for 9.1.



> I noticed my dump backup time went from about 11 minutes (pre geli) and is now about 30 minutes (post geli). Is this the encryption slowing things down?



Yes.  Some fast processors with AES-NI instructions can keep up with the hardware, but most people don't have them yet.


----------



## xy16644 (Jul 23, 2012)

I definitely don't have AES-NI instructions as my CPU is way too old.

What's the best way to completely wipe a hard drive? I want to remove ALL gmirror, gpart, labels, bootcode, metadata, etc. from the drive so I have a completely blank hard drive.

I have tried:

```
glabel clear -v /dev/ada0
gmirror clear -v /dev/label/MirrorDisk0
gpart delete -i 1 label/MirrorDisk0
```

Is this correct? 

With all the testing I do on the same drive, I sometimes find it picks up old gmirror settings or old labels, etc. What commands do I need to run (and in what order) to completely wipe a disk so it has nothing on it?


----------



## jb_fvwm2 (Jul 23, 2012)

Using dd? Someday someone may update the gpart(8) manpage so that each of the EXAMPLES it lists is eventually ten times as verbose, so that someone using a particular command subset has concise examples of the twenty or so ways it could be used. I imagine that would be expensive, gpart being a relatively new utility, given the time involved... I used dd yesterday, then found that the drive needs a BIOS update anyway.

RFC: if everyone who uses gpart updated a wiki somewhere with their usage, results, and overviews, it could grow incrementally.


----------



## wblock@ (Jul 23, 2012)

xy16644 said:

> I defintely don't have AES-NI instructions as my CPU is way too old.
> 
> Whats the best way to completely wipe a hard drive? I want to remove ALL gmirror, gpart, labels, bootcode, metadata etc from the drive so I have a completely blank hard drive.



Completely blank?  Use dd(1) to write zeros to it, and a bs= of 64K or more to make it go as fast as possible.



> I have tried:
> 
> ```
> glabel clear -v /dev/ada0
> ...



The first two will only work if that type of information is on the drive, and that metadata is only at the end.  The last only deletes a single partition.

If you want to get rid of just partition tables and metadata, erase the first and last 64K or so of the drive. Some people erase more, as it doesn't take very long.

If the drive is ada9:

Erase the first 1M.

```
# dd if=/dev/zero of=/dev/ada9 bs=1m count=1
```
The block size and count are just for convenience there, not speed.


Erase the last 1M.

```
# diskinfo -v /dev/ada9
/dev/ada9
	512         	# sectorsize
	1000204886016	# mediasize in bytes (931G)
	1953525168  	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	1938021     	# Cylinders according to firmware.
	16          	# Heads according to firmware.
	63          	# Sectors according to firmware.
	WD-WC123456789	# Disk ident.
# dd if=/dev/zero of=/dev/ada9 seek=1953523120
```

dd(1)'s block size is left at the default, 512 bytes.  The seek= number is the number of 512-byte blocks on the drive (1953525168) minus 2048 blocks, or 1M.  (I wish, not for the first time, that dd(1) would take negative seek values.)  So dd(1) starts writing at 1M from the end of the drive and then continues until it runs out of drive.
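The seek arithmetic can be sketched in plain sh (TOTAL here is the "mediasize in sectors" value from the diskinfo output above; for another drive you would substitute its own count):

```shell
# Compute the seek offset for zeroing the last 1M of a drive.
# TOTAL would come from diskinfo's "mediasize in sectors" line.
TOTAL=1953525168
SEEK=$((TOTAL - 2048))     # back off 2048 512-byte blocks = 1M
echo ${SEEK}               # prints 1953523120
# dd if=/dev/zero of=/dev/ada9 seek=${SEEK}
```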


----------



## wblock@ (Jul 23, 2012)

jb_fvwm2 said:

> Using dd? Someday someone may update the gpart(8) manpage so that each of the EXAMPLES it lists is eventually ten times as verbose, so that someone using a particular command subset has concise examples of the twenty or so ways it could be used. I imagine that would be expensive, gpart being a relatively new utility, given the time involved... I used dd yesterday, then found that the drive needs a BIOS update anyway.
>
> RFC: if everyone who uses gpart updated a wiki somewhere with their usage, results, and overviews, it could grow incrementally.


But gpart(8) can't delete all metadata anyway; `gpart destroy` only works on partition tables.


----------



## xy16644 (Jul 23, 2012)

Many thanks wblock@. I was trying the following command to completely wipe the drive, but it didn't seem to do the trick:

`# dd if=/dev/zero of=/dev/ada0 bs=512 count=1`

I'll give your commands a try and report back. The reason I ask this is that I use the same disks to test stuff on my one test machine and I've noticed that it sometimes picks up labels, mirrors etc from past tests I've done. Whenever I start a new test I want the disk to be completely blank so that I can start afresh.


----------



## wblock@ (Jul 23, 2012)

No, clearing just the first block won't get everything.  GEOM metadata (glabel(8), gmirror(8), lots of others) is stored in the last block of the device.  GPT partition tables are usually 34 blocks long, and at both the beginning and end of the drive.
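As a rough sketch (not from the thread) of clearing both ends in one go, demonstrated here on a scratch image file standing in for the disk; on real hardware you would point DISK at /dev/adaX, which is destructive:

```shell
# Demo on an image file; on a real disk, set DISK=/dev/adaX (destroys data!)
DISK=/tmp/scratch.img
SECTORS=2048                        # on real hardware: diskinfo's sector count
dd if=/dev/zero of=${DISK} bs=512 count=${SECTORS} 2>/dev/null  # fake 1M disk
# GPT tables occupy roughly the first and last 34 sectors, and GEOM metadata
# (glabel, gmirror, ...) lives in the last sector, so zero both ends:
dd if=/dev/zero of=${DISK} bs=512 count=34 conv=notrunc 2>/dev/null
dd if=/dev/zero of=${DISK} bs=512 seek=$((SECTORS - 34)) conv=notrunc 2>/dev/null
```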


----------



## xy16644 (Jul 24, 2012)

Thanks wblock@. I have two disks that have metadata stored on them so I will try these commands out on the weekend.

On another note:

Can one label disks that are going to be used with ZFS? Or is it better to use the /dev/adaX names?


----------



## xy16644 (Jul 27, 2012)

wblock@: I can confirm that your commands worked very nicely to completely wipe my disks. Thank you.


----------



## xy16644 (Jul 28, 2012)

I am interested in adding a hardware encryption card to my server, since my CPU does not support AES-NI instructions. Would the following card work with GELI encryption?

Soekris vpn 1401


----------



## xy16644 (Jul 29, 2012)

Is it possible to use labels with GPT, ZFS and GELI? I am having difficulty getting this to all work together!

(I have only included the steps that reference drives/partitions)

I've created the partitions as follows:

```
gpart add -s 128 -t freebsd-boot -l BootLoader0 ada0
gpart add -s 128 -t freebsd-boot -l BootLoader1 ada1
gpart add -s 10G -t freebsd-zfs -l BootPartition0 ada0
gpart add -s 10G -t freebsd-zfs -l BootPartition1 ada1
gpart add -t freebsd-zfs -l RootPartition0 ada0
gpart add -t freebsd-zfs -l RootPartition1 ada1
```

Create Boot pool:

```
zpool create bootdir mirror /dev/gpt/BootPartition0 /dev/gpt/BootPartition1
```

Do the GELI stuff:

```
dd if=/dev/random of=/boot/zfs/bootdir/encryption.key bs=4096 count=1

geli init -b -B /boot/zfs/bootdir/RootPartition0.eli -e AES-XTS -K /boot/zfs/bootdir/encryption.key -l 256 -s 4096 /dev/gpt/RootPartition0

geli init -b -B /boot/zfs/bootdir/RootPartition1.eli -e AES-XTS -K /boot/zfs/bootdir/encryption.key -l 256 -s 4096 /dev/gpt/RootPartition1

geli attach -k /boot/zfs/bootdir/encryption.key /dev/gpt/RootPartition0

geli attach -k /boot/zfs/bootdir/encryption.key /dev/gpt/RootPartition1

zpool create zroot mirror /dev/gpt/RootPartition0.eli /dev/gpt/RootPartition1.eli
```

/boot/loader.conf:


```
geli_RootPartition0_keyfile0_load="YES"

geli_RootPartition0_keyfile0_type="RootPartition0:geli_keyfile0"

geli_RootPartition0_keyfile0_name="/boot/encryption.key"

geli_RootPartition1_keyfile0_load="YES"

geli_RootPartition1_keyfile0_type="RootPartition1:geli_keyfile0"

geli_RootPartition1_keyfile0_name="/boot/encryption.key"
```

After I rebooted, the system references the hard drives as ada0p3 and ada1p3 instead of the label names during boot.

Is there something special I have to do to get labels, GPT, ZFS and GELI to all work together? If I follow my guide and use the hard drive names (i.e. non-label names), then everything works fine.

Can someone help me out as to where I am going wrong? I can provide more info if the above is not sufficient.

Thank you!


----------

