# Using labels: overhead, or something else?



## fgordon (May 30, 2012)

I recently switched to FreeBSD 9 and to a CPU with AESNI. When I looked at top (geli-based ZFS) I saw something strange. As it's an old install I originally did not use labels, but I do now.

With top -S I saw that the only hard disk using a label causes about 300% more CPU usage than any of the other drives (regardless of the load, it's always roughly 4x more).

Is this just a coincidence? (Changing the hard disk's port made no difference.) I cannot really believe labels produce that much overhead, so is it maybe the combination of geli on a label, with ZFS on top of the labeled geli provider?


```
PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root          4 155 ki31     0K    64K CPU3    3 229:47 216.36% idle
    0 root        156  -8    0     0K  2496K -       0  61:18 69.34% kernel
   12 root         18 -84    -     0K   288K WAIT    0  15:48 20.46% intr
 1696 root          1  33    -     0K    16K geli:w  0  14:38 17.68% g_eli[0] label/zfdi
 1744 root          4  -8    -     0K    80K zio->i  0   9:17 11.28% zfskern
   13 root          3  -8    -     0K    48K -       1   6:42  7.42% geom
 1095 root          1 -16    -     0K    16K crypto  3   5:40  6.40% crypto returns
 1677 root          1  23    -     0K    16K geli:w  3   4:15  5.27% g_eli[0] ada1
 1700 root          1  22    -     0K    16K geli:w  3   4:11  5.08% g_eli[0] ada4
 1689 root          1  23    -     0K    16K geli:w  1   4:12  4.98% g_eli[0] ada3
 1672 root          1  23    -     0K    16K geli:w  0   4:15  4.88% g_eli[0] ada0
 1712 root          1  23    -     0K    16K geli:w  3   4:07  4.88% g_eli[0] ada7
 1724 root          1  23    -     0K    16K geli:w  1   4:10  4.79% g_eli[0] ada12
 1704 root          1  23    -     0K    16K geli:w  3   4:10  4.79% g_eli[0] ada5
 1720 root          1  23    -     0K    16K geli:w  1   4:09  4.79% g_eli[0] ada11
 1708 root          1  23    -     0K    16K geli:w  3   4:09  4.79% g_eli[0] ada6
 1716 root          1  23    -     0K    16K geli:w  1   4:09  4.59% g_eli[0] ada10
 1683 root          1  23    -     0K    16K geli:w  0   4:12  4.39% g_eli[0] ada2
```


----------



## wblock@ (May 30, 2012)

Some specifics might narrow it down.  Show the output, if possible.  Labels should have essentially no overhead.  Were the other drives GELI-encrypted?


----------



## fgordon (May 30, 2012)

Hmm, which specifics?

All hard disks are the exact same model, Samsung HD203WI.
All hard disks are connected to the same controller chips (88SX7042). Changing ports gives the same result: the labeled disk is always CPU-intensive and the others are not, regardless of which SATA port they are on.
All hard disks are encrypted the same way, AES-CBC with a 128-bit key (hardware AESNI).
As you can see in top, all the other drives are geli-encrypted as well, e.g. "g_eli[0] ada0".

Encryption is set up exactly the same way for all disks (geli init via script, attached via script).
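
For reference, the init/attach sequence is essentially the following. This is only a sketch: the keyfile path is a placeholder, and only the AES-CBC/128-bit parameters come from the dmesg output; the real script loops over all disks.

```shell
# Sketch of the per-disk setup (keyfile path is a placeholder).
# One-time initialisation: AES-CBC with a 128-bit key, keyfile only.
geli init -e AES-CBC -l 128 -P -K /root/keys/disk.key /dev/ada0

# On every boot: attach the encrypted provider (creates /dev/ada0.eli).
geli attach -p -k /root/keys/disk.key /dev/ada0

# The labeled disk is set up the same way, just on the label device,
# which is why it appears as label/zfdisk_H69A.eli in the pool:
geli init -e AES-CBC -l 128 -P -K /root/keys/disk.key /dev/label/zfdisk_H69A
geli attach -p -k /root/keys/disk.key /dev/label/zfdisk_H69A
```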

I was just wondering, and because maybe not many people use labeled and "raw" devices in a mix (especially in one ZFS tank), I thought it might be at least worth a look.

Performance is still great. I was just curious about the AESNI effect when I saw this in top -S.

```
cryptosoft0: <software crypto> on motherboard
aesni0: <AES-CBC,AES-XTS> on motherboard
WARNING: TMPFS is considered to be a highly experimental feature in FreeBSD.
re0: link state changed to UP
GEOM_ELI: Device ada0.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada1.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada2.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada3.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device label/zfdisk_H69A.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada4.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada5.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada6.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada7.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada10.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada11.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device ada12.eli created.
GEOM_ELI: Encryption: AES-CBC 128
GEOM_ELI:     Crypto: hardware
ZFS filesystem version 5
ZFS storage pool version 28


===========================================================================

zpool status
  pool: tank
 state: ONLINE
 scan: scrub in progress since Wed May 30 15:37:16 2012
    4,07T scanned out of 17,5T at 672M/s, 5h52m to go
    0 repaired, 23,26% done
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            ada0.eli               ONLINE       0     0     0
            ada12.eli              ONLINE       0     0     0
            ada6.eli               ONLINE       0     0     0
            ada7.eli               ONLINE       0     0     0
            label/zfdisk_H69A.eli  ONLINE       0     0     0
            ada5.eli               ONLINE       0     0     0
            ada11.eli              ONLINE       0     0     0
            ada3.eli               ONLINE       0     0     0
            ada10.eli              ONLINE       0     0     0
            ada1.eli               ONLINE       0     0     0
            ada2.eli               ONLINE       0     0     0
            ada4.eli               ONLINE       0     0     0
```


----------



## bbzz (May 30, 2012)

You know, I noticed a similar thing, but with *gstat*. I use encrypted, labeled GPT partitions. For example:


```
dT: 1.001s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0    0.0  ada0p1
    0      0      0      0    0.0      0      0    0.0    0.0  ada0p2
    0      0      0      0    0.0      0      0    0.0    0.0  ada0p3
    0      0      0      0    0.0      0      0    0.0    0.0  ada0p4
    0      0      0      0    0.0      0      0    0.0    0.0  ada1
    0      4      4     32    5.3      0      0    0.0    2.1  ada2
    0      6      6     48    3.9      0      0    0.0    2.3  ada3
     0    943      0      0    0.0    943 117998    0.9   61.4  ada4
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/system
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/cache
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/log
    0      0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/swap
    0      0      0      0    0.0      0      0    0.0    0.0  ada1p1
    0      4      4     32    5.3      0      0    0.0    2.1  ada2p1
    0      6      6     48    3.9      0      0    0.0    2.3  ada3p1
     0    943      0      0    0.0    943 117998    1.0   65.8  ada4p1
    0      1      1      8    0.1      0      0    0.0    0.0  ada5
    0      1      1      8   14.1      0      0    0.0    1.4  ada6
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/system.eli
    9    931    931 117902   10.5      0      0    0.0  100.0  ada7
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/swap.eli
    0      0      0      0    0.0      0      0    0.0    0.0  gpt/disk6
    0      0      0      0    0.0      0      0    0.0    0.0  da0s1
    0      4      4     32    5.3      0      0    0.0    2.1  gpt/disk0
    0      0      0      0    0.0      0      0    0.0    0.0  da0s2
    0      6      6     48    3.9      0      0    0.0    2.3  gpt/disk1
     0    943      0      0    0.0    943 117998    1.1   70.9  gpt/disk2
    0      1      1      8    1.4      0      0    0.0    0.1  ada5p1
    0      1      1      8   14.1      0      0    0.0    1.4  ada6p1
    0      0      0      0    0.0      0      0    0.0    0.0  da0s1a
   10    930    930 117774   10.6      0      0    0.0   99.9  ada7p1
    0      0      0      0    0.0      0      0    0.0    0.0  da0s2a
    0      1      1      8    1.4      0      0    0.0    0.1  gpt/disk3
    0      1      1      8   14.1      0      0    0.0    1.4  gpt/disk4
   10    930    930 117774   10.7      0      0    0.0   99.9  gpt/disk5
    0      4      4     32    5.4      0      0    0.0    2.2  gpt/disk0.eli
    0      6      6     48    4.1      0      0    0.0    2.5  gpt/disk1.eli
    0      1      1      8    1.9      0      0    0.0    0.2  gpt/disk3.eli
    0      1      1      8   14.3      0      0    0.0    1.4  gpt/disk4.eli
     3    943      0      0    0.0    943 117998    3.4   99.8  gpt/disk2.eli
```

Not a terribly good example since I don't have an encrypted non-labeled disk at the moment, but utilization is always about 10-20% lower. Is this just a coincidence, or can an eli-encrypted disk go above 100%?
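
For context, the layering on the data disks above looks roughly like this (a sketch with placeholder names; the keyfile path is an assumption):

```shell
# Sketch of how one data disk is layered (names are placeholders).
gpart create -s gpt ada4                         # GPT scheme on the raw disk
gpart add -t freebsd-zfs -l disk2 ada4           # one partition, GPT label "disk2"
geli attach -p -k /root/keys/disk2.key gpt/disk2 # -> /dev/gpt/disk2.eli for ZFS
```

That layering is why gstat shows the same I/O three times: on ada4 (the disk), on ada4p1 / gpt/disk2 (the partition and its label alias), and on gpt/disk2.eli (the encrypted provider on top).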


----------



## fgordon (May 30, 2012)

Hehe, I just found it strange because I'm fairly sure using labels is basically only a lookup in an in-memory table.

Maybe the disk is by pure chance not as good as the others, or has some "hidden" hardware issue; stranger things have happened. Unlikely is not impossible.

But I thought I might post it, even if there is only a tiny chance that someone says: ah yes, mixing labeled and unlabeled devices will cause XY.
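
If anyone wants to dig further, a few read-only commands could help separate "this particular disk is slow" from "the label layer costs CPU". This is just a sketch; nothing here modifies the pool:

```shell
# Map the label back to its raw device, then watch both layers.
glabel status          # shows which adaN device sits behind label/zfdisk_H69A
gstat -f 'ada|label'   # per-provider I/O statistics for both layers, side by side
top -SH                # -S: include system processes, -H: one line per thread,
                       # so each g_eli worker shows its own CPU usage
```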


----------

