# df output



## abatie (Jan 30, 2014)

I'm used to df output where "blocks" is the size of the partition.  With ZFS in the past (which for me means OpenSolaris), "blocks" was always the size of the pool for every file system in the pool.  With FreeBSD, that seems not to be the case - the blocks vary, quite a lot even, and bear little resemblance to the size of the pool, which makes space management "interesting".  I'm hoping someone can explain what's going on so I can modify a script I wrote for OpenSolaris to essentially do a pool df, letting me tell how much space is actually getting used in a pool.  Thanks!


```
<zbsd1.rdrop.com> [181] # dmesg | grep sector
da0: 30533MB (62533296 512 byte sectors: 255H 63S/T 3892C)
da1: 30560MB (62586880 512 byte sectors: 255H 63S/T 3895C)
da2: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)
da3: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da4: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da5: 915715MB (1875385008 512 byte sectors: 255H 63S/T 116737C)
da6: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da7: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
<zbsd1.rdrop.com> [182] # zpool status
...
  pool: data
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	data                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    diskid/DISK-%20%20%20%20%20WD-WXJ1E23LHJN3  ONLINE       0     0     0
	    da4                                         ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    da7                                         ONLINE       0     0     0
	    da6                                         ONLINE       0     0     0
```

HJN3 is da3, making data a slightly less than 1T pool, yet:


```
<zbsd1.rdrop.com> [179] # df -k
...
Filesystem                        1K-blocks       Used     Avail Capacity  Mounted on
data                              509912321        521 509911800     0%    /data
data/home                         509911834         36 509911798     0%    /data/home
data/home/alan                    592652246   82740448 509911798    14%    /data/home/alan
...
data/home/web/alan.batie.org      706028736  196116937 509911798    28%    /data/home/web/alan.batie.org
```


----------



## worldi (Jan 30, 2014)

As you have already noticed, df(1) is the wrong tool for this: on ZFS it reports each dataset's own used space plus the pool's shared free space as the "size", which is why the blocks column varies per dataset. Use zfs(8) instead:


```
% zfs get -p -o name,value used 
NAME            VALUE
tank            84168622080
tank/home       55709466624
tank/tmp        6660096
tank/usr        10605862912
tank/usr/obj    937508864
tank/usr/ports  2070507520
tank/usr/src    1091227648
tank/var        1257177088
% zfs get -Hp -o name,value,property used,avail | paste - - | cut -f1-3,5-6 | column -t
tank            84168376320  used  220237430784  available
tank/home       55709478912  used  220237430784  available
tank/tmp        6660096      used  220237430784  available
tank/usr        10605862912  used  220237430784  available
tank/usr/obj    937508864    used  220237430784  available
tank/usr/ports  2070507520   used  220237430784  available
tank/usr/src    1091227648   used  220237430784  available
tank/var        1257177088   used  220237430784  available
%
```
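For the pool-wide view the original post was after, `zpool list` reports totals per pool rather than per dataset. A minimal sketch, assuming a reasonably recent ZFS (the pool name `data` comes from the thread; the awk formatting is mine):

```shell
# Pool-level space summary. -H drops the header and -p prints exact
# byte counts, which makes the output easy to post-process.
zpool list -Hp -o name,size,allocated,free data |
awk '{ printf "%s: %.1fG total, %.1fG used, %.1fG free\n",
       $1, $2/1073741824, $3/1073741824, $4/1073741824 }'
```

Note that `zpool list` counts raw space across all vdevs, so on a mirrored pool it will not match the sum of the `zfs get used` figures exactly.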

Edit:
Note that querying _used_ returns the amount of data on disk, i.e. after compression and deduplication. This value is sometimes useless (e.g. if you plan to copy the data over to another filesystem and want to know how big the other filesystem's partition needs to be). To get the "true" amount of data, use _logicalused_ (aka _lused_).
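To see the gap between the two on one dataset, something like this works; the _used_ value below is from the output above, while the _logicalused_ value is hypothetical, for illustration only:

```shell
# Sketch: how much compression/dedup saved on tank/home. The raw byte
# values would come from, e.g.:
#   zfs get -Hp -o value used,logicalused tank/home
used=55709478912     # on-disk bytes (from the zfs get output above)
lused=78215913472    # logical bytes; hypothetical value for illustration
awk -v u="$used" -v l="$lused" \
    'BEGIN { printf "ratio %.2fx, %.1fG saved\n", l/u, (l - u)/1073741824 }'
# prints: ratio 1.40x, 21.0G saved
```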


----------



## abatie (Jan 31, 2014)

Thanks, I didn't realize there was that much info available there.  Poking around led me to this, which is close to what I want; I may write something that adds up the snapshots, but this will do for now:


```
<zbsd1.rdrop.com> [106] $ zfs list -o space | grep -v /
NAME                              AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
data                               485G   429G       44K    521K              0       429G
vm-images                          765G   110G         0     35K              0       110G
zroot                             24.3G  3.01G         0    144K              0      3.01G
```
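The "adds up the snapshots" part can be sketched with the same machinery, by summing the _usedbysnapshots_ property over every dataset in a pool (the pool name `data` is from the output above; the awk summing is my addition):

```shell
# Total space held by snapshots across the whole pool. -H drops the
# header, -p prints exact bytes, -r recurses into child datasets.
zfs list -Hp -r -o usedbysnapshots data |
awk '{ sum += $1 } END { printf "%.1fG in snapshots\n", sum/1073741824 }'
```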


----------

