# confused about free disk space



## kel (Dec 5, 2022)

I'm not sure what I'm missing, I keep running out of disk space when I shouldn't. I have a cloud-based VM running FreeBSD 12.3 with 80GB of storage. I am using ZFS and GPT partitioning.

```
#gpart show
=>       40  167772080  vtbd0  GPT  (80G)
         40        512      1  freebsd-boot  (256K)
        552    4194304      2  freebsd-swap  (2.0G)
    4194856  163577264      3  freebsd-zfs  (78G)
```

When I run `zpool list` it shows that the pool is almost completely full:


```
#zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot  77.5G  74.2G  3.25G        -         -    66%    95%  1.00x  ONLINE  -
```

When I run df it shows a much smaller filesystem. As I delete files to free space, the root filesystem also shrinks and always shows 95% full no matter what I do. Where has all that free space gone???


```
#df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default           18G     17G    852M    95%    /
devfs                       1.0K    1.0K      0B   100%    /dev
fdescfs                     1.0K    1.0K      0B   100%    /dev/fd
zroot/usr/ports             852M     88K    852M     0%    /usr/ports
zroot/usr/src               852M     88K    852M     0%    /usr/src
zroot/var/crash             852M     88K    852M     0%    /var/crash
zroot/var/log               1.1G    238M    852M    22%    /var/log
zroot/var/mail              1.5G    722M    852M    46%    /var/mail
zroot/tmp                   853M    1.4M    852M     0%    /tmp
zroot/var/audit             852M     88K    852M     0%    /var/audit
zroot/var/tmp               853M    1.0M    852M     0%    /var/tmp
zroot/usr/home              1.3G    480M    852M    36%    /usr/home
zroot/usr/jails             852M    108K    852M     0%    /usr/jails
zroot/usr/jails/newjail     861M    9.3M    852M     1%    /usr/jails/newjail
zroot/usr/jails/basejail    1.4G    625M    852M    42%    /usr/jails/basejail
/dev/vtbd1                  434K    434K      0B   100%    /var/lib/cloud/seed/config_drive
```

I've run commands like `zpool online -e` but can't seem to figure out where all the disk space went.


----------



## Profighost (Dec 5, 2022)

Check the snapshots, could be a clue.

As far as I understand ZFS, if there are any snapshots it reserves space for them.
It's not the snapshots themselves occupying the space, of course,
but if you made snapshots containing (large) files which have since been deleted,
ZFS still reserves that space so you could restore the snapshots without running out of space.

You may also see if you can free some space by doing a `pkg clean`.

You may also consider moving your root to a larger pool anyhow.
It's recommended to keep a pool at no more than 60% capacity to ensure smooth operation.
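To see which snapshots are actually holding space, something like this (a sketch, assuming the default zroot pool name) lists snapshots sorted by how much space each one pins:

```sh
# Snapshots sorted by the space they hold exclusively (USED column).
zfs list -t snapshot -o name,used,refer -s used

# Per-dataset breakdown: USEDSNAP is the space held only by snapshots.
zfs list -o space -r zroot
```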


----------



## Alain De Vos (Dec 5, 2022)

df is a good command on a UFS filesystem, but not on ZFS. What is the output of
What is the output of

```
zfs list -o space
```


----------



## mer (Dec 5, 2022)

What is this cloud-based VM running? Databases? The default location of, say, MySQL or other databases may wind up under /.
Snapshots/clones sound like a possible explanation.

```
zfs list -r -t snap
```


----------



## kel (Dec 5, 2022)

Thanks, I'm checking on the ZFS snapshots. In the meantime I did a `pkg clean`, which freed up about 1GB of space. But now when I run df the filesystem just looks smaller and is still 95% full:


```
#df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default           17G     16G    826M    95%    /
devfs                       1.0K    1.0K      0B   100%    /dev
fdescfs                     1.0K    1.0K      0B   100%    /dev/fd
zroot/usr/ports             826M     88K    826M     0%    /usr/ports
zroot/usr/src               826M     88K    826M     0%    /usr/src
zroot/var/crash             826M     88K    826M     0%    /var/crash
zroot/var/log               1.0G    238M    826M    22%    /var/log
zroot/var/mail              1.5G    722M    826M    47%    /var/mail
zroot/tmp                   828M    1.4M    826M     0%    /tmp
zroot/var/audit             826M     88K    826M     0%    /var/audit
zroot/var/tmp               827M    1.0M    826M     0%    /var/tmp
zroot/usr/home              1.3G    480M    826M    37%    /usr/home
zroot/usr/jails             826M    108K    826M     0%    /usr/jails
zroot/usr/jails/newjail     836M    9.3M    826M     1%    /usr/jails/newjail
zroot/usr/jails/basejail    1.4G    625M    826M    43%    /usr/jails/basejail
/dev/vtbd1                  434K    434K      0B   100%    /var/lib/cloud/seed/config_drive
```

Here's what I'm seeing for snapshots; I think I need to dig into this more. I have not manually created any snapshots, so I wonder if freebsd-update(8) is doing this automatically somehow.


```
#zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot/ROOT/default@base_installation           500K      -  2.07G  -
zroot/ROOT/default@digitalocean_installation   289M      -  2.38G  -
zroot/ROOT/default@2022-08-09-22:42:54-0      9.97G      -  32.2G  -
zroot/ROOT/default@2022-08-31-09:34:07-0      10.6G      -  33.0G  -
zroot/ROOT/default@2022-11-20-20:57:29-0      9.56G      -  32.9G  -
zroot/ROOT/default@2022-12-01-22:37:27-0      7.21G      -  32.3G  -
zroot/usr/home@digitalocean_installation        56K      -    88K  -
zroot/usr/jails/basejail@20181010_22:44:04      72K      -    88K  -
zroot/usr/jails/basejail@20181010_22:54:30        0      -   624M  -
zroot/usr/jails/basejail@20190215_12:24:04        0      -   624M  -
zroot/usr/jails/newjail@20181010_22:54:30        8K      -  9.34M  -
zroot/usr/jails/newjail@20190215_12:24:04        8K      -  9.34M  -
zroot/usr/ports@base_installation                 0      -    88K  -
zroot/usr/src@base_installation                   0      -    88K  -
zroot/usr/src@digitalocean_installation           0      -    88K  -
```


----------



## kel (Dec 5, 2022)

Alain De Vos said:


> df is a good command on ufs-filesystem but not on zfs-filesystem.
> What is the output of
> 
> ```
> ...



Here's the output:


```
#zfs list -o space
NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot                                          826M  74.3G         0     88K              0      74.3G
zroot/ROOT                                     826M  72.1G         0     88K              0      72.1G
zroot/ROOT/12.3-RELEASE-p5_2022-08-09_224254   826M     8K         0      8K              0          0
zroot/ROOT/12.3-RELEASE-p6_2022-08-31_093407   826M     8K         0      8K              0          0
zroot/ROOT/12.3-RELEASE-p7_2022-11-20_205729   826M     8K         0      8K              0          0
zroot/ROOT/12.3-RELEASE-p9_2022-12-01_223727   826M     8K         0      8K              0          0
zroot/ROOT/default                             826M  72.1G     56.1G   16.0G              0          0
zroot/tmp                                      826M  1.44M         0   1.44M              0          0
zroot/usr                                      826M  1.16G         0     88K              0      1.16G
zroot/usr/home                                 826M   480M       56K    480M              0          0
zroot/usr/jails                                826M   704M         0    108K              0       704M
zroot/usr/jails/basejail                       826M   695M     70.3M    625M              0          0
zroot/usr/jails/newjail                        826M  9.35M       16K   9.34M              0          0
zroot/usr/ports                                826M    88K         0     88K              0          0
zroot/usr/src                                  826M    88K         0     88K              0          0
zroot/var                                      826M   962M         0     88K              0       962M
zroot/var/audit                                826M    88K         0     88K              0          0
zroot/var/crash                                826M    88K         0     88K              0          0
zroot/var/log                                  826M   238M         0    238M              0          0
zroot/var/mail                                 826M   722M         0    722M              0          0
zroot/var/tmp                                  826M  1000K         0   1000K              0          0
```


----------



## kel (Dec 5, 2022)

mer said:


> What is this cloud based vm running?  Databases?  default location of say mysql or other databases may wind up under /.
> Snapshots/clones sound like a possible explaination



Yes, databases and an nginx web server. It's a MySQL db in /var/db/mysql, using only 5.8GB. The nginx data is in /usr/local/www/nginx and is just over 1.1GB.

I'm confused why there are large snapshots taking up so much space when I didn't create them manually.


----------



## Alain De Vos (Dec 5, 2022)

zroot/ROOT/default shows USEDSNAP 56.1G.
Probably you installed some ports.

```
pkg info | wc -l
```
And you should extend your zroot zpool with additional space.
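If the provider has actually grown the virtual disk, the partition and pool can be expanded in place. A sketch, assuming the layout from the gpart output above (pool zroot on partition 3 of vtbd0):

```sh
# Move the backup GPT header to the new end of the disk if needed.
gpart recover vtbd0
# Grow partition index 3 (freebsd-zfs) into the new free space.
gpart resize -i 3 vtbd0
# Tell ZFS to expand the pool onto the grown partition.
zpool online -e zroot vtbd0p3
```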


----------



## mer (Dec 5, 2022)

If you've been doing freebsd-update, then yes, snapshots have been created.
`bectl list` output would help.
/var/db/mysql is by default part of your root dataset, so every boot environment (freebsd-update snapshot) can result in space used (copy-on-write).
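One way to keep that database churn out of future boot environments is to give MySQL its own dataset, which the BE snapshots then won't capture. A rough sketch, with paths assumed from the FreeBSD MySQL port defaults; stop the database first:

```sh
service mysql-server stop
# Move the data aside, create a dedicated dataset, copy the data back.
mv /var/db/mysql /var/db/mysql.old
zfs create -o mountpoint=/var/db/mysql zroot/var/db/mysql
cp -Rp /var/db/mysql.old/ /var/db/mysql/
service mysql-server start
# Once everything checks out: rm -r /var/db/mysql.old
```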


----------



## Alain De Vos (Dec 5, 2022)

His snapshots only take 1G each:

```
zroot/ROOT/12.3-RELEASE-p5_2022-08-09_224254   826M
zroot/ROOT/12.3-RELEASE-p6_2022-08-31_093407   826M
zroot/ROOT/12.3-RELEASE-p7_2022-11-20_205729   826M
zroot/ROOT/12.3-RELEASE-p9_2022-12-01_223727   826M
```

My analysis: something else is eating space.


----------



## SirDice (Dec 5, 2022)

`du -sk /var/* | sort -n`


----------



## kel (Dec 5, 2022)

I wasn't aware that freebsd-update creates snapshots and doesn't seem to tidy up after itself. 


```
#bectl list
BE                                Active Mountpoint Space Created
12.3-RELEASE-p5_2022-08-09_224254 -      -          9.97G 2022-08-09 22:42
12.3-RELEASE-p6_2022-08-31_093407 -      -          10.6G 2022-08-31 09:34
12.3-RELEASE-p7_2022-11-20_205729 -      -          9.56G 2022-11-20 20:57
12.3-RELEASE-p9_2022-12-01_223727 -      -          7.23G 2022-12-01 22:37
default                           NR     /          72.2G 2018-07-02 10:09
```

Can I safely remove these snapshots using `zfs destroy` ? 

I don't use ports much but I do have quite a few binary packages installed.


```
#pkg info | wc -l
     128
```


----------



## SirDice (Dec 5, 2022)

kel said:


> Can I safely remove these snapshots using `zfs destroy` ?


Those are best cleaned up with bectl(8). Destroying the boot environment will also remove its snapshots. Check `bectl list`.

But I don't think those are very big, so you may not recover the 'lost' 50-something GB. I suspect it's locked up somewhere else.
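For example, to remove the oldest boot environment together with the snapshot it originated from (name taken from the bectl list output above):

```sh
# -o also destroys the origin snapshot the BE was cloned from.
bectl destroy -o 12.3-RELEASE-p5_2022-08-09_224254
```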


----------



## jbo (Dec 5, 2022)

kel said:


> I wasn't aware that freebsd-update creates snapshots and doesn't seem to tidy up after itself.


As far as I know, there is no decent way to determine whether an update succeeded other than the user actually using the system. Therefore, it would be hard to implement a sane (!) set of criteria after which a snapshot could be removed automatically.


----------



## kel (Dec 6, 2022)

That makes sense.

I deleted a few GB of old backups to free more space, and now my / has shrunk again, down to 15G and still showing 95% capacity. I don't understand why it keeps shrinking. It's an 80G disk that now shows just 15G, on the same system that showed 18G this morning:


```
#df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default           15G     14G    735M    95%    /
devfs                       1.0K    1.0K      0B   100%    /dev
fdescfs                     1.0K    1.0K      0B   100%    /dev/fd
zroot/usr/ports             735M     88K    735M     0%    /usr/ports
zroot/usr/src               735M     88K    735M     0%    /usr/src
zroot/var/crash             735M     88K    735M     0%    /var/crash
zroot/var/log               973M    238M    735M    24%    /var/log
zroot/var/mail              1.4G    722M    735M    50%    /var/mail
zroot/tmp                   736M    1.4M    735M     0%    /tmp
zroot/var/audit             735M     88K    735M     0%    /var/audit
zroot/var/tmp               736M    1.0M    735M     0%    /var/tmp
zroot/usr/home              1.2G    480M    735M    39%    /usr/home
zroot/usr/jails             735M    108K    735M     0%    /usr/jails
zroot/usr/jails/newjail     744M    9.3M    735M     1%    /usr/jails/newjail
zroot/usr/jails/basejail    1.3G    625M    735M    46%    /usr/jails/basejail
/dev/vtbd1                  434K    434K      0B   100%    /var/lib/cloud/seed/config_drive
```

And now the horror. Right after deleting about 3GB more of old files, and not believing the filesystem was really still 95% full with just 735MB available, I created a 740MB file to test my hypothesis that df was wrong. Nope. The system ran out of disk space, apps crashed, and bad things started happening.

The tale of the incredible shrinking root partition? How is this happening? I may need to abandon this system and quickly migrate the data and apps.


----------



## Alain De Vos (Dec 6, 2022)

To save a few gigs:

```
pkg clean -a
```
To see how many gigs your packages take:

```
du -hs /usr/local
```


----------



## kel (Dec 6, 2022)

I created a new VM of this system from a backup made just 3 days ago. Three days ago the filesystem showed a size of 35G, not the 18G or 15G it was showing today.

The filesystem or partitions must be corrupt. I've never seen this before.


----------



## Profighost (Dec 6, 2022)

The idea that occurs to me spontaneously is that your VM may use a virtual disk with dynamically allocated space,
and the virtualization software decides, since the disk is not full, to reduce its size.
So the problem may not come from FreeBSD but from the 'outside': the VM and/or the host OS's filesystem the disk image is on.

Since your virtual disk lost that much capacity in that short a time, maybe you can run a test and keep a copy of it running for a couple more days.
If the disk space drops below its content, you'd at least know the problem does not come from dynamic allocation...

(And of course, always check the integrity of the hardware.
It's a common issue that people deeply versed in software and operating systems sometimes simply overlook that hardware does not live forever and may even fail while still classified as 'new'.)


----------



## kel (Dec 8, 2022)

I think you're right, the underlying virtual infrastructure is most likely doing something funky.

This system has been on DigitalOcean for the past 5 years and only recently started performing poorly. They dropped support for FreeBSD earlier this year (sadly). I opened a ticket with DigitalOcean support since I believe the problem is on their end, but got only canned responses that didn't address the problem at all. They used to have great FreeBSD support, but times change I guess.

Maybe it's time to look at other cloud providers that do support FreeBSD.


----------

