# ZFS: How to properly remove unnecessary snapshots and not damage data?



## ogogon (Jun 8, 2022)

Colleagues, please tell me how I can remove unnecessary snapshots without making a mistake.

Apparently, every binary update of the operating system created a snapshot on zroot.


```
ogogon@server:/tmp# zfs list
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot                                          899G  38,3M    88K  /zroot
zroot/ROOT                                     899G  38,3M    88K  none
zroot/ROOT/12.3-RELEASE-p1_2022-05-02_184810     8K  38,3M   893G  /
zroot/ROOT/12.3-RELEASE_2022-01-13_095203        8K  38,3M   894G  /
zroot/ROOT/default                             899G  38,3M   736G  /
zroot/tmp                                     9,19M  38,3M  9,19M  /tmp
zroot/usr                                      264K  38,3M    88K  /usr
zroot/usr/home                                  88K  38,3M    88K  /usr/home
zroot/usr/src                                   88K  38,3M    88K  /usr/src
zroot/var                                     21,3M  38,3M    88K  /var
zroot/var/audit                                 88K  38,3M    88K  /var/audit
zroot/var/crash                                 88K  38,3M    88K  /var/crash
zroot/var/log                                 8,55M  38,3M  8,55M  /var/log
zroot/var/mail                                 120K  38,3M   120K  /var/mail
zroot/var/tmp                                 12,4M  38,3M  12,4M  /var/tmp
ogogon@server:/tmp# zfs list -t snapshot
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
zroot/ROOT/default@2022-01-13-09:52:03-0  2,49G      -   894G  -
zroot/ROOT/default@2022-05-02-18:48:10-0  1,22G      -   893G  -
ogogon@server:/tmp#
```

Apparently, these snapshots take up a lot of space, and besides, there is no need for them: the binary updates went well and I never needed to return to the previous points.

I tried deleting these snapshots, but I don't want the current contents of zroot to change; they suit me just fine.


```
ogogon@server:/tmp# zfs destroy zroot/ROOT/default@2022-05-02-18:48:10-0
cannot destroy 'zroot/ROOT/default@2022-05-02-18:48:10-0': snapshot has dependent clones
use '-R' to destroy the following datasets:
zroot/ROOT/12.3-RELEASE-p1_2022-05-02_184810
ogogon@server:/tmp#
```

Let me remind you once again: I don't need these snapshots and I just want to get rid of them without affecting the current state of the file system.

I don't really understand what "dataset" means in this context, and I'm unsure of my actions.
If I use the -R option, will it do what I want? It is very important for me not to run into problems with the current zroot contents. They must not change!

Grateful for the answer,
Ogogon.


----------



## SirDice (Jun 8, 2022)

First check which boot environments you have and which one is being loaded; `bectl list`. It looks like you have three (default, 12.3-RELEASE_2022-01-13_095203 and 12.3-RELEASE-p1_2022-05-02_184810). You want to clean those up first.


----------



## mer (Jun 8, 2022)

You are using ZFS as your root filesystem, correct?
You are using Boot Environments, correct?
You do understand that a Boot Environment is a "clone" under the hood?
You do understand that a ZFS clone is based off a ZFS snapshot?

Those questions out of the way, you currently have 3 Boot Environments (bectl list or beadm list will verify).
Those 2 snapshots are your Boot Environments that are NOT named "default".

You can probably get rid of one of them, BUT you need to do it the right way: by destroying one of your Boot Environments.
As root, run `bectl list` or `beadm list` to figure out which one is your active Boot Environment; look at the Active column. The line with "NR" and a mountpoint of "/" is the currently active BE.
Take a look at the created date; I'm guessing that "default" is the oldest, probably from the original install.
If it is not the active BE you can probably do:

```
bectl destroy -o default
```
or
```
beadm destroy default
```

NOTE/Question:
Do you have databases or something else on the system?  I'd check the configuration there because the Used and Refer values on the zroot datasets (Boot Environments) seem pretty high.


----------



## Andriy (Jun 8, 2022)

As has already been suggested, those snapshots are branch points for the boot environment filesystems.
Those are created via cloning.
You cannot remove those snapshots without removing the corresponding boot environments.


----------



## sko (Jun 8, 2022)

If you have to reverse an origin/clone (parent/child) relationship, use `zfs promote`.

zfs(8)


> zfs promote clone-filesystem
> Promotes a clone file system to no longer be dependent on its "origin"
> snapshot.  This makes it possible to destroy the file system that the
> clone was created from.  The clone parent-child dependency relationship
> ...
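
Concretely, a minimal sketch of how promotion reverses the dependency (the clone name `zroot/ROOT/newbe` and snapshot name `@snap` below are hypothetical, not taken from this thread):

```
# Suppose zroot/ROOT/newbe was cloned from zroot/ROOT/default@snap.
# Promote the clone so it takes ownership of the origin snapshot:
zfs promote zroot/ROOT/newbe

# The snapshot is now zroot/ROOT/newbe@snap, and zroot/ROOT/default
# no longer has dependent clones; verify with:
zfs list -t snapshot -o name,clones
```

Note that within a boot-environment setup, bectl/beadm are usually the safer tools to manage these relationships.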


----------



## ogogon (Jun 8, 2022)

SirDice said:


> First check which boot environments you have and which one is being loaded; `bectl list`. It looks like you have three (default, 12.3-RELEASE_2022-01-13_095203 and 12.3-RELEASE-p1_2022-05-02_184810). You want to clean those up first.


Thanks. Here is the output of this command:

```
ogogon@server:/tmp# bectl list
BE                                Active Mountpoint Space Created
12.3-RELEASE-p1_2022-05-02_184810 -      -          1.22G 2022-05-02 18:48
12.3-RELEASE_2022-01-13_095203    -      -          2.49G 2022-01-13 09:52
default                           NR     /          899G  2017-01-01 03:51
ogogon@server:/tmp#
```

I assume that the default environment is the one being booted.


----------



## Lamia (Jun 8, 2022)

You should now be able to delete zroot/ROOT/default@2022-05-02-18:48:10-0, given that it is not the active or running environment. Try beadm/bectl destroy. `zfs promote` followed by destroy may also work.


----------



## sko (Jun 8, 2022)

If those snapshots/clones were created by bectl/beadm (freebsd-update leverages bectl behind the scenes), stick with those tools to manage them. There might be some additional logic involved (e.g. corresponding snapshots of other datasets) which also can/need to be removed.

To prevent automatic creation of boot environments by freebsd-update in the future (e.g. because you create and manage them manually), edit freebsd-update.conf(5) and add "CreateBootEnv no".
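
For example, the relevant lines in /etc/freebsd-update.conf would look something like this (a sketch; check freebsd-update.conf(5) on your release for the exact syntax):

```
# /etc/freebsd-update.conf
# Do not create a new boot environment when installing patches
CreateBootEnv no
```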


----------



## ogogon (Jun 8, 2022)

mer said:


> You are using ZFS as your root filesystem, correct?


Without any doubt.



mer said:


> You are using Boot Environments, correct?


I guess I've always used them, but I only became clearly aware of it now.



mer said:


> You do understand that a Boot Environment is a "clone" under the hood?
> You do understand that a ZFS clone is based off a ZFS snapshot?


I'm just starting to delve into this technology, and it's still not very clear to me...



mer said:


> Those questions out of the way, you currently have 3 Boot Environments (bectl list or beadm list will verify).
> Those 2 snapshots are your Boot Environments that are NOT named "default".
> 
> You can probably get rid of one of them BUT you need to do it the right way, by destroying one of your Boot Environments.


I think I should keep the default boot environment and remove the other two, which have long, unwieldy names.

Do I understand correctly that I need to issue commands for this:

```
bectl destroy -o 12.3-RELEASE-p1_2022-05-02_184810
bectl destroy -o 12.3-RELEASE_2022-01-13_095203
```
or

```
zfs destroy -R zroot/ROOT/default@2022-01-13-09:52:03-0
zfs destroy -R zroot/ROOT/default@2022-05-02-18:48:10-0
```
?



mer said:


> NOTE/Question:
> Do you have databases or something else on the system?  I'd check the configuration there because the Used and Refer values on the zroot datasets (Boot Environments) seem pretty high.


There's a huge number of sound files in the Asterisk spool: ten to fifteen thousand a day, and that has gone on for several years.


----------



## ogogon (Jun 8, 2022)

Andriy said:


> As been already suggested, those snapshots are branch points for boot environment filesystems.
> Those are created via cloning.
> You cannot remove those snapshots without removing the corresponding boot environments.


Thanks, I'm starting to understand this...


----------



## mer (Jun 8, 2022)

It looks like "default" is the one you are currently booted into, so you can safely destroy the other ones. But if you look at the Space column from bectl list, you gain maybe 3.5 GB or so.

If the data is in the Asterisk spool, then that is where your space is being used up, as shown by the Space for "default".

I would move those files into their own dataset, off of the root.
In the Asterisk config, what is the directory for storing them?

From the creation dates in your bectl list output, default is probably from your initial install back in 2017, and you should probably be running something more recent, which would be the BE for 12.3 created in May 2022.
Have you tried booting into that and making sure everything works?
In your current BE (default), what is the output of the following command?

```
freebsd-version -kru
```


----------



## Lamia (Jun 8, 2022)

There is no harm in keeping one of those snapshots. They come in handy at the least expected time, particularly when one is stored remotely in a backup and you lose the zpool on the main box, or it crashes for reasons beyond your control.


----------



## grahamperrin@ (Jun 15, 2022)

Lamia said:


> … handy at the least expected time, …



+1 to not destroying boot environments too soon.



ogogon said:


> … Do I understand correctly that I need to issue commands for this: …



`-o` should be unnecessary. 

(If necessary, there'll be a hint at the time of destruction.)
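
As a sketch, using the BE names from the bectl list output earlier in the thread, the cleanup could be as simple as:

```
# Destroy the two non-active boot environments; "default" (the
# active BE, marked NR) is left untouched.
bectl destroy 12.3-RELEASE-p1_2022-05-02_184810
bectl destroy 12.3-RELEASE_2022-01-13_095203
```

If an origin snapshot would be left behind, bectl should print a hint at that point naming the snapshot to destroy by hand.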



ogogon said:


> … I guess I've always used it. …



Automated creation of boot environments is relatively new. I'm surprised that the feature was not mentioned in the release notes for 12.3 or 13.1.









See the thread "Solved - freebsd-update(8) and boot environments" on forums.freebsd.org:

> With freebsd-update(8) in FreeBSD 12.3 and 13.1 on ZFS, a single upgrade will typically add two boot environments. See freebsd-update.conf(5):
>
> ```
> % uname -KU ; tail -n 3 /etc/freebsd-update.conf
> 1400056 1400056
> # Create a new boot environment when installing patches
> # CreateBootEnv yes
> ```


----------



## mer (Jun 15, 2022)

`bectl destroy -o` is the same as `beadm destroy`. Yes, bectl will warn about not destroying the origin, but at that point it has already destroyed the BE (clone), so the user has to explicitly destroy the snapshot by hand: "extra work after the fact".
I think I heard that bectl destroy may wind up automatically applying "-o" to give the same behavior as beadm destroy. That's a good thing in my opinion (decide for yourself how humble it is), because my thinking is:
I do bectl create, then later bectl destroy. Why would I not WANT the snapshot/clone (origin) deleted at the same time? ZFS is copy-on-write, so the BE typically grows in relation to changes; deleting the snapshot simply frees the space accumulated by it.


----------



## Erichans (Jun 15, 2022)

For ZFS disk usage, look at zfs-list(8): `zfs list -o space`
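
For example, a sketch of the space-oriented view, which breaks USED down by what is holding the space (column meanings as described in zfs-list(8)):

```
zfs list -o space zroot/ROOT/default
# Columns: NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
#   USEDSNAP - space consumed by snapshots of the dataset
#   USEDDS   - space used by the dataset itself
#   USEDCHILD - space used by child datasets
```

This makes it easy to see whether the bulk of zroot's usage is live data or snapshot history.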


----------



## grahamperrin@ (Jun 16, 2022)

mer said:


> … Why would I not WANT the snapshots/clone (origin) deleted at the same time? …



Please see:









See the thread "bectl clarifications" on forums.freebsd.org:

> I have some questions regarding bectl after reading the corresponding manual. 1. The section for bectl create tells us that the -r flag creates a recursive boot environment. I'm not sure whether I understand what this exactly implies. So far I've only used bectl create without any additional...


----------



## mer (Jun 16, 2022)

And see my reply in that thread, but keep in mind I am only talking about the snapshots/clones related to a Boot Environment, not about snapshots/clones in general.


----------

