# ZFS snapshot management



## Sylgeist (Aug 16, 2011)

Not sure if this should go in a different location - please move if necessary!

I'm using ZFS (obviously) and have set up zfs-snapshot-mgmt. My config is as follows:


```
snapshot_prefix: auto-
filesystems:
  tank/Backups:
    recursive: true
    creation_rule:
      at_multiple: 1440
      offset: 120
    preservation_rules:
      - { for_minutes: 47520, at_multiple: 1440, offset:  120 }
```

Based on the manpage, this should create snapshots once a day (every 1440 minutes) at 2 AM, based on the 120-minute offset from midnight. When I let this run for a couple of days, listing snapshots gives me this:


```
tank/Backups@auto-2011-08-14_20.00
tank/Backups@auto-2011-08-15_20.00
```

I can't determine any pattern that makes sense from that. 20:00 isn't 120 minutes from midnight, or any other obvious time. Can anyone explain what's happening?
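For reference, the creation check in zfs-snapshot-mgmt, per the `condition_met?` code quoted in the patches further down this thread, boils down to modular arithmetic on minutes since the Unix epoch. A minimal sketch:

```ruby
# Simplified sketch of the at_multiple/offset rule check, based on the
# condition_met? code quoted later in this thread. time_minutes is
# minutes since the Unix epoch, which is defined in UTC.
def condition_met?(time_minutes, at_multiple, offset)
  (time_minutes - offset) % at_multiple == 0
end

# 2011-08-15 00:00 UTC is minute 21_889_440, an exact multiple of 1440,
# so with at_multiple: 1440 and offset: 120 the rule fires at 02:00 UTC
# rather than 02:00 local time.
condition_met?(21_889_440 + 120, 1440, 120)  # => true
```

That UTC anchoring of the minute counter turns out to be the key to the behaviour reported here.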


----------



## Sebulon (Aug 16, 2011)

Hey,

I have actually noticed something similar. My config looks like this:

```
snapshot_prefix: auto-
filesystems:
  pool1/root:
    # Create snapshots recursively for all filesystems mounted under this one
    recursive: true
    # Create snapshots every 60 minutes, starting at midnight
    creation_rule:
      at_multiple: 60
      offset: 0
    preservation_rules:
      - { for_minutes:   300, at_multiple:    60, offset: 0 }
      - { for_minutes: 10080, at_multiple:  1440, offset: 0 }
      - { for_minutes: 40320, at_multiple: 10080, offset: 0 }
```

So the daily and weekly snapshots should be created at 00:00, but:

```
# zfs list -t snapshot
NAME                                                           USED  AVAIL  REFER  MOUNTPOINT
pool1/root@auto-2011-08-04_02.00                              3.84M      -   303M  -
pool1/root@auto-2011-08-10_02.00                               287K      -   300M  -
pool1/root@auto-2011-08-11_02.00                               267K      -   300M  -
pool1/root@auto-2011-08-12_02.00                               258K      -   300M  -
pool1/root@auto-2011-08-13_02.00                               256K      -   300M  -
pool1/root@auto-2011-08-14_02.00                               240K      -   300M  -
pool1/root@auto-2011-08-15_02.00                               242K      -   300M  -
pool1/root@auto-2011-08-16_02.00                               137K      -   300M  -
pool1/root@auto-2011-08-16_03.00                               137K      -   300M  -
pool1/root@auto-2011-08-16_04.00                              65.1K      -   300M  -
pool1/root@auto-2011-08-16_05.00                               128K      -   300M  -
pool1/root@auto-2011-08-16_06.00                               137K      -   300M  -
pool1/root@auto-2011-08-16_07.00                               137K      -   300M  -
```
Instead, they are created at 02:00. I thought I was doing something wrong at first, but this worked as intended before upgrading to ZFS v28. Also, the date and time are correct, so it's not that.

/Sebulon


----------



## graudeejs (Aug 16, 2011)

NOTE: zpool v28 has one "bug". I'm not sure whether it applies to zfs-snapshot-mgmt (if it tries to delete snapshots recursively, it may fail on a filesystem that doesn't have the snapshot), but check out the description:
http://wiki.bsdroot.lv/zfsnap#zpool_v28_zfs_destroy_-r_bug


----------



## poh-poh (Aug 16, 2011)

Have you accounted for the timezone offset? The tool uses raw minutes from the config without parsing them with Time.mktime.
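In other words (a hypothetical illustration, assuming a UTC-06:00 zone like Sylgeist's): the minute counter the tool works with crosses a multiple-of-1440 boundary at midnight UTC, which is not local midnight:

```ruby
# Minutes since the epoch are counted in UTC, so a multiple-of-1440
# boundary is midnight UTC -- not midnight local time.
t = Time.utc(2011, 8, 15)               # a rule boundary: midnight UTC
t.to_i / 60 % 1440                      # => 0
t.getlocal('-06:00').strftime('%H:%M')  # => "18:00" in a UTC-06:00 zone
```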


----------



## Sylgeist (Aug 16, 2011)

poh-poh,

I think you might be right. At first I couldn't see how the time matched up with a UTC offset, but if I'm doing the numbers right, it does make sense.

I'm in the NorthAm/MDT time zone, which is -06:00 from UTC. Ignoring my zfs-snapshot-mgmt offset of 120 minutes, my snapshots would take place at midnight UTC, which is 18:00 local time. Add 2 hours for my offset and you get the 20:00.
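As a sanity check, that arithmetic can be written out (Ruby sketch; the numbers come from the post above):

```ruby
utc_offset_min = -360  # MDT is UTC-06:00
config_offset  = 120   # the offset from the config above
# The rule fires at multiples of 1440 UTC minutes plus the config offset;
# expressed as local wall-clock time, that is:
local_minute = (config_offset + utc_offset_min) % 1440
format('%02d:%02d', local_minute / 60, local_minute % 60)  # => "20:00"
```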

Is there a different Ruby function that would return local time instead? I don't mind putting in a 360-minute offset to correct my issue, but what happens with daylight saving time? How difficult would it be to add a timezone option to the config? I'm not very familiar with Ruby.


----------



## Sebulon (Aug 16, 2011)

Ok,

but why were they taken at 00:00 before, but after upgrade, they are instead taken at 02:00?

/Sebulon


----------



## poh-poh (Aug 16, 2011)

Sylgeist said:

> How difficult would it be to put a config option in with timezone? I'm not very familiar with Ruby.

Neither am I; the timezone can be specified via TZ or /etc/localtime, see tzset(3).


```
Index: sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt
===================================================================
RCS file: /a/.csup/ports/sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt,v
retrieving revision 1.1
diff -u -p -r1.1 patch-zfs-snapshot-mgmt
--- sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	11 Jan 2010 03:42:14 -0000	1.1
+++ sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	17 Aug 2011 18:13:47 -0000
@@ -1,5 +1,14 @@
---- zfs-snapshot-mgmt~
-+++ zfs-snapshot-mgmt
+--- zfs-snapshot-mgmt.orig	2011-08-17 00:15:42.801630764 +0400
++++ zfs-snapshot-mgmt	2011-08-17 00:20:17.751629671 +0400
+@@ -44,7 +44,7 @@ class Rule
+   def initialize(args = {})
+     args = { 'offset' => 0 }.merge(args)
+     @at_multiple = args['at_multiple'].to_i
+-    @offset = args['offset'].to_i
++    @offset = args['offset'].to_i - $utc_offset
+   end
+ 
+   def condition_met?(time_minutes)
 @@ -154,7 +154,11 @@ class FSInfo
    end
  
@@ -12,3 +21,11 @@
    end
  
  private
+@@ -194,6 +198,7 @@ class Config
+ 
+ end
+ 
++$utc_offset = Time.now.utc_offset / 60
+ config_yaml = File.open(CONFIG_FILE_NAME).read(CONFIG_SIZE_MAX)
+ die "Config file too long" if config_yaml.nil?
+ config = Config.new(YAML::load(config_yaml))
```
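The patch leans on Ruby's `Time#utc_offset`, which returns the local zone's offset from UTC in seconds. For example (using America/Denver as a hypothetical stand-in for Sylgeist's MDT zone; this assumes the system has a timezone database):

```ruby
# Time#utc_offset returns seconds east of UTC for the local zone.
ENV['TZ'] = 'America/Denver'   # hypothetical zone matching Sylgeist's setup
t = Time.local(2011, 8, 16)
t.utc_offset        # => -21600 (UTC-06:00 while daylight time is in effect)
t.utc_offset / 60   # => -360, the minute value the patch subtracts
```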


----------



## Sebulon (Aug 16, 2011)

@poh-poh

Do you mean that we should paste that into e.g. *~/zfs-snapshot-mgmt.patch* and then run
`# patch /usr/local/bin/zfs-snapshot-mgmt ~/zfs-snapshot-mgmt.patch`?

And shouldn't this:

```
+@@ -199,6 +203,7 @@ die "Config file too long" if config_yam
```
really say this instead:

```
+@@ -199,6 +203,7 @@ die "Config file too long" if config_yam1
```
?

/Sebulon


----------



## poh-poh (Aug 16, 2011)

It should be applied against port, e.g.
`$ patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch`
And *-p* in diff(1) is known to truncate long lines; this shouldn't be a problem for patch(1).


----------



## Sebulon (Aug 17, 2011)

@poh-poh

Here's what cron has been telling me all night:

```
/usr/local/bin/zfs-snapshot-mgmt:47:in `+': nil can't be coerced into Fixnum (TypeError)
       from /usr/local/bin/zfs-snapshot-mgmt:47:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:118:in `new'
       from /usr/local/bin/zfs-snapshot-mgmt:118:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:188:in `new'
       from /usr/local/bin/zfs-snapshot-mgmt:188:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:25:in `map'
       from /usr/local/bin/zfs-snapshot-mgmt:187:in `each'
       from /usr/local/bin/zfs-snapshot-mgmt:187:in `map'
       from /usr/local/bin/zfs-snapshot-mgmt:187:in `initialize'
       from /usr/local/bin/zfs-snapshot-mgmt:203:in `new'
       from /usr/local/bin/zfs-snapshot-mgmt:203
```
So that was a no-go. I've reverted it again for now.

/Sebulon


----------



## poh-poh (Aug 17, 2011)

Can you try the new version? Sorry for the wasted time.
http://forums.freebsd.org/posthistory.php?p=144140


----------



## Sebulon (Aug 17, 2011)

@poh-poh

It's quite alright, I don't mind.
But I can't follow the link you posted. I lack the necessary privilege, apparently.

/Sebulon


----------



## poh-poh (Aug 17, 2011)

It's the edit history for the 7th post. You can ignore the history and just grab the *modified* patch from the post again.


----------



## Sebulon (Aug 18, 2011)

@poh-poh

I'm stuck. What am I missing?


```
# cd /usr/ports/sysutils/zfs-snapshot-mgmt
# rm -rf /files/pa*
# make deinstall distclean
# cd
# patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt
|===================================================================
|RCS file: /a/.csup/ports/sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt,v
|retrieving revision 1.1
|diff -u -p -r1.1 patch-zfs-snapshot-mgmt
|--- sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	11 Jan 2010 03:42:14 -0000	1.1
|+++ sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	17 Aug 2011 18:13:47 -0000
--------------------------
File to patch:
```
Or:

```
# cd /usr/ports/sysutils/zfs-snapshot-mgmt
# rm -rf files/pa*
# make deinstall distclean install
# cd
# patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt
|===================================================================
|RCS file: /a/.csup/ports/sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt,v
|retrieving revision 1.1
|diff -u -p -r1.1 patch-zfs-snapshot-mgmt
|--- sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	11 Jan 2010 03:42:14 -0000	1.1
|+++ sysutils/zfs-snapshot-mgmt/files/patch-zfs-snapshot-mgmt	17 Aug 2011 18:13:47 -0000
--------------------------
File to patch:
```
Same same.

/Sebulon


----------



## poh-poh (Aug 18, 2011)

You didn't restore the /usr/ports/sysutils/zfs-snapshot-mgmt/files/pa* files after removing them.


----------



## Sebulon (Aug 18, 2011)

@poh-poh

I'm sorry, did I have to save the original patch as well? I thought you meant that I should start over with the updated patch instead.


```
# make deinstall distclean
# touch files/patch-zfs-snapshot-mgmt
# cd
# patch -d /usr/ports -i ~/zfs-snapshot-mgmt.patch
# cd /usr/ports/sysutils/zfs-snapshot-mgmt
# make install
===>  License check disabled, port has not defined LICENSE
=> zfs-snapshot-mgmt-20090201.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch http://marcin.studio4plus.com/files/zfs-snapshot-mgmt-20090201.tar.gz
zfs-snapshot-mgmt-20090201.tar.gz             100% of 4903  B   93 kBps
===>  Extracting for zfs-snapshot-mgmt-20090201_1
=> SHA256 Checksum OK for zfs-snapshot-mgmt-20090201.tar.gz.
===>  Patching for zfs-snapshot-mgmt-20090201_1
===>  Applying FreeBSD patches for zfs-snapshot-mgmt-20090201_1
No file to patch.  Skipping...
1 out of 1 hunks ignored--saving rejects to Oops.rej
=> Patch patch-zfs-snapshot-mgmt failed to apply cleanly.
*** Error code 1

Stop in /usr/ports/sysutils/zfs-snapshot-mgmt.
```
/usr/ports/sysutils/zfs-snapshot-mgmt/work/zfs-snapshot-mgmt-20090201/Oops.rej

```
***************
*** 194,199 ****
  
  end
  
  config_yaml = File.open(CONFIG_FILE_NAME).read(CONFIG_SIZE_MAX)
  die "Config file too long" if config_yaml.nil?
  config = Config.new(YAML::load(config_yaml))
--- 198,204 ----
  
  end
  
+ $utc_offset = Time.now.utc_offset / 60
  config_yaml = File.open(CONFIG_FILE_NAME).read(CONFIG_SIZE_MAX)
  die "Config file too long" if config_yaml.nil?
  config = Config.new(YAML::load(config_yaml))
```

/Sebulon


----------



## poh-poh (Aug 18, 2011)

Sebulon said:

> `# touch files/patch-zfs-snapshot-mgmt`


No, restore them the same way you update ports, e.g. tar.gz/portsnap/csup/`cvs up`/git/etc., or just grab the file from cvsweb.

This may not be obvious, but you're supposed to make backups while testing. Back up the ports tree before applying the patch
`$ zfs snapshot tank/usr/ports@before_test`
and the automatic snapshots themselves

```
$ zfs list -Ht snapshot -o name |
sed -n '/@auto-/ { p; s|/|_|g; p; }' |
while read snapshot; read file; do
    zfs send -p $snapshot >/var/backups/$file
done
```
as they could be destroyed by wrong preservation rules.


----------



## Sebulon (Aug 19, 2011)

@poh-poh

It works, it works, it flippin' works, look:

```
# zfs list -t snapshot
NAME                                                           USED  AVAIL  REFER  MOUNTPOINT
pool1/root@auto-2011-08-19_00.00                               126K      -   300M  -
pool1/root@auto-2011-08-19_03.00                               126K      -   300M  -
pool1/root@auto-2011-08-19_04.00                                  0      -   300M  -
pool1/root@auto-2011-08-19_05.00                               117K      -   300M  -
pool1/root@auto-2011-08-19_06.00                               126K      -   300M  -
pool1/root@auto-2011-08-19_07.00                               126K      -   300M  -
```
Do you think your patch could be merged into an update, or do I have to save and reapply your patch every time there is an update to the application? Perhaps there are more people with this problem who could benefit from it?

/Sebulon


----------



## poh-poh (Aug 19, 2011)

I've submitted it as PR ports/159874. Two weeks for the maintainer timeout, plus an indefinite wait until a committer picks it up.


----------



## Sebulon (Aug 19, 2011)

@poh-poh

Nice man, thanks a bunch!

/Sebulon


----------



## curlymo (Jun 3, 2017)

Just to inform others about my experience with this issue: the patch by poh-poh isn't complete, because it doesn't take daylight saving time into consideration.

The situation I bumped into that had me chasing this bug: I wanted to keep a snapshot from February while running the tool in June. In the Netherlands, June uses DST and February doesn't. When the tool runs in June, the global $utc_offset proposed by poh-poh is set to 2 hours, while the offset in February should be 1 hour.

This patch calculates the offset for the day the snapshot was created, not for the day the tool was executed, when removing snapshots. Additionally, it stores the creation timestamp as early as possible. When creating a huge number of snapshots, a second could pass between the initial call of zfs-snapshot-mgmt and the actual creation of the snapshots, producing snapshots that are off by one second or more. The creation timestamp is now stored at the beginning and reused throughout the program.

```
--- /usr/local/bin/zfs-snapshot-mgmt    2017-04-30 11:45:53.000000000 +0200
+++ /usr/local/bin/zfs-snapshot-mgmt    2017-06-04 09:12:16.838555000 +0200
@@ -24,18 +24,20 @@
 require 'yaml'
 require 'time'

+$now_timestamp = Time.now;
+
 CONFIG_FILE_NAME = '/usr/local/etc/zfs-snapshot-mgmt.conf'
 CONFIG_SIZE_MAX = 64 * 1024     # just a safety limit

 def encode_time(time)
-  time.strftime('%Y-%m-%d_%H.%M')
+  time.strftime('%Y.%m.%d-%H.%M.%S')
 end

 def decode_time(time_string)
-  date, time = time_string.split('_')
-  year, month, day = date.split('-')
-  hour, minute = time.split('.')
-  Time.mktime(year, month, day, hour, minute)
+  date, time = time_string.split('-')
+  year, month, day = date.split('.')
+  hour, minute, second  = time.split('.')
+  Time.mktime(year, month, day, hour, minute, second)
 end

 class Rule
@@ -49,7 +51,7 @@

   def condition_met?(time_minutes)
     divisor = @at_multiple
-    (divisor == 0) or ((time_minutes - @offset) % divisor) == 0
+    (divisor == 0) or (((time_minutes+(Time.at(time_minutes*60).utc_offset / 60)) - @offset) % divisor) == 0
   end
 end

@@ -83,7 +85,7 @@
   end

   def self.new_snapshot(fs_name, snapshot_prefix)
-    name = snapshot_prefix + encode_time(Time.now)
+    name = snapshot_prefix + encode_time($now_timestamp)
     SnapshotInfo.new(name, fs_name, snapshot_prefix)
   end

@@ -202,7 +204,7 @@
 die "Config file too long" if config_yaml.nil?
 config = ZConfig.new(YAML::load(config_yaml))

-now_minutes = Time.now.to_i / 60
+now_minutes = $now_timestamp.to_i / 60

 # A simple way of avoiding interaction with zpool scrubbing
 busy_pools = config.busy_pools
```
I'm storing snapshots in the format GMT-2017.06.04-10.54.00, so the prefix I use is GMT-.
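For what it's worth, the `encode_time`/`decode_time` pair from the patch above round-trips cleanly:

```ruby
# encode_time/decode_time as defined in the patch above.
def encode_time(time)
  time.strftime('%Y.%m.%d-%H.%M.%S')
end

def decode_time(time_string)
  date, time = time_string.split('-')
  year, month, day = date.split('.')
  hour, minute, second = time.split('.')
  # Time.mktime accepts string arguments and coerces them to integers.
  Time.mktime(year, month, day, hour, minute, second)
end

t = Time.mktime(2017, 6, 4, 10, 54, 0)
encode_time(t)                    # => "2017.06.04-10.54.00"
decode_time(encode_time(t)) == t  # => true
```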


----------

