# SNMP disk space



## lbl (Nov 6, 2009)

Hi

I think I need some fresh eyes on this one: I'm trying to calculate the free disk space via SNMP (net-snmp).

But it seems to be way off, and I'm not sure why ...

Any clues?

/lbl


```
[lbl@atom1 ~]$ cat check_snmp_disk 
#!/usr/local/bin/bash

# This script takes:
# <host> <community> <mountpoint> <megs>

snmpwalk="/usr/local/bin/snmpwalk"
snmpget="/usr/local/bin/snmpget"

calc_free()
# takes <size> <used> <allocation>
{
echo "$1 $2 - $3 * 1024 / 1024 / p" | dc
}

if result=`$snmpwalk -v2c -c $2 -Oq $1 hrStorageDescr | grep "$3$"`
	then
		index=`echo $result | sed 's/.*hrStorageDescr//' | sed 's/ .*//'`

		args=`$snmpget -v2c -c $2 -Oqv $1 hrStorageSize$index hrStorageUsed$index hrStorageAllocationUnits$index | while read oid j ; do printf " $oid" ; done`

		free=`calc_free$args`

		if [ "$free" -gt "$4" ]
			then
				echo "DISK OK: mount $3 free $free MB."
				exit 0
			else
				echo "DISK CRITICAL: mount $3 free $free MB."
				exit 2
		fi
	else
		echo "DISK CRITICAL: $3 doesn't exist or snmp isn't responding."
		exit 3
fi
[lbl@atom1 ~]$ df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
/dev/mirror/ides1a    496M    236M    221M    52%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/mirror/ides1e    496M    1.4M    455M     0%    /tmp
/dev/mirror/ides1f    103G    3.6G     91G     4%    /usr
/dev/mirror/ides1d    2.9G    1.8G    860M    68%    /var
/dev/raid3/sata0      1.8T    1.6T     61G    96%    /storage/download
/dev/raid3/sata1      1.8T    911G    749G    55%    /storage/pub
devfs                 1.0K    1.0K      0B   100%    /var/named/dev
[lbl@atom1 ~]$ df 
Filesystem         1K-blocks        Used     Avail Capacity  Mounted on
/dev/mirror/ides1a     507630     241164    225856    52%    /
devfs                       1          1         0   100%    /dev
/dev/mirror/ides1e     507630       1480    465540     0%    /tmp
/dev/mirror/ides1f  108230294    3783806  95788066     4%    /usr
/dev/mirror/ides1d    3017358    1895590    880380    68%    /var
/dev/raid3/sata0   1892045722 1676425126  64256940    96%    /storage/download
/dev/raid3/sata1   1892045722  955594880 785087186    55%    /storage/pub
devfs                       1          1         0   100%    /var/named/dev
[lbl@atom1 ~]$ ./check_snmp_disk localhost public / 100
DISK OK: mount / free 260 MB.
[lbl@atom1 ~]$ ./check_snmp_disk localhost public /tmp 100
DISK OK: mount /tmp free 494 MB.
[lbl@atom1 ~]$ ./check_snmp_disk localhost public /usr 100
DISK OK: mount /usr free 101998 MB.
[lbl@atom1 ~]$ ./check_snmp_disk localhost public /var 100
DISK OK: mount /var free 1095 MB.
[lbl@atom1 ~]$ ./check_snmp_disk localhost public /storage/download 100
DISK OK: mount /storage/download free 210566 MB.
[lbl@atom1 ~]$ ./check_snmp_disk localhost public /storage/pub 100
DISK OK: mount /storage/pub free 914502 MB.
[lbl@atom1 ~]$
```


----------



## lbl (Nov 6, 2009)

*Seems right.*

Hmm, it seems that I'm actually calculating it correctly ...

df is way off, though ...

Can anyone point me in the right direction for understanding how this works?


```
[lbl@atom1 ~]$ df
Filesystem         1K-blocks        Used     Avail Capacity  Mounted on
/dev/mirror/ides1a     507630     241164    225856    52%    /
devfs                       1          1         0   100%    /dev
/dev/mirror/ides1e     507630       1480    465540     0%    /tmp
/dev/mirror/ides1f  108230294    3783924  95787948     4%    /usr
/dev/mirror/ides1d    3017358    1895916    880054    68%    /var
/dev/raid3/sata0   1892045722 1676425126  64256940    96%    /storage/download
/dev/raid3/sata1   1892045722  955594880 785087186    55%    /storage/pub
devfs                       1          1         0   100%    /var/named/dev
[lbl@atom1 ~]$ echo "5 k 1892045722 1676425126 - 1024 / 1024 / p" | dc
205.63182
[lbl@atom1 ~]$ echo "5 k 64256940 1024 / 1024 / p" | dc
61.28019
[lbl@atom1 ~]$ echo "5 k 1892045722 955594880 - 1024 / 1024 / p" | dc
893.06911
[lbl@atom1 ~]$ echo "5 k 785087186 1024 / 1024 / p" | dc
748.71748
[lbl@atom1 ~]$ df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
/dev/mirror/ides1a    496M    236M    221M    52%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/mirror/ides1e    496M    1.4M    455M     0%    /tmp
/dev/mirror/ides1f    103G    3.6G     91G     4%    /usr
/dev/mirror/ides1d    2.9G    1.8G    859M    68%    /var
/dev/raid3/sata0      1.8T    1.6T     61G    96%    /storage/download
/dev/raid3/sata1      1.8T    911G    749G    55%    /storage/pub
devfs                 1.0K    1.0K      0B   100%    /var/named/dev
[lbl@atom1 ~]$
```


----------



## gordon@ (Nov 6, 2009)

The answer is simpler than you might think. Notice how the answers are 5% different? UFS2 reserves 5% of the disk for the root user. Storage as reported by df will go to 105% disk usage if written to by root. However, the disk usage reported by net-snmp will only go to 100%.

To verify this, check the size of the disks as reported by snmp and by df (don't worry about used space, just total space). You should see the discrepancy there.


----------



## mjb (Nov 6, 2009)

gordon@ said:

> UFS2 reserves 5% disk usage for the root user


You sure?

6.2 and 8.0 both have this:


```
$ grep MINFREE /usr/src/sys/ufs/ffs/fs.h
 * MINFREE gives the minimum acceptable percentage of filesystem
#define MINFREE         8
```

lbl: See the "-m" option in the man pages for newfs and tunefs


----------



## lbl (Nov 6, 2009)

*Nice*

Thanks ... just what I was looking for ...

If somebody wants the script, here is the new version:


```
[lbl@atom0 ~/scripts/nagios-plugins]$ cat check_snmp_disk 
#!/usr/local/bin/bash

# This script takes:
# <host> <community> <mountpoint> <megs>

snmpwalk=/usr/local/bin/snmpwalk
snmpget=/usr/local/bin/snmpget

calc_free()
# takes <size> <used> <allocation>
{
if result=`echo "($1*$3*.92/1024-$2*$3/1024)/1024" | bc`
	then
		echo "$result"
	else
		echo "DISK UNKNOWN: Cant calculate free space."
		exit 3
fi
}

fetch_details()
#takes <host> <community> <index>
{
if result=`$snmpget -v2c -c $2 -OqvU $1 hrStorageSize.$3 hrStorageUsed.$3 hrStorageAllocationUnits.$3 | while read oid ; do printf "$oid " ; done`
	then
		echo "$result"
	else
		echo "DISK UNKNOWN: Cant fetch details."
		exit 3
fi
}

if result=`$snmpwalk -v2c -c $2 -Oqs $1 hrStorageDescr | grep "$3$"`
	then
		index=`echo $result | sed 's/hrStorageDescr\.//' | sed 's/ .*//'`

		details=`fetch_details $1 $2 $index`

		free=`calc_free $details`

		if [ "$free" -gt "$4" ]
			then
				echo "DISK OK: mount $3 free $free MB."
				exit 0
			else
				echo "DISK CRITICAL: mount $3 free $free MB."
				exit 2
		fi
	else
		echo "DISK UNKNOWN: $3 doesn't exist or snmp isn't responding."
		exit 3
fi
[lbl@atom0 ~/scripts/nagios-plugins]$
```


----------



## peetaur (Oct 5, 2012)

To get disk usage, I found I only needed to uncomment one line in snmpd.config:

vim /etc/snmpd.config

```
begemotSnmpdModulePath."hostres" = "/usr/lib/snmp_hostres.so"
```

vim /etc/rc.conf

```
bsnmpd_enable="YES"
```

And then oddly, on some servers the SNMP description (used to look up a specific file system in nagios) is simply "/" and on others with the same OS version, it is something like "/, type zfs, dev ...".
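One way to make the lookup tolerant of both styles is to anchor the mountpoint at the start of the description and allow an optional ", type ..." tail, instead of grepping for `$3$`. A sketch, where the sample descriptions are hypothetical, modeled on the two styles reported above:

```shell
#!/bin/sh
# Match an hrStorageDescr value whose mountpoint is $1, whether the agent
# reports just "/" or "/, type zfs, dev ...". In the BRE, "\(,.*\)\{0,1\}"
# makes the comma-separated tail optional. Assumes $1 itself contains no
# regex metacharacters.
match_mount()
{
	grep "^$1\(,.*\)\{0,1\}$"
}

# Hypothetical description lines in the two styles seen in practice:
matches=$(printf '%s\n' "/" "/, type zfs, dev 123" "/usr" | match_mount "/")
echo "$matches"
```

This keeps both "/" and "/, type zfs, dev 123" while still rejecting "/usr".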


----------

