# add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf



## ccc (Oct 23, 2013)

Hi

I have FreeBSD 8.4 installed on ESX 4.1 and get these messages after a restart:

```
# tail -f /var/log/messages
Oct 23 21:38:45 bsd kernel: add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
Oct 23 21:38:45 bsd kernel: ZFS WARNING: Recommended minimum kmem_size is 512MB; expect unstable behavior.
Oct 23 21:38:45 bsd kernel: Consider tuning vm.kmem_size and vm.kmem_size_max
Oct 23 21:38:45 bsd kernel: in /boot/loader.conf.
Oct 23 21:38:45 bsd kernel: ZFS filesystem version: 5
Oct 23 21:38:45 bsd kernel: ZFS storage pool version: features support (5000)
```

Should I add

```
vfs.zfs.prefetch_disable=0
```
 to /boot/loader.conf?


----------



## usdmatt (Oct 23, 2013)

I expect the previous lines, which aren't shown, to be telling you that prefetch has been disabled because you have less than 4 GB of RAM. The first line you show is telling you that if you actually want prefetch enabled, you have to tell it not to disable it automatically by setting the loader variable mentioned.

My advice is to leave it as it is. Prefetch is disabled automatically on low-memory systems for a reason, and I wouldn't recommend manually turning it back on.

The more important warning to me is the one about low kmem, telling you to expect unstable behaviour. How much RAM does this VM have? I would be very wary of running ZFS with 1 GB or less of RAM.

I assume you're using the 64-bit (amd64) version of FreeBSD?
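
For what it's worth, the amounts involved can be checked at runtime; `hw.physmem`, `vm.kmem_size` and `vm.kmem_size_max` are standard FreeBSD sysctls:

```shell
# Physical RAM visible to the kernel, and the current kernel memory limits
sysctl hw.physmem vm.kmem_size vm.kmem_size_max

# Architecture: i386 (32-bit) or amd64 (64-bit)
uname -m
```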


----------



## ccc (Oct 24, 2013)

usdmatt said:

> I expect the previous lines, which aren't shown, to be telling you that prefetch has been disabled because you have less than 4 GB of RAM. The first line you show is telling you that if you actually want prefetch enabled, you have to tell it not to disable it automatically by setting the loader variable mentioned.
> 
> My advice is to leave it as it is. Prefetch is disabled automatically on low-memory systems for a reason, and I wouldn't recommend manually turning it back on.
> 
> ...



It's 32-bit, running on ESX 4.1, and it has 2 GB of RAM.
BTW, I don't get these messages on another FreeBSD 8.4 installation running as a VM guest.
Any clue?


----------



## Savagedlight (Oct 24, 2013)

You should not enable prefetch on such a system unless you are absolutely sure what you are doing, and why you want it. It seems like this VM has somehow enabled ZFS; please execute `# zpool list` and see if it reports anything. If there are no recognized zpools in that VM, you can completely disregard the message about ZFS.


----------



## kpa (Oct 24, 2013)

If you're going to use ZFS on i386 I recommend that you add these to loader.conf(5).


```
vm.kmem_size="512M"
vm.kmem_size_max="512M"
```

Without them you run the risk of heavy kernel memory fragmentation, as the i386 version of FreeBSD has a limited kernel address space and the defaults are on the low side.

Those settings are also needed in some other usage scenarios; I have to use them on my firewall to allow enough kernel memory for PF tables.


----------



## ccc (Oct 24, 2013)

Savagedlight said:

> You should not enable prefetch on such a system unless you are absolutely sure what you are doing, and why you want it. It seems like this VM has somehow enabled ZFS; please execute `# zpool list` and see if it reports anything. If there are no recognized zpools in that VM, you can completely disregard the message about ZFS.




```
# zpool list
no pools available
```


----------



## ccc (Oct 24, 2013)

kpa said:

> If you're going to use ZFS on i386 I recommend that you add these to loader.conf(5).
> 
> 
> ```
> ...



With these entries in /boot/loader.conf, my system won't boot.


----------



## kpa (Oct 24, 2013)

ccc said:

> Using these entries in /boot/loader.conf my system won't boot.




That's odd. I have two different systems using these settings, both on real hardware. It must be a virtualization-level problem with ESX.


----------



## ccc (Oct 24, 2013)

Thanks, this problem is now solved, using these entries in /boot/loader.conf:

```
# cat /boot/loader.conf
#sound_load="YES"
#snd_ich_load="YES"
linux_load="YES"

vm.pmap.pg_ps_enabled="1"
vm.pmap.pde.mappings="68"
vm.pmap.shpgperproc="2000"
vm.pmap.pv_entry_max="3000000"

vfs.zfs.prefetch_disable="0"
vm.kmem_size="512M"
vm.kmem_size_max="512M"
```


----------



## Savagedlight (Oct 24, 2013)

I still have no idea why you would tune ZFS when you're not using ZFS.


----------



## ccc (Oct 24, 2013)

Savagedlight said:
			
		

> I still have no idea why you would tune ZFS when you're not using ZFS.



I cannot answer that; perhaps it's a bug in the combination of a 32-bit VM installed on a 64-bit ESX host.

```
# tail -f /var/log/messages
Oct 24 21:09:01 bsd kernel: GEOM: da0s4: geometry does not match label (16h,63s != 255h,63s).
Oct 24 21:09:01 bsd kernel: Trying to mount root from ufs:/dev/da0s1
Oct 24 21:09:01 bsd kernel: GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s).
Oct 24 21:09:01 bsd kernel: GEOM: da0s3: geometry does not match label (16h,63s != 255h,63s).
Oct 24 21:09:01 bsd kernel: GEOM: da0s4: geometry does not match label (16h,63s != 255h,63s).
Oct 24 21:09:20 bsd squid[805]: Squid Parent: child process 809 started
Oct 24 21:09:48 bsd dbus[1025]: [system] Activating service name='org.freedesktop.ConsoleKit' (using servicehelper)
Oct 24 21:09:48 bsd dbus[1025]: [system] Activating service name='org.freedesktop.PolicyKit1' (using servicehelper)
Oct 24 21:09:48 bsd dbus[1025]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 24 21:09:48 bsd dbus[1025]: [system] Successfully activated service 'org.freedesktop.ConsoleKit'
Oct 24 21:11:24 bsd kernel: ZFS filesystem version: 5
Oct 24 21:11:24 bsd kernel: ZFS storage pool version: features support (5000)

# kldstat
Id Refs Address    Size     Name
 1   21 0xc0400000 d5c65c   kernel
 2    2 0xc115d000 31c40    linux.ko
 3    1 0xcf9e0000 8000     linprocfs.ko
 4    1 0xd00de000 17e000   zfs.ko
 5    1 0xd025e000 3000     opensolaris.ko
```


----------



## kpa (Oct 24, 2013)

You might just delete zfs.ko and opensolaris.ko from /boot/kernel if you're not going to use ZFS. It's really odd that ZFS gets initialised like that automatically. Are you sure you don't have a zfs_enable setting in /etc/rc.conf?
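
A hedged sketch of how you might check and do this (paths are the standard FreeBSD locations; moving the modules aside instead of deleting them makes the change reversible):

```shell
# See whether anything enables or loads ZFS at boot
grep -i zfs /etc/rc.conf /boot/loader.conf

# Move the modules out of the way rather than deleting them outright,
# so they can be restored later if needed
mkdir -p /boot/kernel.disabled
mv /boot/kernel/zfs.ko /boot/kernel/opensolaris.ko /boot/kernel.disabled/
```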


----------



## ccc (Oct 24, 2013)

Thanks again, I have deleted zfs.ko and opensolaris.ko from /boot/kernel and removed the ZFS entries from /boot/loader.conf as well:

```
# kldstat
Id Refs Address    Size     Name
 1   13 0xc0400000 d5c65c   kernel
 2    2 0xc115d000 31c40    linux.ko
 3    1 0xcf9e0000 8000     linprocfs.ko
```
I don't have zfs_enable in /etc/rc.conf. It seems to work well now. BTW, I still cannot understand why and how ZFS was started. On the physical machine, before the migration to ESX, ZFS wasn't started.


----------

