# Jail Disk Limit, ZFS Flat File?



## corra (Mar 30, 2009)

Hi Everyone,

Got a fairly powerful server with some resources to use up. Configuring jails for a few of my clients and the only thing that concerns me is disk space usage - I need to be able to limit the amount of disk space they use.

Problem is I can't use separate UFS partitions because jails will be created and removed all the time and I don't have that much disk space.

I've tried googling but can't seem to find any answers in relation to jails and disk quota. I understand ZFS would do what I need but can't/don't want to modify the current file system.

I was thinking I could use a ZFS pool mapped to a file, but I'm unsure how that would perform. Would it be a performance bottleneck for the jails?

Any other ideas on how I can set individual quotas per jail?
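For reference, the file-backed pool idea could be sketched like this. All paths, pool names, and sizes here are hypothetical placeholders, and file-backed vdevs are generally intended for testing, so a performance hit is expected:

```shell
# Back a ZFS pool with a plain file living on the existing UFS filesystem
truncate -s 10G /storage/zpool.img          # sparse 10 GB backing file (example path)
zpool create jailpool /storage/zpool.img
# One dataset per jail, each capped independently with a quota
zfs create -o quota=2G jailpool/jail1
zfs create -o quota=2G jailpool/jail2
```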

Thanks in advance for any help you can provide!

Cheers,

Jay


----------



## braveduck (Apr 1, 2009)

Well, instead of creating separate ZFS volumes you could go with a solution
like this:

Create vnode-backed md(4) memory disks and use them as the jail filesystems. E.g.:

1) Create the backing image, attach it, and lay down a filesystem:

```
dd if=/dev/random of=/usr/jails/images/jail1.img bs=1m count=2048
mdconfig -f /usr/jails/images/jail1.img
newfs /dev/md0
```

2) Put something like this in your rc.conf:

```
jail_enable="YES"
jail_list="jail1"

jail_jail1_rootdir="/usr/jails/mp/jail1"
jail_jail1_fstab="/usr/jails/fstabs/jail1.fstab"
jail_jail1_mount_enable="YES"
...
```

3) Create /usr/jails/fstabs/jail1.fstab:

```
/dev/md0 /usr/jails/mp/jail1  mfs  rw,-PF/usr/jails/images/jail1.img
```


That's all. Of course, the jail based on a file image will not perform as fast as the jail based on a separate partition/disk. But in most cases it will suffice.
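Since jails will be created and removed all the time, the image-provisioning steps above could also be scripted. A rough sketch, with names, sizes, and paths as examples only:

```shell
#!/bin/sh
# Sketch: provision a 2 GB vnode-backed image per jail (names/paths are examples)
for j in jail1 jail2; do
    dd if=/dev/random of="/usr/jails/images/$j.img" bs=1m count=2048
    unit=$(mdconfig -f "/usr/jails/images/$j.img")   # prints the md unit, e.g. md0
    newfs "/dev/$unit"
    mdconfig -d -u "$unit"   # detach; the per-jail fstab (with -P, skipping newfs) re-attaches it
done
```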


----------



## corra (Apr 2, 2009)

Thanks braveduck. I've given it a go and it works. It's a little too slow unfortunately; I also tried ZFS on top of a memory disk and that was even slower. I guess the only real solution would be to resize the UFS2 partition and create a ZFS pool on the freed slice, but I can't really do that on a production server. I'll just have to write a script that automatically shuts down jails when they go over a certain disk space usage.
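A minimal sketch of such a watchdog, assuming one directory per jail under a common root; the paths, the 2 GB limit, and the jail names are placeholders, and the actual stop action is left as a comment:

```shell
#!/bin/sh
# Watchdog sketch: report (and on FreeBSD, stop) any jail whose root
# directory exceeds a size limit. Paths and limits are examples only.

check_jails() {
    jail_root=$1    # directory holding one subdirectory per jail
    limit_kb=$2     # per-jail limit in 1K blocks

    for dir in "$jail_root"/*; do
        [ -d "$dir" ] || continue
        used=$(du -sk "$dir" | awk '{print $1}')
        name=$(basename "$dir")
        if [ "$used" -gt "$limit_kb" ]; then
            echo "jail $name over quota: ${used}K used, limit ${limit_kb}K"
            # On FreeBSD this would be something like:
            # /etc/rc.d/jail stop "$name"
        fi
    done
}

# Example invocation (path and 2 GB limit are assumptions, run from cron):
check_jails /usr/jails/mp 2097152
```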


----------



## SirDice (Apr 2, 2009)

Not sure if this would work, but you could run each jail under its own user account, let's say jail1, jail2, etc. Then you could set disk quotas for those users.

It's rather theoretical though, I've never tried it.

Edit: scratch that... I seem to have misinterpreted the -u and -U switches of the jail command.


----------



## braveduck (Apr 2, 2009)

> I also tried zfs from a memory fs too and it was even slower



Didn't quite catch that: do you mean the same thing as creating separate zvols and mounting them? I've just done a little research, and it seems that zvols perform *way* faster than vnode-backed md disks:


```
# dd if=/dev/zero of=image.test bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 36.604253 secs (29333800 bytes/sec)
# mdconfig -f image.test 
md0
# diskinfo -t /dev/md0 
/dev/md0
	512         	# sectorsize
	1073741824  	# mediasize in bytes (1.0G)
	2097152     	# mediasize in sectors

Seek times:
	Full stroke:	  250 iter in   3.991016 sec =   15.964 msec

# zfs create -V 1G kamaz/test
# newfs /dev/zvol/kamaz/test 
/dev/zvol/kamaz/test: 1024.0MB (2097152 sectors) block size 16384, fragment size 2048
	using 6 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
super-block backups (for fsck -b #) at:
 160, 376416, 752672, 1128928, 1505184, 1881440
# diskinfo -t /dev/zvol/kamaz/test 
/dev/zvol/kamaz/test
	512         	# sectorsize
	1073741824  	# mediasize in bytes (1.0G)
	2097152     	# mediasize in sectors

Seek times:
	Full stroke:	  250 iter in   0.015503 sec =    0.062 msec
```
I think I'm going to convert some of my vnode-based jails into zvol-based jails ))
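For anyone doing that conversion, a rough outline might look like this. The pool name, zvol size, and paths are hypothetical, and this is untested:

```shell
# Sketch: move a vnode-backed jail onto a zvol (names/sizes are examples)
zfs create -V 4G kamaz/jail1                 # zvol sized to the jail's intended quota
newfs /dev/zvol/kamaz/jail1
mount /dev/zvol/kamaz/jail1 /mnt
# copy the jail's contents across, preserving permissions
tar -C /usr/jails/mp/jail1 -cf - . | tar -C /mnt -xpf -
umount /mnt
# then point the jail's fstab entry at /dev/zvol/kamaz/jail1 instead of the md device
```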


----------



## corra (Apr 3, 2009)

Yep, ultimately ZFS volumes are the way to go; unfortunately my server is all UFS2 and I can't really convert it. Maybe next time.

I think for now shutting down a jail when it uses more than a certain amount is good enough (just to stop runaway logs etc. from bringing my server down).


----------

