# Seeking best practices for VPS jails to replace physical hardware



## wayne47 (May 8, 2019)

I *thought* that converting a bunch of services from physical hardware to VPS would be fairly straightforward. I was *sure* that many people were doing this with FreeBSD. But I keep running into so many problems that I'm starting to wonder if it's possible.
The goal:

1. Spin up some small VPS servers, ideally doing binary installs and not installing /usr/src, for speed of upgrading and to minimize storage space.
   - Solved problem. Lots of vendors support VPS, and FreeBSD supports binary installs.
   - As these are small servers, use UFS to minimize the memory footprint and support backups with dump.

2. Create nullfs jails on them, using my own IP addresses, again doing binary installs.
   - The initial plan was to use ezjail, but that is not an option (see #4).
   - Fell back to using bsdinstall jail, but freebsd-update -b is unreliable at upgrading jails.
   - iocage is not an option, as it requires ZFS.
   - No solution at present.

3. Advertise those IP addresses via BGP.
   - Solved problem. Install bird, do a little configuration, and the BGP routes get advertised.
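For reference, the "little configuration" for bird can be as small as this sketch (the router ID, ASNs, neighbor address, and announced prefix below are all placeholders, not values from this setup):

```
# /usr/local/etc/bird.conf -- minimal sketch for originating one prefix via BGP
# (router ID, ASNs, neighbor, and prefix are placeholders)
router id 192.0.2.1;

protocol static {
    route 203.0.113.0/24 reject;   # the prefix to originate
}

protocol bgp upstream {
    local as 65001;
    neighbor 192.0.2.254 as 65000;
    export where net = 203.0.113.0/24;   # only announce our own prefix
}
```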

4. Be able to binary-upgrade said jails.
   - Major hurdle. For years we've used ezjail to manage jails, but it's not really supported anymore and it now fails utterly at binary updates.
   - Tried to roll my own, but freebsd-update -b seems to miss files when updating jails.
   - Extensive time with Google did not turn up a fully working solution for binary-only upgrades of jails.
   - No solution at present.

5. Run VPNs between the VPS servers and the legacy hardware.
   - Brought up OpenVPN, which works fine at the link level.
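A point-to-point tun link of the kind described can be sketched with OpenVPN's static-key mode (addresses, port, and key path below are placeholders):

```
# server side: /usr/local/etc/openvpn/openvpn.conf -- p2p tun sketch
# (tunnel addresses, port, and key path are placeholders)
dev tun
proto udp
port 1194
ifconfig 10.8.0.1 10.8.0.2      # local / remote tunnel addresses
secret /usr/local/etc/openvpn/static.key
keepalive 10 60

# the client side mirrors it, pointing at the server:
#   remote vps.example.net 1194
#   ifconfig 10.8.0.2 10.8.0.1
#   secret /usr/local/etc/openvpn/static.key
```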

Run OSPF over the VPNs to deal with routing.
Running into issues here. I'm testing on 11.1, with intent to upgrade to 11.2 once I solve issues #2 and #4 and have seen some posts suggesting that there may be some multicast issues with 11.1.
I'm seeing OSPF Hello packets show up from the remote site to the VPS host but never see a response back from the VPS host as though the packets are being consumed by bird without actually doing anything with them.
No solution at present.

7. Back everything up regularly, ideally using dump.
   - Solved problem, as long as we are using UFS.
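The UFS backup in question is the classic dump(8) invocation; a sketch (the target path is a placeholder):

```sh
# level-0 dump of the root filesystem: -L takes a snapshot first,
# -u updates /etc/dumpdates, -a auto-sizes, -f names the output file
dump -0Lauf /backup/root.dump /

# restore interactively from it
restore -if /backup/root.dump
```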

As indicated, the big issues are dealing with binary installs/upgrades in jails and getting OSPF to work reliably over OpenVPN links. Has someone done something similar to this and written it up someplace?


----------



## sol289 (May 8, 2019)

How did you set up OpenVPN: tun or tap? And what are the OSPF network types for the VPN links?

Updating jails is a bit of a pain now; maybe pkgbase will help us in the future. What if you just changed the jail template, restarting the same thin jail on a new template?


----------



## wayne47 (May 9, 2019)

OpenVPN with tun.

It looks like the real issue is that freebsd-update simply cannot handle updating jails at all. This seems to be what is causing problems with ezjail as well. Not sure how to proceed at this point.


----------



## sol289 (May 9, 2019)

It's hard to tell what's wrong with OSPF without a network topology map and the OSPF configuration. I would change the OpenVPN network from tun to tap; it could be simpler in your case.


----------



## reddy (May 9, 2019)

Why not create the jails manually in the manner suggested in the Handbook? With the new syntax of jail.conf I found the process quite straightforward. I had no experience with ezjail, but reading the docs it seemed more difficult to set up than going the native way. Upgrading jails is also relatively simple if you use the structure proposed in the Handbook.

I feel these third-party jail management tools may have been useful back in the day, but this is no longer necessarily the case today.

Also, another remark: many times VPS providers have their own hardware redundancies, making rolling out ZFS and such unnecessary within your VPS. Make sure to read/ask about their infrastructure.
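For anyone following along, the Handbook-style manual setup boils down to a small /etc/jail.conf; a minimal sketch (jail name, hostname, path, interface, and address are all placeholders):

```
# /etc/jail.conf -- minimal sketch (name, hostname, path, and address are placeholders)
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;

www {
    host.hostname = "www.example.org";
    path = "/usr/jail/www";
    interface = "em0";
    ip4.addr = 192.0.2.10;
}
```

With `jail_enable="YES"` in rc.conf, the jail can then be managed with `service jail start www` / `service jail stop www`.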


----------



## forquare (May 9, 2019)

How constrained are you on memory? My VPS has 4 GB RAM and I use ZFS with little impact; this also allows me to utilise sysutils/iocage. I've taken my jails (plus host) from 11.0, through 11.1 and 11.2, to 12.0-RELEASE-p3, using iocage to do binary updates.

Also, I'd recommend checking out *FreeBSD Mastery: Jails* which has recently been released.  Lucas doesn't seem to have the same issues with freebsd-update(8), but that could be because he's using a later version of FreeBSD?


----------



## wayne47 (May 13, 2019)

reddy: 

I *have* gone to creating jails manually.
The problem is that freebsd-update -b fails utterly at upgrading (at least from 11.1 to 11.2).
I continue to investigate this, and it appears that this failure is also what is causing ezjail to fail on upgrades.
I agree that ZFS is not really required in this case, which is why I've been working with UFS. I presumed that lots of people were spinning up VPS servers, tossing jails on them and upgrading them to keep them up to date. But I'm finding this not to be the case.
forquare:  

The smaller systems have 512 MB of RAM and the larger ones 1 GB, which is going to cause issues with ZFS. It's unfortunate that iocage insists on ZFS.
I know Michael and had some input on his jails book; one thing I begged him for but he did not include was a working example of doing a UFS manual binary jail install and upgrade, including ports.
That's why I'm asking here: I'm starting to come to the conclusion that jails installed with a binary install of FreeBSD *can not be upgraded at all*. If this is true, it is a really bad thing. I am very much hoping that I am incorrect and just missing something.
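For concreteness, the procedure that keeps failing is the documented `freebsd-update -b` form; a sketch of what is being attempted (the jail root path is a placeholder):

```sh
# attempted binary upgrade of a jail's userland from the host
# (the jail root path is a placeholder)
freebsd-update -b /usr/jail/www --currently-running 11.1-RELEASE \
    -r 11.2-RELEASE upgrade
freebsd-update -b /usr/jail/www install
# rerun "install" until it reports no remaining steps, then restart the jail
```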


----------



## rf10 (May 13, 2019)

The consensus on the web is that FreeBSD jails are not upgradable (nor were they designed to be upgradable). So the solution is to throw away the old one and recreate a new one with a new version. I manage a handful of applications running in jails this way. Obviously, if you need to manage hundreds or more, it won't be sustainable without automation. So, if you have "pets", jails created manually work great. If you need to manage "cattle", consider other automation tools, such as Kubernetes.

As a side note, I would reconsider ZFS. Yes, it does use more memory, but it makes jail management so much easier. For starters, you save a lot of storage space: if you have a "template" jail and you clone its snapshot into a runtime jail, the runtime jail only takes a small amount of additional space. For example, I have one template jail to run Tomcat, with a base FreeBSD install, OpenJDK, and Tomcat. All JVM-based web apps running in jails (15+) use the same set of files, and the only changes in their ZFS datasets are the files modified by a specific jail (logs, war files, exploded application archives, etc.). Secondly, it makes moving templates (and even runtime jails) from one box to another a breeze with `zfs send | zfs receive`.
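The template/clone workflow described above can be sketched like this (pool and dataset names are placeholders):

```sh
# snapshot the template jail, then clone it into a runtime jail;
# the clone only stores blocks that diverge from the template
zfs snapshot zroot/jails/template-tomcat@v1
zfs clone zroot/jails/template-tomcat@v1 zroot/jails/webapp1

# moving a template (or runtime jail) to another box:
zfs send zroot/jails/template-tomcat@v1 | \
    ssh otherhost zfs receive zroot/jails/template-tomcat
```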


----------



## ralphbsz (May 14, 2019)

Why are you using jails at all?  The purpose of jails is to isolate subsystems from each other.  For example, run the foo-server in jail F, the bar-server in jail B, and so on; then any security problem in jail F can not affect jail B.  But if you make your VPSes small enough, they are each running one jail only.  At that point, just use the whole OS as a security boundary.

Or to put the question differently: how many layers of virtualization do you need?  I'm already upset that today we run full-blown operating systems within VMs, which then run on ... drum roll ... a full-blown operating system.  Jails add an extra layer.  Something seems redundant here.


----------



## sko (May 14, 2019)

I'm running all my VPS (DigitalOcean and Vultr) with ZFS - even the small ones with only 1 GB RAM. I even had some tiny 512 MB VPS on Vultr that were using ZFS. As long as you don't abuse those tiny VPS with way too heavy loads, this will work just fine. Yes, ZFS will reduce the ARC to a bare minimum, but as those VPS usually all have very fast flash storage, it doesn't matter, at least for the workloads you'd put on a small VPS.
So I never had any issues with ZFS even on smaller VPS, and it enables me to use iocell for jail management (besides lots of other benefits). iocell still uses basejails by default, so a simple re-cloning is sufficient for base system upgrades, which means it also works quite well with "automated cattle herding". IMHO another benefit of iocell is that it doesn't drag in Python and some other cruft as iocage does nowadays, and thus keeps the host very minimal and therefore easier to set up and automate.
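On a small-memory VPS you can also cap the ARC yourself rather than letting ZFS shrink it under pressure; a minimal /boot/loader.conf sketch (the 256M value is an example to tune, not a recommendation from this thread):

```
# /boot/loader.conf -- cap the ZFS ARC on a small-memory VPS (example value)
vfs.zfs.arc_max="256M"
```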

Regarding BGP, I'd suggest you have a look at OpenBGPd. Although FreeBSD ships a rather old version, its footprint is _MUCH_ smaller than bird's or quagga's, and configuration/management is much more sane and "UNIX-like" IMHO. Also, the filter rules are straightforward if you are remotely familiar with PF syntax.
We're using OpenBGPd throughout our routing infrastructure (small multi-homed, multi-site AS), and the performance and resource requirements, especially in newer versions on OpenBSD, are just amazing.
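To illustrate the configuration style, an OpenBGPd equivalent of announcing a single prefix might look like this sketch (ASNs, router ID, neighbor, and prefix are placeholders):

```
# /usr/local/etc/bgpd.conf -- minimal OpenBGPd sketch for one announced prefix
# (ASNs, router ID, neighbor address, and prefix are placeholders)
AS 65001
router-id 192.0.2.1

network 203.0.113.0/24

neighbor 192.0.2.254 {
    remote-as 65000
    descr "upstream"
}
```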

Regarding your problems with OSPF over VPN: IIRC OpenVPN by default opens an L3 tunnel (routed) and thus breaks the broadcast domain OSPF needs. Using BGP throughout the whole routing infrastructure might solve that problem, but for static VPNs (i.e. not a "road warrior" setup for laptops) I'd use IPsec + L2TP (or EoIP if you use MikroTik routers) over OpenVPN any day - especially because you won't run into such weird edge cases with some protocols/services, and it has potentially less overhead. With transparent L2 tunneling you can basically treat the VPN just like a switch, which simplifies a lot of things.
Another option would be tinc, which can also provide L2 tunnels and full-mesh VPN. I had been using it on our gateways before we switched to a dedicated routing network with dedicated VPN routers, and it was rock-solid over the years. With some tweaking, the L2 tunneling even works reasonably well for VoIP over VDSL lines, which are usually a pain because of their poor latency and jitter. Tinc is also quite good for road-warrior setups, as it can penetrate most firewalls and traverse NAT - so as long as two tinc hosts can "somehow" reach each other, tinc will get a VPN tunnel running. Configuration is also quite easy and straightforward, and can easily be scripted/automated.


----------



## tommiie (May 14, 2019)

forquare said:


> I'd recommend checking out *FreeBSD Mastery: Jails* which has recently been released.


I bought the book, tried the examples in the networking chapter, and they all failed on me, so I'm not a fan. But the other chapters, e.g. regarding setting up, cloning, and upgrading jails, might be splendid. I don't know.



rf10 said:


> So the solution is to throw away the old one and recreate a new one with a new version.





rf10 said:


> consider other automation tools, such as Kubernetes.


I've done some quick googling for Ansible modules to manage jails, but no luck so far. This would indeed be a good option: just automate the removal and re-creation of jails with something like Vagrant-style scripts plus Ansible, or something like that. I'm no expert, but this is still on my to-do list to investigate.



ralphbsz said:


> Why are you using jails at all?


Perhaps it's a lot cheaper to buy one rather big VPS and run jails on it than to buy dozens of small VPSs. It probably also has something to do with the number of public IP addresses being "wasted" on all those small VPSs. But I agree, stacking layers of virtualization on top of each other is generally not a very good idea.


----------



## forquare (May 14, 2019)

tommiie said:


> I bought the book, tried the examples in the networking chapter and they all failed on me so I'm not a fan. But the other chapters, e.g. regarding setting up, cloning, upgrading jails might be splendid. I don't know.



I'm partway through reading, I quite enjoy reading books like that first, then going back and trying things out.  I don't think I've got to the networking chapter yet (I'm a slow reader, and time hasn't been on my side of late).
I recommended it because there is so much information out there about jails, and so much of it is out of date that I hoped the book would consolidate current best practices.



tommiie said:


> I've done some quick googling for Ansible modules to manage jails but no luck so far. This would indeed be a good option: just automate the removal, recreating of jails with something like Vagrant-like scripts and Ansible or something like that. I'm no expert but this is still on my to-do list to investigate.
> 
> Perhaps it's a lot cheaper to buy one rather big VPS and run jails on it than buying dozens of small VPS's.



My VPS, mentioned above, runs nine jails for pretty much this reason.  I pay $20/mo and share resources, rather than spending 9*$5/mo and wasting resources.

Regarding the lack of an Ansible module, it's also on my to-do list.  Occasionally I see something interesting on GitHub, but it either never goes anywhere or disappears.


----------



## tommiie (May 14, 2019)

I am glad you enjoy the book. I bought it for the same reason: I am very new to FreeBSD and wanted to dive into jails but could not find a lot of good resources. I was hoping Lucas would present the current best practices, and I think he succeeds in that. Partly. I believe his chapter on networking to be subpar. But that might just be me. And seeing wayne47's statement below, it feels like the book is lacking in other areas as well.



wayne47 said:


> One thing I begged him for but he did not include was a working example of doing a UFS manual binary jail install and upgrade, including ports.



Nonetheless it is a good introduction to get started with jails. But more research is needed for sure.



forquare said:


> Regarding the lack of Ansible module, it's also on my to-do list. Occasionally I see something interesting on Github, but it either never goes anywhere or disappears.



I'm now working hard on understanding networking with jails: if_bridge(4) and netgraph(4), both with and without PF integration.
I take notes and put them on GitLab and the FreeBSD wiki. Once I have a better understanding of these topics, I'll divert my attention to Ansible and other possible tools to automate jail configuration and maintenance.

The problem seems to be, in part, that there are so many options: manual configuration or iocage/ezjail/..., and even manual configuration can be done in two ways (/etc/jail.conf or /etc/rc.conf). Next, there are three options for networking, and three more to integrate firewalling. Everything also depends on the usage of ZFS vs. UFS, and more. This surely complicates tools like iocage, and automation in general.


----------



## wayne47 (May 14, 2019)

Before responding, I want to say thank you to all who have taken the time to address these issues.

rf10:

I've been using jails for over 15 years now and, in the past, they were upgradable just fine. Having to re-create and redeploy machines each time would be a maintenance nightmare. We allocate a block of IP addresses to each host and those blocks are in multiple firewall rules, making moving one machine to another host "difficult".
With nullfs jail mounts each jail already uses very little storage. Having used ZFS extensively, we've determined that ZFS has so many disadvantages (I know, this view is highly politically incorrect but after quite a few years and machines of experience, I maintain that it is correct) that it's only applicable in very special situations. 
ralphbsz:

In part because we've been doing this for over 25 years (yes, really). Over decades, services that would run on physical hardware got moved onto jails on our physical hosts because the resource demands did not increase that much but the available hardware kept getting more powerful.  So there's decades of firewall rules, IP address ranges and the like that are in place.
Security. The host can be hardened to the point that it's likely uncrackable. If a cracker gets into one jail, they are unlikely to be able to do much damage.
Economic sense. When a small VPS is enough to run 10 jails, it's silly to deploy VPS*10.
Maintainability - it *used* to be the case that upgrades went very smoothly and took far less time than doing upgrades on each machine.
sko:

Thank you for that comment - it's one tick in favor of ZFS. I'm not sure it's enough to overcome all the other disadvantages but I will certainly consider it. And I was not aware of iocell, only iocage, so I will look into that as well.
OpenVPN supports point-to-point links, which *should* work with OSPF. I'm pretty sure people have gotten it working, usually with Linux; I suspect I'm missing some small key point, or there is an issue with FreeBSD. At least I can see the Hello packets show up on the remote host; it just seems to be ignoring them. I'll admit that routing was a secondary concern and did not get as many resources as the jail upgrade issue, because it's expected only to be used during the transition and there are always (yuck) static routes.
It is still highly annoying that freebsd-update refuses to work properly (on either the base system *or* jails).
I'll also take a look at OpenBGPd, thank you. At the moment I'm simply advertising a couple of routes and bird is only burning 27MB of RAM so this is not a high priority.
We've got a lot of IPSec tunnels deployed. Love the performance, especially when we can do hardware acceleration. Hate the MTU issues. Compared IPSec and OpenVPN for road warriors and OpenVPN wins hands down for punching through crappy hotel restrictions and ease of support. Also, we ran into all sorts of issues with IPSec when deploying VOIP phones - OpenVPN has none of those issues. 
Will look into tinc-vpn. It's always good to know of alternatives, thank you.


----------



## ralphbsz (May 15, 2019)

wayne47 said:


> Security. The host can be hardened to the point that it's likely uncrackable. If a cracker gets into one jail, they are unlikely to be able to do much damage.


The same argument applies to VMs ... but I completely understand your other arguments, so this is minor.



> Economic sense. When a small VPS is enough to run 10 jails, it's silly to deploy VPS*10.


Ah, I understand.  I've always worked in environments where I had lots of computers available.  If I have 10 racks with 42 computers each, it doesn't really matter whether I deploy 1000 or 10,000 VMs on that.  For me, the goal is to get my workload done efficiently: If I have to buy an 11th rack, because I'm using the existing computers inefficiently, that would be bad.  Your situation is quite different, because you need to pay per VPS, whether you utilize a lot of CPU cycles or only a few.  In your situation, packing lots of jails into each VPS makes perfect sense.  In my situation, the jail is one more layer of indirection, which causes complexity and inefficiency.

Thank you for responding!


----------



## sko (May 17, 2019)

wayne47 said:


> It is still highly annoying that freebsd-update refuses to work properly (on either the base system *or* jails)



I just updated three of our jail hosts and their jails to the latest patch level yesterday - with iocell and basejails it's as simple as running `iocell fetch <release>`, which will update an already existing release. Then stop each jail, run `iocell update <tag>` to re-clone the base, and start the jail and check that everything is OK. For patch releases I usually run this with a simple one-liner for all jails and grep for warnings/errors in the jails' logs; in 99% of all patch updates this works flawlessly for all ~50 jails we're running in our network.
Minor release upgrades are done selectively - for some jails it's easier/faster to just re-deploy them than to run an update (e.g. DNS slaves), but if you update them via iocell/re-cloning it is essentially the same procedure, plus running `pkg upgrade` for each jail afterwards...
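The patch-update one-liner described above amounts to something like this sketch (the release and the jail tags are examples, not the actual jail names):

```sh
# iocell patch-update loop (release and jail tags are examples)
iocell fetch 11.2-RELEASE          # refreshes the already-fetched release
for tag in www db dns; do
    iocell stop "$tag"
    iocell update "$tag"           # re-clone the base from the updated release
    iocell start "$tag"
done
# then grep the jails' logs for warnings/errors
```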


edit:
RE Ansible and jails: have a look at JoergFiedler's repos on GitHub, he has some very interesting FreeBSD/jail-specific Ansible roles and modules.











I've modified, re-written or patched some of the (mostly broken) existing Ansible modules myself to fit my needs, and added a lot of scripts for specific tasks (e.g. our automated FreeBSD client deployment) or to work around issues with Ansible on FreeBSD. Cleaning up that rather hairy Ansible patchwork still has a special place on my to-do list.


----------



## benoitc (Nov 29, 2022)

wayne47 said:


> Advertise those IP addresses via BGP.
> 
> Solved problem. Install bird, do a little configuration, BGP routes get advertised


How did you do that? Can you share your configuration?


----------

