# Experiment: porting a Kubernetes microservice setup to BSD



## steveoc64 (Jun 13, 2019)

Firstly, apologies if this is the wrong forum - couldn't decide which category it fits in, so I thought I'd drop it in general.

Not looking for hard answers here, just general discussion - and radical ideas out of left field are welcome.


Situation:
We have an application, currently running under Kubernetes on AWS that ... consumes a lot of online resources to run.
Each microservice is a single statically linked binary, so the "containers" are simple affairs.  (thanks Golang!)
We want our developers to have a local setup that closely matches what's running in production.
Running the whole stack on a developer's laptop doesn't fit at all; that's definitely a no-go.
A maxed-out new MacBook Pro doesn't have the capacity to run the whole stack, let alone an IDE on top of that.
Running the whole stack on a secondary machine for each developer (using Linux + one of the micro Kubernetes stacks) is an option.
A secondary machine using something sensible, like a NUC, doesn't fit either though. Experiments show that something at least Threadripper-class is required to handle the bare minimum load.

Analysis:
It appears that the overhead of running Kubernetes, message buses, dynamic IP address allocation, iptables on Linux, healthchecks, service discovery, CoreDNS, etc. kills the setup before any data is even passed through the system.  Hence the need for a high-end desktop just to manage the orchestration overhead.  This is not nice.

Mission:
Get a working solution for each developer, so they can use their laptop for regular coding, and have a non-ridiculous extra machine sitting next to their Macs to run a local stack of microservices.

Solution:
Going to try an experiment with a pure BSD approach based on each microservice having a static IP address, and removing all the orchestration overhead.  Each microservice will then have an in-memory map of where to find other services.
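The in-memory map idea above can be sketched in Go in a few lines - a static registry replacing dynamic service discovery. The service names and addresses here are made up for illustration, not from the actual stack:

```go
package main

import "fmt"

// services is a static, in-memory registry mapping service names to
// fixed addresses - replacing dynamic discovery with static IPs.
// Names and addresses below are placeholders for illustration.
var services = map[string]string{
	"auth":    "10.0.0.11:8080",
	"billing": "10.0.0.12:8080",
	"orders":  "10.0.0.13:8080",
}

// Lookup returns the address of a named service, or an error if unknown.
func Lookup(name string) (string, error) {
	addr, ok := services[name]
	if !ok {
		return "", fmt.Errorf("unknown service %q", name)
	}
	return addr, nil
}

func main() {
	addr, err := Lookup("billing")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // prints 10.0.0.12:8080
}
```

With each jail pinned to a static IP, this map can be baked into the binary or loaded from a small config file at startup - no DNS, no discovery daemon.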

BSD + jails over ZFS is the first option.  Will probably have to write an iocell-like thing (or a wrapper over iocell) to manage the jails.  That should be straightforward.

Intrigued by the idea of using DragonFly BSD for this - readings suggest that they have some magic happening in juggling lots of threads, which might suit the problem we are seeing.  No idea what the story is on setting up a jail-like set of microservices with DragonFly BSD, though.  Can you do jails over HAMMER2 on DragonFly?

Anyway, any musings on this are most welcome.


----------



## zirias@ (Jun 14, 2019)

I have NO idea about Dragonfly and only a rough idea of what Kubernetes can do. But if you want to host each microservice in a container with its own IP address and don't need any of the failover/discovery/etc. features Kubernetes probably provides, you should easily achieve that goal with FreeBSD's jails. You might want to go for vnet jails (where every jail has its own network stack), so you can route between your jails and make sure no unintended communication happens by accident that might later not work in production.
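For the curious, a vnet jail along those lines can be sketched in jail.conf(5). This is an untested illustration - the jail name, path, addresses, and epair/bridge interface names are all placeholders:

```
# /etc/jail.conf - minimal vnet jail sketch (names/addresses are examples)
svc1 {
    host.hostname = "svc1.local";
    path = "/usr/local/jails/svc1";
    vnet;                          # give the jail its own network stack
    vnet.interface = "epair1b";    # jail end of a pre-created epair(4)
    # host side: create the epair and attach it to a bridge
    exec.prestart = "ifconfig epair1 create && ifconfig bridge0 addm epair1a && ifconfig epair1a up";
    # inside the jail: assign the static IP, then run the normal rc
    exec.start = "ifconfig epair1b 10.0.0.11/24 up && /bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    exec.poststop = "ifconfig epair1a destroy";
    persist;
}
```

Each service jail then gets its own fixed address on the bridge, which matches the static-IP design above.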


----------



## rigoletto@ (Jun 14, 2019)

This sort of complex situation is often better addressed on the mailing lists, given that that is where the vast majority of the developers hang out (the forums are geared more towards end-users).

I would suggest starting on the freebsd-jail one.

*[EDIT]*

One thing you may want to look into in the future is MirageOS and Albatross (unikernels, instead of Kubernetes). MirageOS works pretty well on bhyve (one of the MirageOS core members, Hannes, is a heavy FreeBSD user), and I _guess_ we will have nice integration with MirageOS/Albatross in the not-so-distant future.


----------



## steveoc64 (Jun 18, 2019)

Zirias said:


> I have NO idea about Dragonfly and only a rough idea of what Kubernetes can do. But if you want to host each microservice in a container with its own IP address and don't need any of the failover/discovery/etc. features Kubernetes probably provides, you should easily achieve that goal with FreeBSD's jails. You might want to go for vnet jails (where every jail has its own network stack), so you can route between your jails and make sure no unintended communication happens by accident that might later not work in production.



Thanks Zirias - yeah, I use FreeBSD jails under iocell for a lot of other projects ... so I'm pretty comfy with that.  I do like Kubernetes up to a point, but when you see it applied in some cases, then take a step back to work out what the actual requirements are ... and why the solution has so much overhead ... it does lead to some SMH moments.

rigoletto - MirageOS sounds cool, will have a look into that. Thanks !


----------



## steveoc64 (Jun 26, 2019)

In the meantime - in case anyone is curious ... running a complete MicroK8s setup inside a bhyve VM works superbly well.  Much better than expected.

It allows me in this case to run both the old MicroK8s setup in an Ubuntu VM and the new jails setup at the same time on the same machine.  Makes experimenting and porting things across that much easier.


----------



## admdwrf (Jun 26, 2019)

I just launched a POC for my startup yourITcity. All my production infra runs under FreeBSD.
We used Go to develop our backend.
I use HashiCorp Consul for service discovery and to monitor my services too. The services run under jails with vnet (VIMAGE), and service orchestration is handled by HashiCorp Nomad.
It all runs very well.
For my dev env, I have a VM on my laptop and workstation with Consul + Nomad + devtools.
I get the same QoS as with Kubernetes: if one of my services crashes, Nomad will restart it in another place.
Conclusion: you can effectively have a pure BSD approach.
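A minimal Nomad job spec for a setup like this might look as follows. This is a rough sketch, not config from the actual deployment - the job name, binary path, and port are invented, and driver availability on FreeBSD varies (the standard `exec` driver has limited FreeBSD isolation support; `raw_exec` or a community jail task driver may be needed instead):

```
# billing.nomad - hypothetical job spec sketch
job "billing" {
  datacenters = ["dc1"]

  group "svc" {
    count = 1

    network {
      port "http" {
        static = 8080
      }
    }

    # Registers the service in Consul with a health check,
    # so Nomad/Consul handle restart and discovery.
    service {
      name = "billing"
      port = "http"
      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "billing" {
      driver = "raw_exec"   # exec/jail drivers depending on platform support

      config {
        command = "/usr/local/bin/billing"
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
  }
}
```

If the task dies or its Consul health check fails, Nomad reschedules it - which is the restart behaviour described above.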


----------

