# Building ports & heating the house



## Alain De Vos (Jun 29, 2021)

The energy from building ports is currently used for heating my living room.
But CPUs operating at lower temperatures and voltages are the future.


----------



## mer (Jun 29, 2021)

Hence packages. Let someone else deal with the energy costs.


----------



## astyle (Jun 29, 2021)

Come on, it's summer out there... The only lower-voltage stuff is for laptops anyway, just to avoid frying everything around them, such as your actual lap and the internal components, which are really squished together.


----------



## sidetone (Jun 29, 2021)

I say that ports need to be slimmed. Then a package and a port will work identically. Use the BSD and Apache implementations, and leave out other implementations such as Doxygen, Avahi, ALSA and LibCanberra.

My theory is that after that, and after slimming Xorg, building all ports for a desktop should take about 6 hours. That's not counting toolchain utilities that are in base, which can just come in through binary updates.

I need to stop ranting.

I mix ports and packages, because I can figure out how to make that work in a fraction of the time it takes to recompile everything.
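For anyone curious, the usual shape of this mixed approach on FreeBSD is: install the bulk as prebuilt packages, build the few ports you want custom options for, and lock those so `pkg upgrade` doesn't replace them. A rough sketch, using misc/mc as a stand-in example of a port built with custom options:

```shell
# Install the bulk of the desktop as prebuilt packages:
pkg install xorg firefox

# Build the one port you want custom options for:
cd /usr/ports/misc/mc
make config          # pick or drop options interactively
make install clean

# Keep 'pkg upgrade' from replacing the custom build:
pkg lock -y mc
```

One caveat worth hedging: the ports tree and the package repository should track the same branch (quarterly vs. latest), or the custom-built port and the packaged dependencies can drift out of sync on shared libraries.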


----------



## Alain De Vos (Jun 29, 2021)

A lot depends on CPU speed. An AMD Ryzen compiles 4x faster than my Intel Core i7.
Waiting 2 hours or 8 hours makes a difference.
But ports with long build times are mostly badly designed.
Allow me to give an example.
On my slow Intel Core i7, the build time of OpenBSD's smtpd is less than one minute.


----------



## mer (Jun 29, 2021)

I'm not going to disagree with sidetone, but it comes down to "what defaults should a port be built with". Some people want everything, others want the bare minimum. If you please one group, you annoy the other.
The answer is "flavours", but then how much work is a port maintainer willing to do? Every port flavour may take a lot of time to create, test, and check in.
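For reference, flavours are the ports framework's mechanism for building one port several ways, and they are visible from both the ports side and the package side. A sketch, assuming a Python-flavoured port like devel/py-pip (the exact flavour names depend on the ports tree at hand):

```shell
# Ask a port which flavours it defines:
cd /usr/ports/devel/py-pip
make -V FLAVORS

# Build and install one specific flavour:
make FLAVOR=py38 install clean

# On the package side, each flavour is a separately named package:
pkg install py38-pip
```

The maintainer cost mer mentions is real: every flavour listed in `FLAVORS` is another variant that has to build and be tested on each tree update.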


----------



## sidetone (Jun 29, 2021)

I believe you can get almost everything in terms of functionality by using BSD implementations when they exist.

As for special-purpose things, the only one I see left out is JACK, and for that there can be a message saying: if you need this ability, simply install this package.


Alain De Vos said:


> Allow me to give an example.
> On my slow Intel Core i7, the build time of OpenBSD's smtpd is less than one minute.


There are lots of examples, but to sum it up: use a BSD or Apache implementation whenever it exists, instead of another implementation.

A lot of upstream programs are made with the assumption that there's only a GNU implementation.

The major exception, a GNU component that is useful as an alternative, is GnuTLS. Gmake is simply required, because not everything can be built with bmake.


----------



## mer (Jun 29, 2021)

A lot of it comes down to expectations and "time". Note: I'm coming from "I've been using FreeBSD as a desktop for a long time, updating and building from source".

Can I build X from source and configure it exactly how I want it?  Absolutely.

But do I want to spend the time and effort on every single port (and dependency) that I use?   Not anymore, so I migrated to using prebuilt packages.


----------



## Sevendogsbsd (Jun 29, 2021)

I used to have an HP Z800 with two 6-core Xeons (24 virtual cores) and 96 GB of RAM. It would build all the desktop packages, including compilers, OpenOffice, Firefox, Rust, Chromium, etc., in about 4 hours, but it heated up my office in the summer, so I had to move the machine to the other side of the office and run it headless.

I moved to packages because, frankly, the only option I ever really tweaked was the subshell support in Midnight Commander, and I have since switched to another file manager whose name escapes me at the moment. For me, packages are easier. To each his or her own!


----------



## Alain De Vos (Jun 29, 2021)

Currently I'm rebuilding more than 2000 ports while surfing the internet.
In fact I can't even tell that the packages are building.
There is always one core of my CPU sleeping.
Everything is smooth and fast.
I must say the developers of the FreeBSD scheduler must have done a great job.
Why do some people write programs in assembler? Not always because of technical need.
Tomorrow, while I write a document or surf, I'll keep compiling in the background without even noticing.


----------



## Sevendogsbsd (Jun 29, 2021)

My ports list was only about 700 or so, and I never used the machine as a desktop once I started using it as a build server. I just gave up on building ports; for me there was no point. I do understand the need, though.


----------



## Alain De Vos (Jun 29, 2021)

I hope one day I will be able to write patches. Why do people want to write patches? Because you can.
Just like why people climb mountains.


----------



## sidetone (Jun 29, 2021)

mer said:


> The answer is "flavours", but then how much work is a port maintainer willing to do?  Every port flavour may take a lot of time to create, test, checkin.


I see something like flavors, that works with both ports and packages.

Something like: which implementation do you want? For dependencies, choose in order of preference: BSD, Apache, or GNU. Pick the BSD or Apache implementation first, and only use the GNU implementation when the other implementations don't exist.

It would be much better to have just two default implementations to choose from, which would function about the same, and then have the GNU implementation for the few things that are left, without so many dependencies.

I can't explain why I believe it will work.

I'm thinking in terms of functionality, while others are thinking in terms of needing each dependency because it is interlaced.

Why would one make need so much more than another make, for one dependency? I understand why CMake pulls in Sphinx, which pulls in Python. But shouldn't Sphinx stand alone rather than be tied to CMake? Can I choose to have Sphinx or an alternate implementation, and never have Doxygen? Or just let one implementation be the default and use it for everything, no matter what upstream says is a must. Why do GNU dependencies pull in so many ports unrelated to a specific function, when a BSD implementation doesn't? And why would a dependency with nothing to do with a graphics card require the latest build of Rust or LLVM, when all it's needed for is a poorly written Linuxism that wasn't ported to standards, or that upstream simply insists is needed?
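Whatever one thinks of the defaults, the ports(7) targets and pkg(8) make it easy to see exactly what a port or package drags in before committing to a build. A sketch, with devel/cmake as the example:

```shell
# From a port's directory: what is needed to build it, and what to run it?
cd /usr/ports/devel/cmake
make build-depends-list
make run-depends-list

# For an already-installed package: its direct dependencies...
pkg info -d cmake
# ...and the reverse question: what else depends on it?
pkg info -r cmake
```

Running these before filing a "why is this dependency here?" complaint usually shows whether the pull-in comes from the port itself or from one of its options.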


----------



## Alain De Vos (Jun 29, 2021)

Sidetone, see it from the perspective of the maintainer. For us poor mortals, it can lead to cyclic dependencies, which are a pain.
I disabled Doxygen except for the APIs I really care about.


----------



## sidetone (Jun 29, 2021)

Alain De Vos said:


> Sidetone, see it from the perspective of the maintainer. For us poor mortals, it can lead to cyclic dependencies, which are a pain.
> I disabled Doxygen except for the APIs I really care about.


I believe it will be easier for maintainers. Fewer dependencies to maintain, fewer things to go wrong, less to secure, less to debug, and simpler makefiles. That was already included in my consideration. Past fixes that I have pointed out already did this.

Did anyone actually debug a lot of the bloat when I pointed out that it wasn't needed? It's easier to understand and fix fewer dependencies. I don't understand how anyone was going to debug over a day's worth of compiling when they didn't even know why it was there, when the real problem was: why do we have 10 sets of duplicate dependencies, and why did one enabled feature add one to three entire Linux distributions' worth of software on top of a FreeBSD distribution? I believe a lot of port dependencies can be trimmed by this method, so that less maintenance would be needed.

The flavor part would be more difficult, but it would be easier after fixing things than the way flavors are now. I'd rather just have the default be the BSD implementation, then Apache, and fewer flavors. Which BSD or Apache implementation gets to be the default is less important.


----------



## PMc (Jun 30, 2021)

Sevendogsbsd said:


> I used to have an HP z800 with 2 6 core Xeons for 24 virtual cores and 96GB ram. It would build all the desktop packages including compilers, open office, FF, rust, chromium, etc, in about 4 hours, but heated up my office in the summer so I had to move the machine to the other side of my office and run it headless.


I wonder how people get along with _not_ adjusting port options (or patching/fixing one thing or another).

I bought an i5-3570 back in 2013 (I wanted something decent), and building world+ports is almost the only thing I would use that power for. But now it takes 2-3 days. Now I have an E5-2660v3 (a second one could be added), but with that fat CPU sitting in the pole position it conflicts with the disk bays in the server and will need some metal-working to fit in. I'm thinking of putting it in the desktop instead, but then I need an extra graphics card (would a used Quadro FX 1800 still work? It has the connectors I need and should do 2x WQHD).

Maybe a better idea is to build the ports in the cloud (and pay only for the compute that is actually used).



Alain De Vos said:


> Why do people want to write patches ? Because you can.
> Just like why do people climb mountains.


No. Because I want things to work. Decently. In some way you are right: if there were no option to fix things, one would learn to live with them as they are. (But then there wouldn't be any fun in running a computer at all, and one could just as well use a Windows tablet and live as a dumb consumer.) But the main difference is: in the end there is a result that actually solves problems.
Examples:

For years I fought with the backup solution. Now I have one that:

- collects my documents, edits, configs etc. every 10 minutes
- creates long-term and offsite images
- moves a 3rd copy to the cloud automatically
- etc.

But that doesn't work reliably without patches.

I have a little webserver running on a fixed IP that is borrowed from the cloud machine. Normally one does not know whether such a construct is actually working (until one is abroad and notices it is not reachable from outside). Solution: it is always accessed from the outside; even from my LAN the connections go out to the internet and then come back in from there. That needs a bit of hacking in the ipfw configs, which is ugly. So I built a web frontend to properly craft the ipfw configs, and now I can reload them without even breaking the current (stateful) connections.

The port build itself: for as long as I can remember I fought with broken dependencies, smartass ports that would look at what is actually installed, and the like. Now ports and base are built together in a bhyve guest, options are adjusted beforehand, and the port will see exactly the dependencies that it has required, and nothing else.

The general idea is to fix the problems in a way that they will never bother me again. If everybody else agrees to live with crap, I still don't need to chime in.

BTW, I put these things online, in case anybody is interested. The world+ports building scheme is here. The ipfw config crafter will go online here (it's just a mockup for now). And I tried to create a podcast presenting "The Bug of the Week", but got bored of it.


----------



## Deleted member 30996 (Jun 30, 2021)

I have an old style electricity meter in my apartment (not a smart meter) and am allotted so many kilowatt hours per month before I am charged a pro-rated cost for use over that amount.

I never pay an electric bill in the winter, and the highest bills I've had in the summer were a few in the $40 range, back when I ran my Dell tower pfSense box.

I used to compile ports on 5 ThinkPads at once, but now I use an Opolar gaming fan and do one at a time so they don't raise the sea level by melting ice at the poles, since I only own one fan.

I am a carbon-based life-form, carbon-based life-forms should leave a carbon footprint and I want to leave a massive carbon footprint, as I am destined to in the future.


----------

