# Software Bloat



## mrbeastie0x19 (Oct 19, 2021)

Twenty years ago the most advanced games console I can think of was the PlayStation 2. It ran on a remarkable 32 MB of RAM (plus a small amount of other RAM, and by small I mean less than 10 MB). Today it is common to see a single program use that amount for an extraordinarily small task. This was a machine that could play sound and moving video, accept input and do networking, all in a remarkably small amount of RAM.

How is it that hardware has become so sophisticated, yet software has become so large that it requires all of it?

To put it into perspective, the cheapest machine I can find on Amazon right now has 2 GB of RAM. That's 62.5x the amount the PS2 had.


----------



## Alain De Vos (Oct 19, 2021)

Software did not become bad. Memory just became cheap.


----------



## mrbeastie0x19 (Oct 19, 2021)

Alain De Vos said:


> Software did not become bad. Memory just became cheap.


Why does it seem to need so much more just to achieve the same thing? I could understand it if it led to performance increases, but having used these old machines I don't think it has.


----------



## eternal_noob (Oct 20, 2021)

I believe it has a lot to do with media files. Textures, audio and so on. There are a lot of these nowadays and all have to be kept in memory.
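A quick sense of scale (hypothetical texture sizes, assuming uncompressed 8-bit RGBA, i.e. 4 bytes per pixel):

```python
def texture_mib(width, height, bytes_per_pixel=4):
    """Uncompressed texture size in MiB (assuming 8-bit RGBA)."""
    return width * height * bytes_per_pixel / 2**20

print(texture_mib(256, 256))    # 0.25 - a PS2-era sized texture
print(texture_mib(4096, 4096))  # 64.0 - a single modern 4K texture
```

One 4K texture alone is twice the PS2's entire main memory, before any audio, geometry, or code.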


----------



## Alain De Vos (Oct 20, 2021)

JavaScript code with a given piece of functionality needs more memory than the same thing programmed in assembly.
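The point generalizes to any high-level runtime; as a sketch, on a typical 64-bit CPython even a small integer carries object-header overhead that a bare machine word doesn't:

```python
import sys

# In assembly or C, an integer is a bare machine word: 4 or 8 bytes.
# In a dynamic runtime, every value drags along type and bookkeeping data.
print(sys.getsizeof(0))          # 28 bytes on a typical 64-bit CPython
print(sys.getsizeof([]))         # 56 bytes for an empty list
print(sys.getsizeof([0] * 1000)) # ~8 KB just for the pointer array
```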


----------



## richardtoohey2 (Oct 20, 2021)

mrbeastie0x19 said:


> Why does it seem to need so much more just to achieve the same thing?


Um, I think there's quite a difference between what you can do on the PS5 compared to the PS2.  Not sure gaming is the best example of software bloat, but agree it's out there - lots of frameworks and things like Electron.

Hardware is cheaper than programmers.


----------



## Jose (Oct 20, 2021)

richardtoohey2 said:


> Hardware is cheaper than programmers.


Exactly. Writing software for severely limited hardware takes more skill and time. There are also fewer people who can do it. The upside is you get more software, and it's cheaper. The downside is your laptop burning a hole in your pants every time you open a Web page.


----------



## Hakaba (Oct 20, 2021)

gaming is not a good example...
Let's do some maths.

The PS2 can output an image of up to 1920x1080 pixels, with a color depth of 8 bits (per color channel, plus transparency) and a 50 Hz refresh rate.

The PS5 can display a 4K (3840x2160) image with a color depth of 10 bits (per color channel, and 8 for transparency?) at up to 120 Hz.

What is the size of an image displayed on the PS2 versus the PS5?
And of a video?

And this does not include the sound capabilities, raytracing and so on.

On my Mac SE (8" black-and-white screen), every game I played was distributed on a 1.44 MB floppy.
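Plugging the post's numbers in (assuming 4K means 3840x2160, 8-bit RGBA for the 1080p frame, and 10-bit RGB plus an 8-bit alpha channel for the 4K frame), a quick back-of-the-envelope sketch:

```python
def frame_bytes(width, height, bits_per_pixel):
    """Size of one uncompressed frame in bytes."""
    return width * height * bits_per_pixel // 8

# 1080p, 8 bits per channel, RGBA -> 32 bits per pixel
ps2_frame = frame_bytes(1920, 1080, 32)
# 4K (3840x2160), 10 bits per color channel + 8-bit alpha -> 38 bits per pixel
ps5_frame = frame_bytes(3840, 2160, 38)

print(ps2_frame)               # 8294400  (~7.9 MiB per frame)
print(ps5_frame)               # 39398400 (~37.6 MiB per frame)

# Raw bandwidth for one second of video at each refresh rate:
print(ps2_frame * 50 / 2**20)  # ~395 MiB/s
print(ps5_frame * 120 / 2**30) # ~4.4 GiB/s
```

So a single modern frame is already bigger than the PS2's whole main memory.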


----------



## zirias@ (Oct 20, 2021)

The most simplified thing about this topic was already mentioned, but let's repeat:

Hardware is cheap, Software is expensive.

Software is expensive because software development is complex (and yes, hardware development is complex as well, but in a much more limited domain. The problems you solve in software are _much_ more varied).

So, if you can greatly cut down software development effort by using e.g. simplified languages, libraries, frameworks, and so on, and the price is you will need more RAM (for all that "hidden" complexity), that's a good deal.

Then, games are special, and development for a gaming console is even more special. Games (definitely 20 years ago) were relatively simple in structure. There's a limited amount of things that can happen (or that the player can do) during gameplay.

Furthermore, if you write a game for a console, you will find a special-purpose OS (if any) and a defined hardware platform. There's no need for the abstraction layers you need in typical portable PC software. You never have to think about other processes; there will only be those your game needs. Directly accessing audio and video hardware isn't a problem at all, because, again, you're "alone" on that machine and you know exactly which hardware is used.

Comparing the memory needed for this to programs running on a general-purpose multiuser/multitasking OS on a general-purpose machine is fundamentally flawed.

Finally, I think the "extraordinary small task" you talk about is also mistaken. Please compare such software to software on "desktop" machines 20 years ago. Back then, it wasn't uncommon that a crashing program would crash your whole system (yep, it was the time of e.g. win 9x on your typical x86). UIs were "simple", but very lacking in functionality (and UX). Few things, if any, were configurable by the user. Sound was an exclusive resource. Well, this list goes on, depending on which software you actually look at...

So now, what's "Software bloat"? Maybe the "excessive" use of libraries and frameworks. This _does_ happen. I'm looking for example at node and electron. I don't like them for other reasons (portability isn't as nice as advertised, packaging software using them is a PITA), but I personally wouldn't mind the "wasted" memory, if I can have a well-working application quickly, because the devs didn't have to "reinvent the wheel" over and over.

And finally, I just stumbled over this: https://v8.dev/blog/pointer-compression – IMHO it's an awesome example of how, today(!), at a lower level, a lot of effort is spent on optimizations. Doing it there makes a lot of sense because a lot of software will immediately benefit from it.
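For a rough feel of what pointer compression buys (an illustrative sketch, not V8's actual implementation; the heap base address below is made up):

```python
# Idea behind pointer compression: instead of full 64-bit pointers, store
# 32-bit offsets from a shared heap base and rebuild the pointer on use.
from array import array

N = 1_000_000
full_pointers = array("Q", range(N))  # 'Q': 8 bytes each, like raw 64-bit pointers
compressed = array("I", range(N))     # 'I': typically 4 bytes - offsets into a 4 GiB "cage"

HEAP_BASE = 0x7F0000000000  # hypothetical base address of the heap

def decompress(offset):
    # A full pointer is reconstructed as base + 32-bit offset.
    return HEAP_BASE + offset

print(full_pointers.itemsize)  # 8
print(compressed.itemsize)     # typically 4 -> half the per-pointer memory
```

For pointer-heavy heaps (which JS object graphs are), halving every pointer is a substantial saving, which is why doing it once in the VM pays off for everything running on top.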


----------



## sko (Oct 20, 2021)

Alain De Vos said:


> Software did not become bad.



I'd like to counter that... Today the approach to building a 'small' (in terms of functionality) program is sadly more often than not: "let's use this framework, which needs that ecosystem with this interpreter and drags in those few hundred libraries and dependencies, and needs exactly *this* version of that graphical framework and exactly *that* version of this obscure library someone abandoned in 2005."
Programs that used to be a few MB in size 20 years ago are now 500+ MB behemoths in total - often under the disguise of (non-existent) "cross-platform" support. Take for example all those fancy "desktop apps" (I hate that term) that are basically just a complete browser plus/or something horrible like Electron, while the actual program logic and functionality is just a few kB of actual code and could easily have been done without all of that bloat.

Yes, this isn't true or that extreme on all platforms, but *very* extreme on some. (E.g. android - which is a dumpster fire of bloat, bad code quality and horrible security practices)
It seems the profession of writing dedicated and optimized programs has nearly vanished or at least has been pushed back to some ecosystems (of which the BSDs are luckily one), and often it was replaced by just cobbling together a bunch of libraries and frameworks that drag in tons of dependencies and are a nightmare to support and keep working over time.


----------



## covacat (Oct 20, 2021)

still the specs of a pi zero looked high end 20 years ago
you could run win2k/xp without trouble in 256 megs 
i ran ms small biz server (nt4) WITH exchange on a K6 with 64 megs of ram serving 10-15 users
today a printer driver package is larger than win2k(3) and it clearly does A LOT less


----------



## mrbeastie0x19 (Oct 20, 2021)

I think some of you are taking my comparison with the consoles a little out of context. I used the PS2 because it was an example of a machine that did a lot with a small amount of memory, but consoles are still general-purpose computers. By all accounts the PS5 does memory optimisation much better than most, although the storage is sometimes questionable. A better example to think about is: why could I play a game in 32 MB 20 years ago when I now need that just to open a text editor? Also, for the people saying the comparison isn't fair because modern machines are more sophisticated multitasking ones: I do get that, but we also have massively parallel hardware compared to then.


----------



## mrbeastie0x19 (Oct 20, 2021)

covacat said:


> Zirias said:
> 
> 
> > Finally, I think the "extraordinary small task" you talk about is also mistaken. Please compare such software to software on "desktop" machines 20 years ago. Back then, it wasn't uncommon *that a crashing program would crash your whole system* (yep, it was the time of e.g. win 9x on your typical x86). UIs were "simple", but very lacking in functionality (and UX). Few things, if any, were configurable by the user. Sound was an exclusive resource. Well, this list goes on, depending on which software you actually look at...
> ...


Exactly. The comparison with respect to graphics makes perfect sense. Graphics have got more detailed, but what about simple UI sprites?

Also, I have highlighted a bit there. Is this not still the case? It is very common for me to have a single program stall my entire system, whether it be Chrome or another application. Ironically, part of that is lack of memory, which people on this thread are telling me is not so much of an issue. Sure, I could get more, but couldn't programmers just not be so wasteful?


----------



## sidetone (Oct 20, 2021)

mrbeastie0x19 said:


> I could get more but couldn't programmers just not be so wasteful?


No, because they don't get it, and will fight tooth and nail against something being better, because they don't understand.

It takes understanding the kind of algebra used to solve a calculus problem, not 3rd grade algebra. Now, many will say they get that, and maybe fewer actually do than claim it, but they still wouldn't be able to apply those concepts to anything else.

I've applied mathematical concepts to the workplace, even with physical objects, and they saw that what I did was better and accomplished more work faster with the same effort as before. Someone else did problem solving a bit quicker, but I was actually already doing that, and the person did something before I was going to do it. I also suspect it crossed the boss's mind to do the work that way, but he didn't think it could work; he understood it quickly when he saw it work, and he adopted it. I kept doing problem solving in my head, finding the best way to do more with the same effort, and doing it, even though I could only do one part of a task at a time, and other workers changed the setting by doing part of the work before I returned to complete the task. Then again, I was able to do this to a lesser, more basic extent at a previous workplace, before I took calculus.

The only way that will happen is by starting a new sub-project. Otherwise, naysayers will keep saying, "no, you can't." It doesn't make a difference whether it's software or something else. We're beyond the technology of 1910 because people didn't listen to them.


----------



## drhowarddrfine (Oct 20, 2021)

Nowadays, everything is "Fast, cheap or good. Pick two."


----------



## zirias@ (Oct 20, 2021)

BTW, as for comparing sizes, I just had a quick look at my latest tool, because:

- It does one "simple" (but, on a lower level, not _so_ simple) job and nothing else
- It's written in plain C with no dependencies outside the POSIX/C standard libraries, so it probably isn't suspect of "bloat"

and it looks like this:

```
root@mail:~ # ls -lh /usr/local/bin/remusockd
-rwxr-xr-x  1 root  wheel    43K Oct  9 09:57 /usr/local/bin/remusockd
root@mail:~ # file /usr/local/bin/remusockd
/usr/local/bin/remusockd: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, FreeBSD-style, stripped
root@mail:~ # ldd /usr/local/bin/remusockd
/usr/local/bin/remusockd:
    libthr.so.3 => /lib/libthr.so.3 (0x80024f000)
    libc.so.7 => /lib/libc.so.7 (0x80027c000)
```

So now I could say: damn, in 43K I could write a whole nice action game for the C64, with music and everything. And I would be right. But my codebase uses a modular and flexible design, which e.g. helps to prevent bugs, makes future changes easier, etc...

Yes, there's software out there that is "unnecessarily bloated". But there are also very valid reasons to _use_ the resources you have at hands.


----------



## kpedersen (Oct 20, 2021)

From what I have seen it is down to middleware.

Back in the day, a team would write their own, e.g., occlusion-culling engine that did the bare minimum for their purpose. Or at least the off-the-shelf ones were minimal in features, to fit the hardware of the time.

Now you would grab an off-the-shelf one and it would have every bell and whistle, even if you don't use 99% of the features. Yes the compiler can strip out unused code paths but there are a lot of things it still can't.

Now consider that a typical game includes dozens of different types of middleware. It's even worse if you look at prosumer/hobby game engines like Unity. The fact that they stuck an entire .NET VM and runtime in there means a spinning-cube program is already 10 times the size of the PS2's maximum memory.

It is costly to "re-invent" the wheel. But sometimes you simply can't beat a wheel invented for a specific task.

These days, if I need a library, I often start with an old revision (from ~2005) and build up from there. I usually don't need to backport any security fixes because these always only address issues caused by the later bloat.


----------



## Crivens (Oct 20, 2021)

drhowarddrfine said:


> Nowadays, everything is "Fast, cheap or good. Pick two."


That was yesteryear. Now it's pick any ONE.


----------



## byrnejb (Oct 20, 2021)

These days it is more like: "If it works be thankful. And, no, you do not get to pick."


----------



## jbo (Oct 20, 2021)

You guys forgot to complain about subscriptions


----------



## Crivens (Oct 20, 2021)

Just saw Teams installs a local version of Edge, keeps the previous version around, so you end up with 3 edge browsers on your company laptop. Plus chrome. What the...


----------



## hardworkingnewbie (Oct 20, 2021)

Alain De Vos said:


> Software did not become bad. Memory just became cheap.


One word: Electron.

And aside from that, the underlying problem is that programmers got lazy. Anyone who started programming in the 70s/80s, when memory was a scarce resource, needed to squeeze as much as possible out of the limited stuff. Today this is mostly a forgotten art, except in some niches like embedded systems.

Since memory is dirt cheap these days, programmers tend to treat it that way.


----------



## jbo (Oct 20, 2021)

hardworkingnewbie said:


> Today this is mostly a forgotten art, except in some niches like embedded systems.


Not sure whether I'd refer to embedded systems as a niche. As an embedded engineer I might be biased tho.
But then again... it seems like the "perceived definition" of embedded is changing too. Personally, I see more and more customers asking for some embedded engineering, and then it turns out that it's something based on a raspberry pi running a gazillion python scripts.


----------



## zirias@ (Oct 20, 2021)

hardworkingnewbie said:


> And aside from that, the underlying problem is that programmers got lazy. Anyone who started programming in the 70s/80s, when memory was a scarce resource, needed to squeeze as much as possible out of the limited stuff. Today this is mostly a forgotten art, except in some niches like embedded systems.


The thing is: Would you prefer to pay someone for weeks of "optimizing" they could also spend to actually satisfy business needs?


----------



## sko (Oct 20, 2021)

jbodenmann said:


> running a gazillion python scripts.



ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! lets use it for everything!"


----------



## hruodr (Oct 20, 2021)

What is bloat? A big program or a slow program? Speed does not depend on size, but on the number of steps until the goal is reached. A single line of code like "10 GOTO 10" is bloat in speed, because it never reaches its goal. A very big program may be very fast for every input. Today's bloat is both in size and in speed.


----------



## mrbeastie0x19 (Oct 20, 2021)

hruodr said:


> What is bloat? A big program or a slow program? Speed does not depend on size, but on the number of steps until the goal is reached. A single line of code like "10 GOTO 10" is bloat in speed, because it never reaches its goal. A very big program may be very fast for every input. Today's bloat is both in size and in speed.


I can forgive a size increase if it trades off with performance (a smaller program is not necessarily a faster or more responsive one) or makes development easier. However, as someone above said, they have three versions of Edge. Why? The weight of modern-day dependency graphs is madness.


----------



## Jose (Oct 20, 2021)

kpedersen said:


> Back in the day a team would write their own i.e occlusion culling engine which does the bare minimum for their purpose. Or at least off-the-shelf ones were minimal of features to fit the hardware of the time.


Yes, I worked there. We had "libcommon" or "liblocal" or "standard" that had all the things you need to write software, like lists, hashmaps, etc. They were all closed-source and had new and interesting bugs in them. If you were lucky there was a truly hair-raising library of concurrent programming abstractions.

The line between not-invented-here and dragging in the kitchen sink takes careful consideration, and many people are just not up to the task.



hardworkingnewbie said:


> And aside that the underlying problem is that programmers got lazy.


Some people I highly respect consider laziness a virtue in programmers.
https://wiki.c2.com/?LazinessImpatienceHubris (EDIT: Wow. That URL went wrong!)



sko said:


> ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! lets use it for everything!"


It's so much worse than that. Have you heard of Jupyter? Now you need a local Web server and a browser to write (bad) Python.


----------



## zirias@ (Oct 20, 2021)

Jose said:


> The line between not-invented-here and dragging in the kitchen sink takes careful consideration, and many people are just not up to the task.


It helps to be aware there _is_ actually a trade-off to consider.

And yes, home-grown libs implementing "basic" stuff that seems super simple (hashmaps – except if threads come to the table. or even stuff like a datetime type and methods) are a huge red flag.

When looking for libs to use, we tend to look for a solution providing "just enough" for our problem. Unfortunately, you won't always find something.


----------



## kpedersen (Oct 20, 2021)

Zirias said:


> When looking for libs to use, we tend to look for a solution providing "just enough" for our problem. Unfortunately, you won't always find something.


Agreed. This is exactly why as mentioned I start to look at older stuff. Yes, it may feel pretty weird at first but ultimately if a library has gotten objectively worse and regressed with cruft, it makes sense to use the version before the breakage.

The whole IT industry has tried to put fear into us that "old software is insecure". However code is still code. Almost always, the less there is, the better.


----------



## Jose (Oct 20, 2021)

Zirias said:


> And yes, home-grown libs implementing "basic" stuff that seems super simple (hashmaps – except if threads come to the table. or even stuff like a datetime type and methods) are a huge red flag.


There weren't many options back in the day when the product was software. No way the lawyers were going to let you use any GPL stuff, the commercial libraries had their own new and interesting bugs, and were expensive to boot.

I think that was the original reason why Java got so popular. It's a good language, but not spectacularly so. What was new and good was the rich core library it came with. Especially back then when everything had to be threaded and the POSIX threads libraries were a sick joke.


----------



## mrbeastie0x19 (Oct 20, 2021)

Jose said:


> There weren't many options back in the day when the product was software. No way the lawyers were going to let you use any GPL stuff, the commercial libraries had their own new and interesting bugs, and were expensive to boot.
> 
> I think that was the original reason why Java got so popular. It's a good language, but not spectacularly so. What was new and good was the rich core library it came with. Especially back then when everything had to be threaded and the POSIX threads libraries were a sick joke.


Yes, the core Java libraries are very good, and the implementations are good. I also think they stay away from implementing the kitchen sink; there are some common variations of things, but on the whole they're quite generic. For example, there are no graphs, but you can implement one easily with a list.
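For illustration, the adjacency-list idea sketched in Python rather than Java (a `HashMap<Node, List<Node>>` would play the same role there):

```python
# An undirected graph as an adjacency list: each node maps to the
# list of its neighbours. No dedicated graph type needed.
from collections import defaultdict

graph = defaultdict(list)

def add_edge(u, v):
    # Record each endpoint in the other's neighbour list.
    graph[u].append(v)
    graph[v].append(u)

add_edge("a", "b")
add_edge("b", "c")
print(graph["b"])  # ['a', 'c']
```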


----------



## Sevendogsbsd (Oct 20, 2021)

Software bloat is irrelevant to me. Having said that, I always have new hardware and that hardware is almost always high end, because I can afford it.  That’s not everyone’s situation and I understand that. I do see plenty of use cases where older hardware can be used so in those cases, using FOSS and a limited set of applications can work.

The web is terrible in my opinion. JavaScript is useful for devs and “can” make web applications more useful but can also ruin the experience and bring new hardware to its knees.


----------



## sidetone (Oct 20, 2021)

They don't get it, because they don't understand what bloat is and how much redundancy there is. They misattribute why there's redundancy. There are also excuses for multitudes of repeated redundancy. Even when code was cleaned up, those who never built or compiled things, and didn't know why, will later falsely attribute the improvement to hardware. It's obvious from some replies.


----------



## zirias@ (Oct 20, 2021)

sidetone said:


> They don't get it, because they don't understand what bloat is and how much redundancy there is. They misattribute why there's redundancy. There are also excuses for multitudes of repeated redundancy. It's obvious from some replies.


Please tell us about your experience in professional software development.


----------



## sidetone (Oct 20, 2021)

When a non-programmer actually pointed out redundancies, and they fixed it, reducing hours of compile time and thus making programs run faster, that tells you all you need to know about how bad the thought process is for many programmers. It even shows that good programmers didn't know where to start, or there was so much they didn't realize it could be fixed so well.


----------



## zirias@ (Oct 20, 2021)

IOW: none.


----------



## sidetone (Oct 20, 2021)

So this excuses terrible programming, and piling on dependencies, while not understanding why. Actually, my example is programming experience anyway.

It's better to have people who understand math and know how to apply that to unrelated subjects than it is for those who don't understand math. Because math goes a long way, into every single topic.

And telling someone something like "you don't do this" or "you don't have this" is usually a cheap cop-out; there's no logic involved in it. It's just because they don't want to hear something that actually has a lot of validity.


----------



## zirias@ (Oct 20, 2021)

I just don't see anything good in people talking about stuff they don't fully understand.


----------



## sidetone (Oct 20, 2021)

Zirias said:


> I just don't see anything good in people talking about stuff they don't fully understand.


You don't realize it, but that refers to you. Programming or anything without understanding math is inefficient. Of course you wouldn't understand, because even if you do understand advanced math, your arguments show that you don't know how to apply it.


----------



## zirias@ (Oct 20, 2021)

You're on very thin ice, my friend.


----------



## sidetone (Oct 20, 2021)

So only a programmer is able to tell that something runs better and compiles faster? It doesn't matter if someone else fixes something and makes it 10x better; that's irrelevant, because they're not a programmer?



Zirias said:


> You're on very thin ice, my friend.


Maybe you shouldn't be so argumentative in an antagonistic way.


----------



## zirias@ (Oct 20, 2021)

Maybe you should get to the point. If you managed to improve something, show sources. That would "count" as "experience".

Otherwise, don't tell me crap about maths. Studying computer science at a German university involves an awful lot of maths (more than you will ever need for the job, but that's fine, because universities are about science). And especially, don't tell me anything about software development, engineering, design, etc. That has been my profession for MANY MANY years now. So?


----------



## sidetone (Oct 20, 2021)

My only mistake is not taking out a full ad in a newspaper.

If a tree falls and no one heard it, you would claim the tree never fell. Even if it had an effect, which was so seamless. You may not know something happened, but to claim nothing ever happened is not a logical thought. You can't know everything that didn't happen, even when at least someone knows.

You also sound like you don't even apply what math you supposedly know.

Is a child not able to say the Leaning Tower of Piza isn't leaning, because he's not an engineer?


----------



## zirias@ (Oct 20, 2021)

Bla, Bla, Bla.


----------



## sidetone (Oct 20, 2021)

You're NOT intelligent.


----------



## Tieks (Oct 20, 2021)

sidetone said:


> tower of Piza


Hungry? 




Zirias said:


> I just don't see anything good in people talking about stuff they don't fully understand.


As a programmer of various application software I can assure you that talking with the users of that software is very useful.  Think of design flaws like implicit assumptions that turn out to be incomplete or even completely wrong. Users can tell you. It will usually lead to substantial improvements.


----------



## mrbeastie0x19 (Oct 20, 2021)

Tieks said:


> Hungry?
> 
> 
> 
> As a programmer of various application software I can assure you that talking with the users of that software is very useful.  Think of design flaws like implicit assumptions that turn out to be incomplete or even completely wrong. Users can tell you. It will usually lead to substantial improvements.


That is usually the basis of good UI design so I very much agree.


----------



## Crivens (Oct 20, 2021)

_View: https://m.youtube.com/watch?v=nI3C9yLVsVE_


This for the discussion.

And if the bickering does not stop maybe there will be trouble ahead.


----------



## mrbeastie0x19 (Oct 20, 2021)

Crivens said:


> _View: https://m.youtube.com/watch?v=nI3C9yLVsVE_
> 
> 
> This for the discussion.
> ...


Honestly, if you've ever had the misfortune of using the HP printer I own, I'm surprised the printer responds at all.


----------



## zirias@ (Oct 20, 2021)

Tieks said:


> As a programmer of various application software I can assure you that talking with the users of that software is very useful. Think of design flaws like implicit assumptions that turn out to be incomplete or even completely wrong. Users can tell you. It will usually lead to substantial improvements.


That's not the point. In software development, if you're not talking to the user, you're doing _everything_ wrong. But that's about requirements, UX, etc. A "user" telling you he knows better about technical matters? Ridiculous.


----------



## mrbeastie0x19 (Oct 20, 2021)

Sevendogsbsd said:


> Software bloat is irrelevant to me. Having said that, I always have new hardware and that hardware is almost always high end, because I can afford it.  That’s not everyone’s situation and I understand that. I do see plenty of use cases where older hardware can be used so in those cases, using FOSS and a limited set of applications can work.
> 
> The web is terrible in my opinion. JavaScript is useful for devs and “can” make web applications more useful but can also ruin the experience and bring new hardware to its knees.


I hope WASM will ease this, it is increasingly common to see transpilers for JS too, but yes the web is a monster.


----------



## mark_j (Oct 20, 2021)

mrbeastie0x19 said:


> Twenty years ago the most advanced games console I can think of was the PlayStation 2. It ran on a remarkable 32 MB of RAM (plus a small amount of other RAM, and by small I mean less than 10 MB). Today it is common to see a single program use that amount for an extraordinarily small task. This was a machine that could play sound and moving video, accept input and do networking, all in a remarkably small amount of RAM.
> 
> How is it that hardware has become so sophisticated, yet software has become so large that it requires all of it?
> 
> To put it into perspective, the cheapest machine I can find on Amazon right now has 2 GB of RAM. That's 62.5x the amount the PS2 had.


Forgive me if this has been raised before (I've not read all the other replies), but CPU architecture has something to do with it, as well as how some operating systems handle different architectures, even 'simultaneously', for example universal binaries on Apple and even WOW64 on Windows. This leads to hefty sizes.

With the architecture, the transition from 16-bit to 64-bit as the standard PC platform CPU has of course inevitably led to larger binary executables. Even so, a lot is hidden from the user when dynamic/shared libraries are called. So a program might look small(ish) in a directory listing but be very large once running, after pulling in massive libraries.

Finally, a lot of programmers are lazy. This is primarily the fault of OO languages and the indiscriminate pulling in of libraries to perform a simple task. Yes kpedersen, I read your comment and you are 100% correct about using the "correct wheel" rather than just the "wheel" provided by a large library.

More programmers should probably take a course on embedded programming...


----------



## richardtoohey2 (Oct 20, 2021)

Zirias said:


> The thing is: Would you prefer to pay someone for weeks of "optimizing" they could also spend to actually satisfy business needs?


This is my experience - the customers want to pay as little as possible for as much "working" code as possible and they want it as soon as possible.

Do they want to pay me for the smallest, fastest, cleverest code?  If NASA or medical equipment then probably yes.  For knocking up some website functionality, then no.  If it works and performs well enough, it's good enough - for them (paying the bills).

I think maybe the argument overall is too simple - there's different types of programming, different types of programmers, vastly different sets of requirements (embedded, gaming, graphics, desktop, server, OS, internet-facing, websites, short-lived, long-lived, etc, etc).

One size doesn't fit all, not every job needs a hammer.

For the issue of e.g. why do we have three browsers?  It's usually backwards compatibility.  The tax department site of country Z won't work with browser Y.Y but works with browser X.X.  Which is easier, ship X.X and Y.Y or get the tax department to change their website?


----------



## zirias@ (Oct 21, 2021)

richardtoohey2 said:


> vastly different sets of requirements


Note you're talking about _non-functional_ requirements here. Typically, there aren't many of them. Maybe you have a few about security or performance; most of the time, NOT about memory consumption (because, yep, often that just wouldn't make sense. You pay for every requirement, and NFRs tend to be especially expensive).

Of course there are scenarios where an NFR about memory consumption makes sense.


----------



## Hakaba (Oct 21, 2021)

For the web part, I am surprised by all these React/Angular/Next/... developers who use tons of JS instead of using HTML/CSS.

JavaScript is "transpiled" (via Babel or TypeScript) just because web developers refuse to write correct JavaScript for browsers...

I try to defend the "progressive enhancement" philosophy, but this is hard.

So I hope Svelte will be the next trendy framework. At least Svelte leaves the end user's CPU cold.
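For readers who haven't watched it happen, a transpiler rewrites newer syntax into older, widely supported syntax. A minimal hand-done sketch of the kind of rewrite Babel performs (the function names and data here are invented for illustration):

```javascript
// Modern JS as a developer might write it (ES2020 syntax):
const firstActive = (users) => users.find(u => u.active)?.name ?? "nobody";

// Roughly what a transpiler targeting ES5 would emit instead:
function firstActiveES5(users) {
  for (var i = 0; i < users.length; i++) {
    if (users[i].active) {
      // optional chaining + nullish coalescing, spelled out by hand
      return users[i].name !== null && users[i].name !== undefined
        ? users[i].name
        : "nobody";
    }
  }
  return "nobody";
}
```

Both behave the same; the transpiled form just carries more code, and real transpiler output (plus the polyfills that usually ride along) is part of where the payload bloat comes from.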


----------



## richardtoohey2 (Oct 21, 2021)

Hakaba said:


> because web developers refuse to write correct JavaScript for browsers...


Were you around in the old Internet Explorer/Netscape/Opera days?

"Correct" JS was next to impossible - so many rules, broken things, version dependancies.  The frameworks were a way out of that hell - you could concentrate on making the site "do stuff" instead of trying to fix some IE4 issue.  You were a lot more productive, and time is money.

It's gone a bit bonkers the other way with e.g. USB support in browsers.


----------



## Hakaba (Oct 21, 2021)

richardtoohey2 said:


> Were you around in the old Internet Explorer/Netscape/Opera days?
> 
> "Correct" JS was next to impossible - so many rules, broken things, version dependancies.  The frameworks were a way out of that hell - you could concentrate on making the site "do stuff" instead of trying to fix some IE4 issue.  You were a lot more productive, and time is money.
> 
> It's gone a bit bonkers the other way with e.g. USB support in browsers.


I wrote JS back in those days, and with a few really easy rules it is possible. Today that is called "progressive enhancement".
At the same time I was also a backend PHP dev and a CMS (with framework) dev, and I learned some best practices in high school with the Eiffel language.
Today I am a full-stack lead dev and software architect, but it is the same job. The main difference is the resistance to testing whether a function exists before using it, always having an acceptable fallback, and so on (checking variables and results inside functions, as I learned it in Eiffel's "design by contract"; now it gets called defensive programming...).
Even UI/UX people refuse fallbacks. You must have the same website on a 34" screen with 1440 px of height and on a 13" with 760 px, and (yesterday's case) no scrollbar on the 13" if there is no scrollbar on the 34"...
The complexity is much easier to handle today, but developers produce more crap. Why?
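The "test if a function exists before using it" habit is cheap to apply in vanilla JS. A minimal sketch, using `String.prototype.padStart` as the example feature (the helper name is my own):

```javascript
// Progressive enhancement in miniature: use the built-in when the
// engine provides it, otherwise fall back to a hand-written version.
function padLeft(str, width, ch) {
  if (typeof String.prototype.padStart === "function") {
    return String(str).padStart(width, ch); // modern engines
  }
  // Fallback for engines that predate padStart
  var out = String(str);
  while (out.length < width) {
    out = ch + out;
  }
  return out;
}
```

The same pattern scales up: check `window.fetch`, `document.querySelector`, and so on, and degrade gracefully instead of assuming the newest engine.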


----------



## Jose (Oct 21, 2021)

richardtoohey2 said:


> Were you around in the old Internet Explorer/Netscape/Opera days?


Quirks mode anyone?


----------



## drhowarddrfine (Oct 21, 2021)

Just now read this comment on Hacker News:


> The one thing you should understand is the average programmer is not a good programmer.
> The reason is simple.
> 
> The average programmer is a newbie with little to no experience.
> ...


----------



## mrbeastie0x19 (Oct 22, 2021)

richardtoohey2 said:


> Were you around in the old Internet Explorer/Netscape/Opera days?
> 
> "Correct" JS was next to impossible - so many rules, broken things, version dependancies.  The frameworks were a way out of that hell - you could concentrate on making the site "do stuff" instead of trying to fix some IE4 issue.  You were a lot more productive, and time is money.
> 
> It's gone a bit bonkers the other way with e.g. USB support in browsers.


UNIX philosophy has very much gone out the window with respect to browsers... They are basically operating systems within operating systems at this point.


----------



## Vull (Oct 22, 2021)

mrbeastie0x19 said:


> UNIX philosophy has very much gone out the window with respect to browsers... They are basically operating systems within operating systems at this point.


Because they present standardized OS service interfaces, browsers still provide the most cross-platform-compatible mechanism for deploying GUI applications that I'm aware of.


----------



## zirias@ (Oct 22, 2021)

Hm, "still" is not the correct word. Yes, browsers nowadays are "application platforms". You could argue whether this is a good approach. It makes deployment super simple, the user doesn't have to "install" anything but can use the application right away, but complexity of the browser itself is the price. Chromium takes a lot longer to build than the whole FreeBSD system. There are, of course, security implications.

I personally still prefer the "old" approach, at least for your typical "desktop app": Build it for every target platform, use toolkits/frameworks (e.g. Qt) so you don't need platform-specific code paths. IMHO, "web applications" are best where they were coming from: form-based and backend-driven.


----------



## Vull (Oct 22, 2021)

Zirias said:


> Hm, "still" is not the correct word. Yes, browsers nowadays are "application platforms". You could argue whether this is a good approach. It makes deployment super simple, the user doesn't have to "install" anything but can use the application right away, but complexity of the browser itself is the price. Chromium takes a lot longer to build than the whole FreeBSD system. There are, of course, security implications.
> 
> I personally still prefer the "old" approach, at least for your typical "desktop app": Build it for every target platform, use toolkits/frameworks (e.g. Qt) so you don't need platform-specific code paths. IMHO, "web applications" are best where they were coming from: form-based and backend-driven.


I meant "still" in reference to my personal awareness, rather than to any absolute judgment of what is the best, or "most cross-platform compatible" methodology for deploying software. In my retirement, I don't do enough research to justify making any such absolute judgments. I don't really know what the best way is, but still _nevertheless_ am curious, and have opinions about it.

This is "still" the best method I employ, and am aware of. I'm curious about any, potentially newer, alternatives. Browsers are complex, but most computers already have them anyway, so one big advantage is that I don't have to provide any client workstation installers or configuration guides.

I target individual platforms as little as possible, but "still" write software that can run on FreeBSD, Linux, MacOS, and Windows workstations equally well. Making users install nothing, or as little as possible, on individual client "workstations" was a primary motivation for the original endeavor. Typically our customers had Windows 98 or Windows XP computers sitting on their desk tops. These, running PowerTerm or PuTTY, were rapidly replacing dumb terminals as our customers' primary workstations. But we were also trying to move away from character-based applications, and into the strange new world of graphical user interfaces. For a while we wrote Microsoft Windows-targeted applications in Borland C++, but they lacked the un*x-like file-sharing and other multi-user features we needed. We also messed around with SCO's Tarantella and Sun's Java applets, but gave up on Java following the Microsoft vs. Sun Java wars of the late nineties and early noughts.

The approach I now use requires combining PHP, SQL, HTML, EcmaScript, and CSS to run applications in browsers. It's a trade-off between the complexity of platform-specific backends and the complexity of targeting browsers with multiple scripting languages. It might or might not be the best approach, but it does work well.


----------



## mark_j (Oct 22, 2021)

Vull said:


> Because they present standardized OS service interfaces, browsers still provide the most cross-platform-compatible mechanism for deploying GUI applications that I'm aware of.


Probably true, but then people I work with who do this web design thingy-stuff are constantly swearing at the various nuances of each of the browsers and they have only to support 4, 1 of which used to receive the most abuse: IE or now Edge.

You might be 90% there but that last 10% can cause you all sorts of nightmares, so the default position is: specify a limited number of browsers (eg 1, aka Chrome). So now you're back to coding for a specific OS, basically.

It's a fool's game.


----------



## Vull (Oct 22, 2021)

mark_j said:


> Probably true, but then people I work with who do this web design thingy-stuff are constantly swearing at the various nuances of each of the browsers and they have only to support 4, 1 of which used to receive the most abuse: IE or now Edge.
> 
> You might be 90% there but that last 10% can cause you all sorts of nightmares, so the default position is: specify a limited number of browsers (eg 1, aka Chrome). So now you're back to coding for a specific OS, basically.
> 
> It's a fool's game.


I used to support IE, Safari, Firefox, Chrome, and Opera. Edge had not yet been introduced at that time. It was indeed a nightmare; I'd first make a thing work on Firefox, and then often spend more time coercing it to work on the other browsers than it took to implement it in the first place. Now I only support Firefox, which is easy enough to install if you don't already have it, and readily available on every platform.

Out of curiosity I did look at Edge when it was relatively new and it seemed to be the worst behaved yet, worse even than IE in some ways.


----------



## dd_ff_bb (Oct 22, 2021)

mark_j said:


> they have only to support 4, 1 of which used to receive the most abuse: IE or now Edge.


It was true a while back, but now it's minimal. Regarding IE / Edge: they are completely different browsers. You are right about IE, because MS tried to implement their own standards (those kinds of dick moves were pretty popular back in the day). Edge is based on Chromium. There are now mainly 3 browser engines: Chromium-based, Firefox, and Safari. So MS is out of the game for now. To be honest they are pretty compatible across OSes and devices.



mark_j said:


> You might be 90% there but that last 10% can cause you all sorts of nightmares,


Previously the equation was almost the reverse, 10% to 90% (Edit: for software development), so we should be really thankful for how web apps work these days.
They are pretty much the same across devices: computers, phones, and tablets. You used to have to be a million-$$ company (if not billion) to be able to support that kind of infrastructure.
In my opinion the XMLHttpRequest object was the breakthrough that let programmers (and applications) reach the masses (among other techs from the same period).
And that's not to mention the cut in maintenance cost and manpower.



mark_j said:


> It's a fool's game.


So, as I tried to explain briefly, it is not anymore.


----------



## dd_ff_bb (Oct 22, 2021)

Vull said:


> Out of curiosity I did look at Edge when it was relatively new and it seemed to be the worst behaved yet, worse even than IE in some ways.


Probably what you tested was "Microsoft Edge Legacy", which used the MS engine, so it was a completely different browser.
The new Edge browser is Chromium-based and is also available on Linux platforms.


----------



## Hakaba (Oct 22, 2021)

Since Edge 79, the rendering has been the same as Chromium's.

And no, front-end devs do not write JS for the browser today. They write "meta JS" and use transpilation tools like Babel.
Some libraries exist only so people don't have to learn how to use JS (lodash, if that is the right name), and, thanks to Microsoft, there is TypeScript - the decision that all this nightmare needed to end.

But for me, vanilla JS is not that complex.

The same goes for shell scripts: I rewrote all my bash scripts in sh.


----------



## mark_j (Oct 22, 2021)

Vull said:


> Out of curiosity I did look at Edge when it was relatively new and it seemed to be the worst behaved yet, worse even than IE in some ways.


Well, my friend, that's because Microsoft programs it. If there's one sure way to screw something up, MS will do it. Look at Windows 10 for a good laugh.


----------



## hardworkingnewbie (Oct 22, 2021)

sko said:


> ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! lets use it for everything!"


You never tried Node.js, did you? There, even left-padding is too much for most programmers, so when the author of the left-pad module - 11 lines - pulled it, it literally broke an enormous amount of stuff. And exactly this is what I mean by lazy programmers.

Here you've got a whole ecosystem mainly built around people basically not knowing much about what they are doing at all. And on top of it now a GUI with Electron.

> How one programmer broke the internet by deleting a tiny piece of code
>
> A man in Oakland, California, disrupted web development around the world last week by deleting 11 lines of code.
>
> qz.com


----------



## mark_j (Oct 22, 2021)

sko said:


> ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! lets use it for everything!"


That is too funny!


----------



## mark_j (Oct 22, 2021)

hardworkingnewbie said:


> You never tried Node.js, did you? There, even left-padding is too much for most programmers, so when the author of the left-pad module - 11 lines - pulled it, it literally broke an enormous amount of stuff. And exactly this is what I mean by lazy programmers.
> 
> Here you've got a whole ecosystem mainly built around people basically not knowing much about what they are doing at all. And on top of it now a GUI with Electron.
> 
> ...


And the key to that is the quote in the article:


> That code can be used to add characters to the beginning of a string of text, perhaps a zero to the beginning of a zip code. It’s a single-purpose function, *simple enough for most programmers to write themselves*.



But because they're lazy, they don't. That's why I say JavaScript in particular is a pox on the world, and especially the internet.
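For scale: the entire function at the center of the incident amounts to something like the following (a from-memory sketch of the idea, not the exact npm code):

```javascript
// A hand-rolled left-pad: prepend `ch` (default " ") to `str`
// until the result is at least `len` characters long.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch === undefined ? " " : String(ch);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}
```

That a dependency this trivial could break builds worldwide says more about the dependency culture than about the function itself.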


----------



## zirias@ (Oct 22, 2021)

hardworkingnewbie said:


> You never tried Node.js, did you? There, even left-padding is too much for most programmers, so when the author of the left-pad module - 11 lines - pulled it, it literally broke an enormous amount of stuff. And exactly this is what I mean by lazy programmers.


First of all: LOL.

And then, sure, there's _some_ "lazy programmer" involved somewhere down the line. Looking at this code, it's ridiculous to build a package from it in the first place, and the code itself looks far from optimal (although I'm not sure you _can_ do better in JavaScript).

But the real problem is ideas like npm: a central package repository, builds pulling packages in dynamically, encouraging people to pull in whatever, and creating ridiculously deep, ever-changing dependency graphs (try to port some software using this mess; it's horrible) – there's just _no_ way to keep that stuff under control.


----------



## hardworkingnewbie (Oct 22, 2021)

NPM is just dependency hell perfected, yes. What's even more concerning is that some software projects now have Node.js as part of their build infrastructure, e.g. Firefox and Chromium.


----------



## Buck (Oct 22, 2021)

I think it's the novelty effect. We used to have stable, well-QA'd tools that didn't get updates often and didn't look fancy, but did the job quite well - I'm talking both hardware and software. Now delivering the whole package in less than 3 minutes - the quick gratification - is the number one priority, while bringing quality takes time, and very soon someone else publishes a similar piece of software and public interest shifts there, because _new_. Also, due in part to the effort-justification effect, authors who produce things quickly can just as quickly and easily abandon those 3-minute projects and move on to something else. So in essence it's a trade-off between time spent and code quality.

On the corporate side, being quick to market brings more money than being quality-oriented, as you can always buy your way out with endless excuses, happy-people graphics, smilies, and promises of a newer and greater version coming soon - the one that's usually even more buggy than the original. Our expectations have surely gone down over the years, especially with regard to stability.


----------



## Jose (Oct 22, 2021)

Buck said:


> Also, due in part to the effort-justification effect, authors who produce things quickly can just as quickly and easily abandon those 3-minute projects and move on to something else.


I'd never heard of that! It explains a lot of the pathologies I've seen in the software world. People think it's no good if it doesn't take any effort. That's why upgrade treadmills are so popular. The Python Steering Council feels like it's keeping the language "relevant," and hordes of pythonistas justify their salaries with all the "maintenance" they have to do to keep up with the latest stupid breaking changes.

I suspect that's one of the reasons FreeBSD has so little traction in the enterprise and elsewhere. The few times I've managed to talk a boss into letting me set up some FreeBSD machines, they worked so well and were so stable that we plain forgot about them. There was always some whiny NT or Linux server that needed the latest security patches, or some new whiz-bang offering from Microsoft that some idiot executive just had to have. Those platforms get all the attention, and therefore all the money. All you get when you point out that the FreeBSD mail exchanger you set up has 3 months of uptime and has needed 0 security patches is a "what's FreeBSD again?"


----------



## drhowarddrfine (Oct 22, 2021)

Jose said:


> There always was some whiny NT or Linux server that needed the latest security patches or some new whiz-bang offering from Microsoft that some idiot executive just had to have. Those platforms get all the attention, and therefore all the money.


Reminds me of a TV car commercial that showed a guy talking about how much he loved the model car he bought. He bought a new one every three years. "If they weren't so great, why would I buy so many of them?"


----------



## Hakaba (Oct 23, 2021)

For the web, another reality is that all the code goes in the trash after 1, 2, or 3 years, so investing in quality is not a priority.


----------



## Vull (Oct 23, 2021)

Hakaba said:


> For the web, another reality is that all the code goes in the trash after 1, 2, or 3 years, so investing in quality is not a priority.


I must be misreading this. I have no idea what you mean. If what you say is true, then all the people who use Wordpress are sure going to be surprised. The PHP, CSS, and ECMAScript code I wrote 15-20 years ago runs better and faster now than ever, and has required surprisingly little maintenance over the intervening years. It probably helps that none of it incorporates node.js, npm, or any other such 2nd-hand code libraries.

These software technologies have been, and seem sure to be, around for a long long while.


----------



## ralphbsz (Oct 23, 2021)

Hakaba said:


> For the web, another reality is that all the code goes in the trash after 1, 2, or 3 years, so investing in quality is not a priority.


That's actually quite a general statement. Most software has a surprisingly short shelf life. If you look at software that is actively supported and being maintained (which in professional production environments is the rule), most lines of code get touched every few years. And I don't mean that whole products get obsoleted and thrown away: Within an existing system of programs, most lines of code get touched and worked on regularly.

Yes, there are exceptions, including "dusty decks" for which the source code has been lost, and where compiled executables are being run, often on emulated systems. But nearly all software had a thorough once-over for Y2K, so for most of it the maximum age is about 21 years. If we assume a uniform rate of change, that puts the average at 10.5 years. But given that the amount of software development has been growing exponentially, the average piece of software is relatively young.

On a previous project that I worked on a few years ago (about 5M to 10M lines of code), the oldest source code was from the mid 1980s, or about 30 years old at the time. We had some files with copyright dates from back then; in some cases, the original authors were no longer alive. But we only had hundreds of lines of code that were this old; most of the millions of lines were much more recent. Another project I worked on (which had grown to a measured 17M lines in the early 2010s) had no line of code that was older than 12 years. And I had left the company about 2 years after the project started (it was less than 100K LOC when I left), and according to the source control, there were no lines of code that I wrote left.

So having established that code changes relatively quickly, one might jump to the conclusion that quality doesn't matter. I think that this is in general utterly wrong, and extremely dangerous. What has to be of very high quality is the overall architecture, the big design choices, and the style. Otherwise enhancement and maintenance become prohibitively expensive or outright impossible. Each individual piece or module is replaceable; the overall artifact is a big investment.


----------



## Hakaba (Oct 23, 2021)

Vull said:


> I must be misreading this. I have no idea what you mean. If what you say is true, then all the people who use Wordpress are sure going to be surprised. The PHP, CSS, and ECMAScript code I wrote 15-20 years ago runs better and faster now than ever, and has required surprisingly little maintenance over the intervening years. It probably helps that none of it incorporates node.js, npm, or any other such 2nd-hand code libraries.
> 
> These software technologies have been, and seem sure to be, around for a long long while.


Sorry, I wrote "web" but that is only true for front-end dev in JavaScript (which is what I had in mind).
CMSes (like TYPO3) have very good code quality and have existed for years.
But the JavaScript behind a "click to chat" button has a short life.
I observe nevertheless that this quick-and-dirty approach has contaminated Node.js development (in my limited experience; that is not a general overview).


----------



## Vull (Oct 23, 2021)

Hakaba said:


> Sorry, I wrote "web" but that is only true for front-end dev in JavaScript (which is what I had in mind).
> CMSes (like TYPO3) have very good code quality and have existed for years.
> But the JavaScript behind a "click to chat" button has a short life.
> I observe nevertheless that this quick-and-dirty approach has contaminated Node.js development (in my limited experience; that is not a general overview).


Sorry if I misunderstood your post. I write all my javascript from scratch, often following the example code snippets from w3schools, but every line I write is my own. Javascript manipulation of the DOM and XMLHttpRequest are essential components for exploiting the clients' browsers to their fullest potential, and for providing smooth and dynamically changing user interfaces, with minimal network I/O. Without javascript I could offer nothing much better than static HTML forms.


----------



## Buck (Oct 23, 2021)

Vull said:


> ...are essential components for exploiting the clients' browsers to their fullest potential, and for providing smooth and dynamically changing user interfaces...


What even is "_the fullest potential_"? Is it maybe the reason we need to upgrade to the latest XY-core Ryzen each year, just to view animations that entertain at best and annoy at worst? A great website, in my view, is one that has 1. original information or research, presented in 2. an accessible format. The rest is just fluff. You can easily do that with CSS: 5% design, 95% content.


----------



## drhowarddrfine (Oct 23, 2021)

Vull said:


> XMLHttpRequest are essential components for exploiting the clients' browsers to their fullest potential


XHR is only for sending requests to the server and nothing beyond that. It's one function call and nothing more. Let's not give it any more credit than that.


----------



## zirias@ (Oct 23, 2021)

And it's a misnomer from times when "everything" was XML. Nowadays, it's mostly used to consume REST APIs talking JSON…


----------



## grahamperrin@ (Oct 23, 2021)

richardtoohey2 said:


> Were you around in the old Internet Explorer/Netscape/Opera days? …


Yes.

`/me shudders, slightly`



hardworkingnewbie said:


> One word: Electron. …



Without reference to _bloat_: is it reasonable to assume that maintenance of devel/electron12 might be occasionally challenging? <https://cgit.freebsd.org/ports/tree/devel/electron12>, in particular the Makefile and long list of files.


----------



## Vull (Oct 23, 2021)

Buck said:


> What even is "_the fullest potential_"? Is it maybe the reason we need to upgrade to the latest XY-core Ryzen each year, just to view animations that entertain at best and annoy at worst? A great website, in my view, is one that has 1. original information or research, presented in 2. an accessible format. The rest is just fluff. You can easily do that with CSS: 5% design, 95% content.


I refer to the browser's "fullest potential" for sharing the work between client and server. I don't display animations. Without XHR and DOM manipulation, the server has to do the lion's share of the work. Let me show an example. Here's a customer maintenance form with a scrollable customer search box, full of test data, which can be indexed by name, phone number, or customer number. If I change the search order by clicking on one of the search box's column headings, it visibly resequences the items in the search box.





To do this without using XHR and DOM manipulation, you'd have to submit the whole form from the client to the server. The server would then have to reformat the entire web page, with the search box items in the new sequence, and send it back to the client. The browser would then have to redisplay the whole page, and there would be a visible "stutter" on the display.

But in this example, XHR lets you send a simple, relatively much shorter request, from the client to the server. The server then sends back only the resequenced items in the search box. These are then displayed, not by redisplaying the whole form, but rather, by smoothly updating the document object model inside the browser, using ECMAScript. The display will not stutter, and there will have been the minimum necessary network I/O required to implement the entire transaction.

This doesn't require the latest Ryzen XY-core. I can run both the client and server ends of this on my 32-bit Dell Latitude laptop, and without any appreciable strain whatsoever on the processor.
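The round trip described above can be sketched in a few lines. The endpoint path, element id, and row fields below are invented for illustration; in Vull's setup the server returns the rows pre-sorted, so the pure `resequence` helper just makes the re-ordering logic visible:

```javascript
// Re-order rows by a column key, without mutating the input.
function resequence(rows, key) {
  return rows.slice().sort(function (a, b) {
    return a[key] < b[key] ? -1 : a[key] > b[key] ? 1 : 0;
  });
}

// Browser wiring as it might look (hypothetical endpoint and element id;
// defined but not called here, so the sketch stays self-contained):
function reloadSearchBox(sortKey) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/customers?sort=" + encodeURIComponent(sortKey));
  xhr.onload = function () {
    var rows = JSON.parse(xhr.responseText);
    var box = document.getElementById("customer-search-box");
    box.innerHTML = ""; // only the search box is touched, not the page
    rows.forEach(function (r) {
      var div = document.createElement("div");
      div.textContent = r.name + "  " + r.phone;
      box.appendChild(div);
    });
  };
  xhr.send();
}
```

Only the search box's subtree is rebuilt; the rest of the form never leaves the screen, which is exactly why there is no stutter.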



drhowarddrfine said:


> XHR is only for sending requests to the server and nothing beyond that. It's one function call and nothing more. Let's not give it any more credit than that.


One other noteworthy feature of XHR is that it can be done invisibly, "behind the scenes" so to speak, without disturbing the browser's display, or causing it to "stutter."



Zirias said:


> And it's a misnomer from times when "everything" was XML. Nowadays, it's mostly used to consume REST APIs talking JSON…


True enough I suppose. I believe the name XMLHttpRequest was originally coined by Microsoft, so there's that. But if it wasn't a useful idea, I doubt it would have ever gone any further than Microsoft.

The data transmitted via XHR doesn't have to be in XML format, although it can be, and sometimes is.

I'll take your word on the REST APIs and JSON, since I'm unfamiliar with all that.


----------



## dd_ff_bb (Oct 23, 2021)

drhowarddrfine said:


> Let's not give it any more credit than that.



I think we should. Here is a simple scenario: you have web-based software - let's call it "courier/shipping software". The courier company has 100 customers, and the customers ship packages, so it has to invoice them periodically. Say each customer ships 100 packages per week. It's Monday and we are going to charge all those shipments, prepare the invoices, and send out the invoices plus notifications.

Without XMLHttpRequest:

Load the list of customers to be charged today onto a web page with basic info (shipment count, total amount, surcharge amount, etc.).
We cannot load all of this onto a single page (10,000 records: 100 customers x 100 shipments) and operate on them, unless we play around with the script memory limit, script time limit, max body size, etc. (and even then it is pretty hard).

So we split it into 10 customers per page, meaning we have to load 10 pages to operate on 100 customers.
Now we are on the 1st page with 10 customers loaded; we select 6 of them and click the calculate button (we have to review the numbers before we actually charge them).
We are dealing with 600 records at this point (100 packages x 6 customers). Say each shipment takes 0.5 seconds to calculate (compute dims, volumetric weight, find the zone, find the rates, apply them, etc.).
Once we click calculate, we have to wait 300 seconds (100 packages x 6 customers x 0.5 sec). Of course we cannot expect the accounting person to wait 300 seconds just to calculate charges for 6 customers.
So we decide to use the DOM and an iframe. We split the load into chunks to give an interactive impression to the person using our app. On clicking calculate, we send the 1st customer's account number to the iframe (via the DOM) and call our script; the server-side script does the calculation and shows the results. Once the server-side script finishes loading, we call the parent window to send the next number and repeat the process. (Without the DOM we are completely stuck.)

At this point we have lost all info (visually) related to the previous results (there are workarounds, but that's not the focus).
Also, don't forget the user will still be staring at an empty iframe for 0.5 sec per shipment x 1 customer x 100 shipments = 50 seconds.

I won't continue with the rest of the process, but just to mention: to keep this post brief, I didn't go into the traffic load (you have to load the same page over and over), the processor load (you have to render the same script over and over), or other factors.

(And before anyone throws the "you are a bad coder, a calculation cannot take 0.5 sec" argument: change it to 0.1 or 0.01 if it makes you happy. It won't change the point.)

With XMLHttpRequest:

Load the list of customers to be charged today onto a web page with basic info (shipment count, total amount, surcharge amount, etc.).
We load 20 of them; if needed, the user can click a "load more" or "next" button to get the rest of the list (we are still on the same page).
We select 20 of them and click the calculate button. As soon as we click calculate, we start to see results:
Shipment 1: $10 transportation charge, $2 fuel charge, Zone 1, etc.
So while we are waiting, we see almost real-time results (no more timeouts, traffic load, etc.).
During this time we can do other things with our software within another DIV on the same page, while keeping an eye on the results.
And don't forget we can select all 100 customers and do the calculation at once; we don't have to go 6 or 10 customers per click. (Which is practically impossible without XMLHttpRequest.)

And more importantly:

Beyond the technical advantages (bandwidth, processing, etc.), XMLHttpRequest gives users the impression that they are using a desktop application.
After XMLHttpRequest, companies started to look at the web as a software platform and started delivering their products as web-based instead of desktop/compiled applications.
And if you don't know anything else, know this: if you can use QuickBooks Online or an office suite online, that's because of XMLHttpRequest.

So yes, XMLHttpRequest deserves a lot of credit.
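The chunking strategy described above boils down to a small piece of bookkeeping: slice the customer list, compute each chunk, and append partial results as you go. A simplified synchronous sketch (in the real app each chunk would be one XHR, with the next chunk kicked off from its callback; `calculate` stands in for the server-side rate computation, and the names are my own):

```javascript
// Process customers one chunk at a time, accumulating partial results
// so the user sees charges appear as they are computed.
function calculateInChunks(customers, chunkSize, calculate) {
  var results = [];
  for (var i = 0; i < customers.length; i += chunkSize) {
    var chunk = customers.slice(i, i + chunkSize);
    chunk.forEach(function (c) {
      results.push({ customer: c, charge: calculate(c) });
    });
    // In the real app, this is where the chunk's partial results would
    // be appended to the page before the next chunk is requested.
  }
  return results;
}
```

The accounting person never waits for the whole batch; each chunk's results land on the page while the next chunk is still in flight.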


----------



## dd_ff_bb (Oct 23, 2021)

Vull, you beat me by a minute


----------



## grahamperrin@ (Oct 23, 2021)

hardworkingnewbie said:


> …
> 
> 
> 
> ...



Love it. The freedom to protest and remove. 





<https://web.archive.org/web/2016032...e/i-ve-just-liberated-my-modules-9045c06be67c> "… Cheers" | moved to <https://kodfabrik.com/journal/i-ve-just-liberated-my-modules>


----------



## kpedersen (Oct 23, 2021)

Vull said:


> The browser would then have to redisplay the whole page, and there would be a visible "stutter" on the display.


This AJAX approach does avoid visual stutter. However it *is* more complex in how it works and browser requirements. You are making a trade for additional complexity vs "visual experience".

It is not a trade that I would particularly disagree with given your requirements, but bloat appears when more and more of these compromises are made. From this point on, there may be two types of users:

1. Someone like myself who actually doesn't care about visual stutter and often has such shaky internet that a page refresh is nicer than a potential AJAX request timeout.
2. A "cool" user who might not want the default HTML buttons but instead cool glowing buttons that fade in and out when hovered over.

Most projects will always try to please user #2 because... "progress".


----------



## zirias@ (Oct 23, 2021)

kpedersen said:


> You are making a trade for additional complexity vs "visual experience".


You can avoid that "trade" by employing progressive enhancement (in a nutshell, deliver a nice backend-driven web application e.g. following ROCA principles and add "bells and whistles" *if* the browser supports it). But once you're in SPA land, using "great" frameworks like React, Angular, whatnotever, everything is lost.
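Progressive enhancement can be sketched in a few lines. This is a toy illustration with hypothetical names, not ROCA itself: `env` stands in for the browser's `window` object, and the returned strings stand in for the two behaviors (plain form POST with a full page reload vs. background submission):

```javascript
// Progressive enhancement in a nutshell: the baseline behavior always
// works; the "bells and whistles" are wired up only when the environment
// actually supports them.

function enhance(env, form) {
  if (typeof env.XMLHttpRequest !== 'function') {
    return 'baseline'; // no XHR support: plain form submit, full page reload
  }
  // Capable browser: intercept submission and send it in the background.
  form.onsubmit = () => 'ajax';
  return 'enhanced';
}

// Usage with two mock environments:
console.log(enhance({}, {}));                                 // "baseline"
console.log(enhance({ XMLHttpRequest: function () {} }, {})); // "enhanced"
```

The design point is that the SPA approach inverts this: without JavaScript there is no baseline at all.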


----------



## kpedersen (Oct 23, 2021)

Zirias said:


> and add "bells and whistles" *if* the browser supports it


I would say that my browser supports pretty much everything. Yet I still would rather the web pages were simple (X)HTML. I wish there was a browser setting akin to DoNotTrack that says "keep your tacky gimmicks to a minimum please".

Absolutely agree that once a framework is involved, the developer basically gets strung along by their nose.


----------



## Buck (Oct 23, 2021)

Vull said:


> Here's a customer maintenance form with a scrollable customer search box


Well, that is fair, when there's literally no other way to do your presentation and folks really don't like refreshes. But I was thinking about the general web, not corporate backends. Of course there are other, way better solutions than a webpage with javascript in it, just much harder to program. Like a custom solution written in an OS-oriented language, that is querying an SQL database.


----------



## grahamperrin@ (Oct 23, 2021)

MrSalty said:


> So one can build a desktop app with JavaScript



– ignores the question.


----------



## Vull (Oct 23, 2021)

kpedersen said:


> This AJAX approach does avoid visual stutter. However it *is* more complex in how it works and browser requirements. You are making a trade for additional complexity vs "visual experience".
> 
> It is not a trade that I would particularly disagree with given your requirements but bloat appears when more and more of these compromises are made. From this point, there may now be two type of users:
> 
> Someone like myself who actually doesn't care about visual stutter and often has such shaky internet that a page refresh is nicer than a potential AJAX request timeout. ...


Thank you for reminding me of the term AJAX, because I had forgotten it. Having just now looked it up again, I suppose that the term could be applied to the approach I gradually developed out of my own 2004 era research, but I've never described my approach that way, or applied that term to it.

The last time I recall encountering the term was probably around 2011. At that time, the references I read associated it with the use of JSON, which I rejected out of hand after about 4 hours of experimenting with it. I do not use JSON, jQuery, ASP.NET, Ruby on Rails, or any other so-called AJAX "framework" and rejected them all on first encounter. I write all my javascript from scratch, and most of what I know about XHR technology comes from what I read on the w3schools.com site.

I don't use XHR indiscriminately, or for everything, but I could not implement the search box shown in my example above properly without it. That test customer table contains 100,000 records, but I can scroll through it from top to bottom in a number of seconds, on any of those three indexes, or switch indexes with the click of a column heading. I can also pop a search box up dynamically on the screen with the click of a button, to wit:








I find the XHR approach to be much faster than a total screen refresh, rather than slower, and it requires considerably less network traffic, because the total amount of network I/O is much lower. I can't find any evidence that XHR GETs and PUTs are inherently slower at the networking level. If you can show me such evidence I'd love to review it. What I have read suggests that such delays are probably due to (1) overuse, (2) concurrent requests, and (3) some of those AJAX frameworks that I don't use in my own implementations.



> ...
> 2. A "cool" user who might not want to have the default HTML buttons but instead have cool glowing buttons that fade in and out when hovered over.
> Most projects will always try to please user #2 because... "progress".


Besides being "cool" (lol), different browsers and operating systems have their own particular button appearances and sizes. CSS buttons provide the same sizes and appearances in a cross-platform compatible way.

Other "cool" features which don't always involve XHR, but do require well-written, unclunky, and non-bloated javascript include drag-and-drop features, time-out avoidance, and progress bar/counter displays.

Time-out avoidance? Time-consuming processes like end-of-day updates, end-of-month updates, end-of-year updates, backup, and restore typically require so much server time to process that some kind of additional measures must be taken to avoid getting HTTP 408 "Request Timeout" errors in the browser. One good way to avoid such errors is to run a job control process in javascript, breaking the overall process into smaller chunks, which can be sent, one request at a time, to the server, using XHRs, while displaying either a progress bar or progress counter of some kind in the browser between requests.
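That job-control idea can be sketched as follows. The names are illustrative, not from the original post: `runChunkOnServer` stands in for one short XHR that asks the server to process one chunk, and `onProgress` stands in for updating a progress bar or counter in the browser:

```javascript
// Break one long-running server job into many short requests, so that no
// single HTTP request runs long enough to hit a 408 Request Timeout.

function runChunkOnServer(jobId, chunkNo) {
  // Stand-in for one short XHR: "process chunk N of job X" on the server.
  return Promise.resolve({ jobId, chunkNo, done: true });
}

async function runJob(jobId, totalChunks, onProgress) {
  for (let chunkNo = 1; chunkNo <= totalChunks; chunkNo++) {
    await runChunkOnServer(jobId, chunkNo); // one small request at a time
    onProgress(chunkNo, totalChunks);       // e.g. update a progress bar
  }
  return totalChunks;
}

// Usage: an end-of-day update split into 5 chunks.
runJob('end-of-day', 5, (done, total) =>
  console.log(`processed ${done}/${total}`));
```

Between chunks the browser regains control, which is also what keeps the page responsive while the job runs.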



Zirias said:


> You can avoid that "trade" by employing progressive enhancement (in a nutshell, deliver a nice backend-driven web application e.g. following ROCA principles and add "bells and whistles" *if* the browser supports it). But once you're in SPA land, using "great" frameworks like React, Angular, whatnotever, everything is lost.


These applications are intended for office personnel who have capable web-browsers. They can't be run on smart phones or PDA devices. Those are complications I've never had to deal with and don't intend to deal with.



Buck said:


> Well, that is fair, when there's literally no other way to do your presentation and folks really don't like refreshes. But I was thinking about the general web, not corporate backends. Of course there are other, way better solutions than a webpage with javascript in it, just much harder to program. Like a custom solution written in an OS-oriented language, that is querying an SQL database.


My applications do query SQL databases, but customized OS-specific solutions are precisely what I'm trying to avoid, and, on one level or another, have been trying to avoid for the past 20 plus years.


----------



## hardworkingnewbie (Oct 24, 2021)

Buck said:


> What even is "_the fullest potential_"? Is it maybe the reason we need to upgrade to latest XY-core Ryzen each year just to view animations that serve nothing but entertain at best and create an annoyance at worst? A great website in my view, is the one that has 1. The original information or research presented in 2. Accessible format. The rest is just fluff. You can easily do that with CSS. 5% design, 95% content.


Well, that would be a dream come true! The reality is much, much _worse_!

When Electron was younger, around 2017, people reported CPU usage of up to 15% on an idle Electron desktop app. The app was Visual Studio Code from Microsoft.

Somebody looked into it and found that it took 13% CPU just to render the blinking cursor. No joke! The reason is that it depended on Chromium to do that rendering, and Chromium itself had that inefficiency.





Blinking cursor devours CPU cycles in Visual Studio Code editor | "Crappy Chromium code is the culprit, we're told" | www.theregister.com



Of course that's not the end of it; a bug report from 2020: moving the mouse increases CPU usage of the renderer process to 7-10%. That's what I call a highly consistent framework in terms of delivering shit performance!









Moving mouse increases CPU on renderer process to 7-10% · Issue #22459 · electron/electron | "Electron Version: 7.1.8 Operating System: macOS 10.15.3 Expected Behavior Moving mouse should have a minimal impact on CPU. Actual Behavior Moving mouse increases CPU usage of renderer to 7-10%. To..." | github.com




And the standard excuse from the Electron guys is always "Not our fault, this comes from Chromium." WRONG! It's their fault, because they chose crappy software as the foundation for their own stuff.


----------



## grahamperrin@ (Oct 24, 2021)

hardworkingnewbie said:


> comes from Chromium



1188505 - Moving the mouse increases CPU significantly - chromium


----------



## Buck (Oct 24, 2021)

As an aside, or maybe not: even using Electron doesn't guarantee the end users get all the benefits, like the portability to run on toasters. We've been waiting literally for years for Deezer to publish their awesome Electron-based app for Linux. Years later some folks came along and repackaged it, but there's still no official word from Deezer. As you can imagine, most of what this app does is display a Deezer website page inside a window.


----------



## zirias@ (Oct 24, 2021)

This whole Electron stuff is a plague (and it seems Microsoft especially is excited about using it; see VS Code, Teams, Skype, ...). Yes, it's portable "in theory", but the result still contains code compiled for the target platform, so with closed-source projects you still only get the platforms supported by the vendor.

The whole thing reminds me of a bad idea from about 10 years ago: ASP.NET WebForms. Back then, Microsoft had this "great" idea: hey, every developer knows how to develop an event-driven desktop app, so let's make developing a web app the same. The result was pure horror. Sure, you had this programming model of "add an event handler to the click event of this button" that somehow worked in the backend (on the server). But the price was, among other crap, a huge "viewstate" blob doing each roundtrip in a hidden form field…

Now, Electron seems to be the same "great" idea, but in the other direction: hey, every developer knows how to develop client-side web apps with "awesome" JavaScript frameworks, so let's just make development of desktop apps the same. History repeating, still crap.


----------



## drhowarddrfine (Oct 24, 2021)

dd_ff_bb What you are doing is describing what XHR is used for. What the other poster was describing was XHR being God's gift to the internet and the universe, world peace, and the end of hunger. XHR is not that. (I may have exaggerated the previous claims.) It is a function that does one thing and does it well. It does not prevent visual stutter, and a lot of the things you list are not done by XHR but by other processes not involving XHR at all.

XHR can send and receive your data in the background. How it gets handled and displayed is entirely other processes. That's why I said you shouldn't give XHR more credit than what it deserves.


----------



## hardworkingnewbie (Oct 24, 2021)

grahamperrin said:


> 1188505 - Moving the mouse increases CPU significantly - chromium


My favorite comment by one of the Chromium project members: 
_I don't see anything out of the ordinary in the trace. The mouse moves are fired at a frequency of 60Hz. There does appear to be a lot of javascript listeners for mouse out, over, move so I imagine that is a fair amount of the work being done. Over for the input team to look at the trace further._

Chromium must surely be a premium-quality code base, straight out of thedailywtf.com!


----------



## Jose (Oct 24, 2021)

Mr. Salty said:


> Hasn't everyone been told that desktop apps are dead and browser based apps are the future?  If so, then what's the reason for Electron, or desktop apps do have a future?


Electron allows you to have almost a single code base for your web, desktop, and, most importantly, phone apps. See how Discord does it, for example:




Discord isn't an Electron app, its React native. And as a frequent user on my Ma... | Hacker News | news.ycombinator.com




Again it comes down to minimizing the number of programmers you need, and the skills those programmers have to have.


----------



## grahamperrin@ (Oct 24, 2021)

hardworkingnewbie said:


> > … Over for the input team to look at the trace further.



FYI <https://bugs.chromium.org/p/chromium/issues/detail?id=1188505#c8>

You can perform the steps to reproduce with firefox-93.0_2,2, varied to include movement up and down.


----------



## Buck (Oct 24, 2021)

Jose said:


> and the skills those programmers have to have.


this


----------



## Vull (Oct 24, 2021)

drhowarddrfine said:


> dd_ff_bb What you are doing is describing what XHR is used for. What the other poster was describing was XHR being God's gift to the internet and the universe, world peace and ending hunger. XHR is not that. (I may  have exaggerated the previous claims.) It is a function that does one thing and does it well. It does not prevent visual stutter and a lot of the things you list are not done by XHR but other processes not involving XHR at all.
> 
> XHR can send and receive your data in the background. How it gets handled and displayed is entirely other processes. That's why I said you shouldn't give XHR more credit than what it deserves.



The snippet you quoted me on (post #85):


> XMLHttpRequest are essential components for exploiting the clients' browsers to their fullest potential



The full sentence I posted (post #83):


> _Javascript manipulation of the DOM and_ XMLHttpRequest are essential components for exploiting the clients' browsers to their fullest potential, and for providing smooth and _dynamically changing_ user interfaces, with minimal network I/O.



But you're right. Even javascript isn't likely to end world hunger, and XHR alone will not prevent visual stutter. It takes careful painstaking programming, and a lot of implementations I've seen just don't do it right. I don't use any of the popular AJAX frameworks discussed above, and don't recommend them to anybody. You and I might not see eye to eye about javascript in general, but you make good points overall.



Mr. Salty said:


> Hasn't everyone been told that desktop apps are dead and browser based apps are the future?  If so, then what's the reason for Electron, or desktop apps do have a future?


I'm also not saying that desktop apps are dead; they just aren't my specialty. There's more than one way of doing things. I like writing browser-based apps, but desktop apps have no doubt improved a lot since I retired. I'm just one more old dinosaur, not at all up to date on desktop apps. Programming is just a hobby for me now, something to keep me out of mischief in my decrepitude.


----------



## drhowarddrfine (Oct 24, 2021)

Mr. Salty No. Lisp

Vull  Since I closed my web dev business, I'm amazed how quickly and how much I've forgotten about how some things work or how to do things.


----------



## Vull (Oct 24, 2021)

drhowarddrfine, Glad I found this forum to keep me up to date on a few things I'm still very interested in. My brother calls this being "hooked on the game." But I have to admit I don't understand, or try very hard to understand, about half the thread topics that get posted on this forum.

For all of the 28 years I was a programmer, I found I had to learn a whole new set of skills every 3-5 years just to keep from becoming completely obsolete.


----------



## hardworkingnewbie (Oct 25, 2021)

Vull said:


> For all of the 28 years I was a programmer, I found I had to learn a whole new set of skills every 3-5 years just to keep from becoming completely obsolete.


Well maybe learning COBOL would have provided you with less hassle...?


----------



## drhowarddrfine (Oct 25, 2021)

hardworkingnewbie said:


> Well maybe learning COBOL would have provided you with less hassle...?



In the late '70s, I ridiculed my best friend for studying to become a COBOL programmer. Until he died, a few years ago, he was still programming in COBOL.


----------



## Crivens (Oct 25, 2021)

Somehow I would not be surprised if someone implemented an HTTP subprotocol for lean client/server apps that smelled strongly of X11, with some buzzwords thrown in...


----------



## jbo (Oct 25, 2021)

blockchain.


----------



## shkhln (Oct 25, 2021)

Crivens said:


> Somehow I would not be surprised should someone implement a http subprotocol for lean client/server apps that would smell strongly of X11, with some buzzwords thrown in...


You jest, but web apps are already directly analogous to X11 or DisplayPostscript. They only lack separately controlled windows.


----------



## Vull (Oct 25, 2021)

hardworkingnewbie said:


> Well maybe learning COBOL would have provided you with less hassle...?


Learned COBOL 35 years ago but have forgotten most of it. Too much other furniture in the attic.


----------



## Jose (Oct 25, 2021)

shkhln said:


> You jest, but web apps are already directly analogous to X11 or DisplayPostscript. They only lack separately controlled windows.


Iframes anyone?


----------



## astyle (Oct 25, 2021)

Ahh.. I'd like to chime in, and say that software bloat is frankly a vicious circle.  Yes, it's possible to do sound on a device that has just a few MB of RAM - but then we want more features from the same setup. We want the sound output to go anywhere on the device, we want the sound quality to not degrade while being transported to output device, we want sound in response to something happening on the device, and we want 16-bit sound, not 8-bit. All that makes the original hardware unable to handle the new requirements for sound - it just takes too many bits. Oh, and don't forget the security around the sound subsystem!

So, you can do something small and simple, but it won't be up to snuff in the modern world.  Gotta get on the big-boy bicycle, buddy.


----------



## hardworkingnewbie (Oct 25, 2021)

Vull said:


> Learned COBOL 35 years ago but have forgotten most of it. Too much other furniture in the attic.


COBOL was a dated, ancient language even 35 years ago. Its beauty lies in these points:

- there was, and still is, a massive base of programs, mostly in big companies, banks/insurances, and public authorities, with ancient code bases (often 30 years and more) in important places in their infrastructure, where you cannot replace the system easily
- there are not many programmers around who have mastered COBOL, and many COBOL programmers are going into retirement right now

This was true 35 years ago and is still true today. So if you are fluent in COBOL, you will not run out of business anytime soon. And you will not need to learn a new language every few years, because the COBOL legacy will keep you busy for a long, long time.

Or as the ACM puts it: an estimated 3 trillion US$ flows through COBOL systems every day. In 1997 the Gartner Group estimated that 80% of the world's business ran on COBOL, with over 200 billion lines of code and 5 billion more lines written annually.









COBOL Programmers are Back In Demand. Seriously. | "States race to upgrade their legacy mainframe systems to meet the cascade of unemployment claims caused by the Covid-19 pandemic." | cacm.acm.org


----------



## astyle (Oct 25, 2021)

hardworkingnewbie said:


> COBOL was already 35 years ago a dated and ancient language. Its beauty lies within these two points:
> 
> there was and is still a massive base of programs mostly in big companies, banks/insurances and public authorities around with ancient code base (often 30 years and more) at important places in their infrastructure where you cannot replace that system easily
> there are not many programmers around who have mastered COBOL,
> ...


Even if you bother to learn COBOL, are you gonna be trusted to write code that protects about $1 trillion in World Bank assets? We all know what happened when a disgruntled programmer broke the Internet.


----------



## mrbeastie0x19 (Oct 25, 2021)

hardworkingnewbie said:


> COBOL was already 35 years ago a dated and ancient language. Its beauty lies within these two points:
> 
> there was and is still a massive base of programs mostly in big companies, banks/insurances and public authorities around with ancient code base (often 30 years and more) at important places in their infrastructure where you cannot replace that system easily
> there are not many programmers around who have mastered COBOL,
> ...


haha wtf I had no idea this was such a huge issue


----------



## Crivens (Oct 25, 2021)

Web bloat is also caused by this: https://xkcd.com/2044/


----------



## astyle (Oct 25, 2021)

Crivens said:


> Web bloat is also caused by this: https://xkcd.com/2044/


This is easily applicable to software bloat, security bloat, etc. Makes me want to just get away from the damn keyboard and go do some REAL surfing.


----------



## ct85711 (Oct 25, 2021)

astyle said:


> Ahh.. I'd like to chime in, and say that software bloat is frankly a vicious circle. Yes, it's possible to do sound on a device that has just a few MB of RAM - but then we want more features from the same setup. We want the sound output to go anywhere on the device, we want the sound quality to not degrade while being transported to output device, we want sound in response to something happening on the device, and we want 16-bit sound, not 8-bit. All that makes the original hardware unable to handle the new requirements for sound - it just takes too many bits. Oh, and don't forget the security around the sound subsystem!


I'd break bloat into two categories. One is feature creep: adding everything including the kitchen sink into a package because it's new or "shiny". The other is adding more layers (middlemen) into the work when you could just as easily do without the additional layers.


----------



## mrbeastie0x19 (Oct 25, 2021)

Unix itself would barely be recognizable if feature creep had been the guiding factor. Could you imagine having a single program on your machine that did everything in /sbin? Perfectly doable, but it would be a nightmare.


----------



## ct85711 (Oct 25, 2021)

Hehe, just like systemd is working on doing


----------



## astyle (Oct 25, 2021)

ct85711 said:


> Hehe, just like systemd is working on doing


or busybox


----------



## ct85711 (Oct 25, 2021)

I'm not sure I'd include busybox. My reasoning has two parts: one, busybox does not seem to be constantly growing into other areas; two, busybox looks intended mainly to be used for system recovery, and for that use case what it includes makes sense.


----------



## mrbeastie0x19 (Oct 25, 2021)

astyle said:


> ct85711 said:
> 
> 
> > I'm not as sure I'd include busybox.  The reasoning, is more of 2 parts; one being that busybox does not seem to be constantly growing in other areas.  The other part, is more of the base foundation of what busybox looks more of intended to be used for system recovery.  In that case, what busybox includes would make sense.
> ...


busybox also has a clearly defined 'area' which it targets and it has not expanded outside of that and its purpose was not met by any existing tools.


----------



## astyle (Oct 26, 2021)

mrbeastie0x19 said:


> busybox also has a clearly defined 'area' which it targets and it has not expanded outside of that and its purpose was not met by any existing tools.


Even busybox grows. Its size today is 2.1 MB. When it was released in 1999, it was meant to be a *complete bootable system* that fits on a 1.44 MB floppy. Still, a full implementation of it today (over 200 UNIX utilities) is tiny next to the 413 MB of the 13-RELEASE mini-memstick.img, the minimum it takes to have a complete bootable FreeBSD system that can also be used as a rescue disk.


----------



## zirias@ (Oct 26, 2021)

ct85711 said:


> Hehe, just like systemd is working on doing


To see what's happening here, it's enough to write one single "unit file" for systemd and _read the docs_ to find out how. I've never seen such opinionated and arrogant docs before. They talk about "bad habits" when it comes to daemons forking and maintaining pidfiles.

Mind you, that's how a Unix daemon works: it ensures only one instance runs and offers a reliable way to detect that it's up and running. And that's done in a _self-contained_ way, with no external dependencies.

But no, the systemd people think that's a "bad habit". They recommend just firing up the daemon non-forking, with _no checks whatsoever for success_, before moving on to the next one in the start sequence. And if you want some checks, they recommend calling some systemd API from within your daemon, or using dbus. A sane software developer should just refuse to put this crap in his daemon. See also https://forums.freebsd.org/threads/...ailable-on-a-remote-machine.82395/post-538484

I wouldn't call that "feature creep" (what that pile of crap is doing is more than questionable in terms of features), but it's certainly dependency creep.


----------



## ct85711 (Oct 26, 2021)

Zirias said:


> But no, systemd people think that's a "bad habit". They recommend to just fire up the daemon non-forking, with _no checks whatsoever for success_ before moving on to the next in the start sequence. And if you want some checks, they recommend to call some systemd API from within your daemon or use dbus. A sane software developer should just refuse to do this crap in his daemon. See also https://forums.freebsd.org/threads/...ailable-on-a-remote-machine.82395/post-538484
> 
> I wouldn't call that "feature creep" (it's more than questionable in terms of features what that pile of crap is doing), but certainly dependency creep.


I wouldn't consider that portion to be feature creep, even though I may not agree with their point of view. It's more a question of why the service manager (or at least I consider that part to be the service manager) needs systemd-home, or the logger, or the init, or the device manager (udev), among others.


----------



## astyle (Oct 26, 2021)

Zirias said:


> To see what's happening here, it's enough to write one single "unit file" for systemd and _read the docs_ to find out how. I've never seen such opinionated and arrogant docs before. They talk about "bad habits" when it comes to daemons forking and maintaining pidfiles.
> 
> Mind you, that's how a Unix daemon works, ensures only one instance runs, and offers a reliable way to detect it's up and running. And that's done in a _self-contained_ way with no external dependencies.
> 
> ...


 I think FreeBSD has been doing a better job than Linux in fighting dependency creep - and is seeing some success in the init system. Fighting the dependency creep in the ports tree - that's a losing battle, but still needs to be done, IMHO.


----------



## Hakaba (Oct 26, 2021)

There is also bloat when devs refuse to use the right technology or layer to do things.
The JavaScript devs I work with think it is a good idea to filter Elastic Search results in JS instead of making the right ES call. Oh, they refuse to learn ES!
And because of that, ES will be replaced with a costly full solution at their company.
So, to compensate for the developers' lack of courage, the solution impacts architects, security, and lawyers (for the contract), and the company loses end-user data.
Draw your solution on paper: if there are more arrows, squares, or circles than in the existing setup, you haven't solved anything; you have just asked unknown people to solve it for you.


----------



## richardtoohey2 (Oct 26, 2021)

Hakaba said:


> There is also bloat when Devs refuse to use the right technology or layer to do things.


The "right" technology will depend on a lot of things.  The budget, the customer, the delivery timeframe, the required quality (man on the moon and back again or a website to advertise tennis balls), the available human resources, security requirements, etc.

Sometimes "good enough" is exactly that.  Not perfect.


----------



## grahamperrin@ (Oct 31, 2021)

astyle said:


> … Fighting the dependency creep in the ports tree - that's a losing battle, …



Is there really a sense of losing? 

What's an example of the creep?


----------



## Menelkir (Oct 31, 2021)

grahamperrin said:


> Is there really a sense of losing?
> 
> What's an example of the creep?


Some distributions are well known for adding unnecessary dependencies to packages. This is very common in deb- and rpm-based distributions.


----------



## zirias@ (Oct 31, 2021)

Menelkir said:


> Some distributions are well known of adding unnecessary dependencies to packages. This is very common in deb and rpm based distributions.


Quite often the choice is either disable some feature or accept some additional dependency. As a packager, this is a problem. You don't want to draw in everything possible, but you _do_ want to provide your users with a "sane" feature set…

*edit:* Giving your users an easy way to build/package themselves, with different build-time options, like FreeBSD ports do, isn't a perfect solution, but makes this problem more bearable. A port maintainer _should_ try to make everything optional in the port that's optional with upstream source.


----------



## grahamperrin@ (Oct 31, 2021)

I mean, what's an example of "dependency creep in the ports tree" (assuming ports = FreeBSD ports).


----------



## Alain De Vos (Oct 31, 2021)

Strigi/baloo/nepomuk/akonadi/zeitgeist/tracker ?
gvfsd ?


----------



## Deleted member 67440 (Oct 31, 2021)

This is not always the case.
My little BSD program is a single source file.

But I was used to writing C64 assembler programs in the tape player buffer or on the 1000 bytes of video memory


----------



## jbo (Oct 31, 2021)

Alain De Vos said:


> Strigi/baloo/nepomuk/akonadi/zeitgeist/tracker ?


Well, personally I consider any port relying on python a shit show 
Of course this is not something a port author/maintainer can do much about.


----------



## grahamperrin@ (Oct 31, 2021)

Alain De Vos said:


> Strigi/baloo/nepomuk/akonadi/zeitgeist/tracker ? …



sysutils/kf5-baloo

What's the alternative? 

Ideally, I want context-indexed files to be found as easily as they are with (Matador) Spotlight.


----------



## ayleid96 (Oct 31, 2021)

Alain De Vos said:


> Software did not became bad. Memory just became cheap.


I would say that memory became cheap and software became bad. It's a fact, not an opinion.


----------



## zirias@ (Oct 31, 2021)

Alain De Vos said:


> Strigi/baloo/nepomuk/akonadi/zeitgeist/tracker ?
> gvfsd ?


Ports are a collection of 3rd party software, so upstreams decide on what to depend. If dependencies are optional, ports should reflect that and I'm fine with it. If not, there's unfortunately nothing a port maintainer could do 

When porting x11-wm/fvwm3, I had a hard time deciding whether to enable Go (the language) by default. It enables a very nice new module, but that has all its Go dependencies linked statically and therefore is somewhat large. But as it's self-contained and doesn't imply _runtime_ dependencies, I decided to include it in the default options…



fcorbelli said:


> But I was used to writing C64 assembler programs in the tape player buffer


It's indeed amazing what you can hide in these few bytes. For example, I once created a checksummer (for these type-in listings to verify you typed correctly) located there, with a collision-robust algorithm (using a 16bit LFSR) and display directly to the video hardware  – didn't even need all the bytes.
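Not the actual 6502 code, of course, but the idea behind an LFSR checksummer is easy to sketch in Python: XOR each byte into a 16-bit register, then clock the register with feedback taps. Unlike a plain additive checksum, the result depends on byte order, so transposition typos in a typed-in listing change the checksum.

```python
def lfsr16_checksum(data: bytes, seed: int = 0xACE1) -> int:
    """Fold each byte into a 16-bit Galois LFSR state.

    Taps 0xB400 (polynomial x^16 + x^14 + x^13 + x^11 + 1), a
    well-known maximal-length choice. Because the state is mixed
    nonlinearly with position, swapped or mistyped bytes almost
    always produce a different result.
    """
    state = seed
    for byte in data:
        state ^= byte               # fold the byte into the low bits
        for _ in range(8):          # clock the register once per bit
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= 0xB400     # apply the feedback taps
    return state
```

A naive sum of bytes gives the same value for `b"10 PRINT"` and `b"10 PRNIT"`; the LFSR state does not, which is exactly what you want when verifying hand-typed listings.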


----------



## astyle (Oct 31, 2021)

grahamperrin said:


> I mean, what's an example of "dependency creep in the ports tree" (assuming ports = FreeBSD ports).


For example, Xorg being something that virtually every DE depends on. Recently, FreeBSD ports stopped specifying that runtime dependency outright in its packages, but that does not change the APIs on either side of the dependency. I still remember the silly graphics/tesseract dependency of x11/plasma5-plasma-desktop in packages. That dependency recursively pulled in a few others, and *I could not stand that dependency creep*. This is partly why I was so motivated to switch to ports and ditch packages.


----------



## jbo (Oct 31, 2021)

astyle said:


> [...] dependency of x11/plasma5-plasma-desktop in packages. That dependency recursively pulled in a few others, and *I could not stand that dependency creep*.


Sounds to me like you simply shouldn't use KDE/Plasma 

When I see screenshots of FreeBSD users running KDE/Plasma I sometimes get somewhat jealous - they look gorgeous. But those dependencies... not for me. So I simply use something like i3 or XFCE.


----------



## astyle (Oct 31, 2021)

jbodenmann said:


> Sounds to me like you simply shouldn't use KDE/Plasma
> 
> When I see screenshots of FreeBSD users running KDE/Plasma I sometimes get somewhat jealous - they look gorgeous. But those dependencies... not for me. So I simply use something like i3 or XFCE.


Well, I like KDE, so that was the motivation for me to switch to ports. And that was liberating - I was able to disable the silly dependencies I just described.  Now it's on to setting up Poudriere just right so that KDE compiles and upgrades the way I want.


----------



## Jose (Oct 31, 2021)

grahamperrin said:


> I mean, what's an example of "dependency creep in the ports tree" (assuming ports = FreeBSD ports).


How about Wayland being on by default? I have yet to find a single reason to need this.

Same, but to a lesser extent, Dbus is on by default too. It's not even optional in at least one port where it is not needed. I can see more of a use for Dbus, though I personally do not need it.


----------



## hardworkingnewbie (Oct 31, 2021)

grahamperrin said:


> sysutils/kf5-baloo
> 
> What's the alternative?


deskutils/recoll ???


----------



## astyle (Oct 31, 2021)

hardworkingnewbie said:


> deskutils/recoll ???


Is it functional as a drop-in replacement, or does it take some work to accomplish that?


----------



## grahamperrin@ (Nov 1, 2021)

hardworkingnewbie said:


> deskutils/recoll ???



Thanks, I guess that I must build with `X11MON=on`; however, I suspect that it will not work as expected …

<https://www.freshports.org/deskutils/recoll#config>


----------



## grahamperrin@ (Nov 2, 2021)

From <https://cgit.freebsd.org/ports/comm...e?id=0f25fd88c6fc1870d978a2fc3629c1f5562e6f94> (2018-01-26):



> Mark BROKEN with X11MON, required libfam is not linked which breaks the installation



Postscripts: 

also Issue with /usr/home on (for example) FreeBSD? (#128) · Issues · Jean-Francois Dockes / recoll · GitLab
FreeBSD bug 259679 – deskutils/recoll: update to 1.31.2
From the latter: 



> - X11MON option is not longer broken, fixed in Makefile.in patch


----------



## hardworkingnewbie (Nov 16, 2021)

The magic of NPM and sugarcoating bullshit strikes again, this time:
GitHub's commitment to npm ecosystem security
You'll find much buzzwordy stuff there about commitments and rolling out 2FA.

The juicy tidbits though are these:

_Second, on November 2 we received a report to our security bug bounty program of a vulnerability that would allow an attacker to publish new versions of any npm package using an account without proper authorization._

Now this sounds like major fun! Updating any existing package from an account without proper authorization; who would not like to abuse that?

_This vulnerability existed in the npm registry beyond the timeframe for which we have telemetry to determine whether it has ever been exploited maliciously._

Even more fun! In other words: they cannot be sure that nobody exploited this before September 2020! The sensible thing would be to shut down the repository immediately and check for breaches. Are they doing that? No. What they are doing is rolling out 2FA as a lame excuse, framing crisis management as an achievement.











----------



## mer (Nov 16, 2021)

sko said:


> I'd like to counter that... Today the approach to building a 'small' (in terms of functionality) program is sadly more often than not: "lets use this framework, which needs that ecosystem with this interpreter and drags in those few hundred libraries and dependencies and needs exactly *this* one version of that graphical framework and exactly *that* version of this obscure library someone abandoned in 2005"





kpedersen said:


> From what I have seen it is down to middleware.


I'll agree with both of these and posit they are actually almost the same thing.

Frameworks are nice, can be helpful, but you get locked into doing things their way.  That may not fit your use cases, so you contort things to fit the framework.

Another fun thing about frameworks that I've run into?  Writing abstraction layers around and over them "because we may want to change the framework in the future" or "yes we hired good people but we think they aren't smart enough to actually use the framework correctly so we write abstractions that become a framework for the framework".

Never once have I seen a framework change.


----------



## kpedersen (Nov 16, 2021)

mer said:


> Another fun thing about frameworks that I've run into?  Writing abstraction layers around and over them


That is a good point. Abstraction layers over abstraction layers are quite wasteful (and messy), and yet this seems to be very common.

In fact one example is Bjarne Stroustrup in his book, where he writes a weird incomplete abstraction layer over FLTK rather than using it directly (or safely).


----------



## D-FENS (Nov 16, 2021)

mrbeastie0x19 said:


> 20 years ago the most advanced games console I can think of was the PlayStation 2, this ran on a remarkable 32mb of Ram (with a small amount of other ram, and when I say small I mean less than 10mb). Today it is common to see a single program use that amount for an extraordinary small task. This was a machine that could play sound, moving video, accepted input and did networking, all in a remarkably small amount of Ram.
> 
> How is it hardware has got so sophisticated but software has become so large and required the utilisation of this.
> 
> To put it into perspective the worst machine I can find on Amazon right now has 2gb ram. That's 62.5x the amount the Ps2 had.


It is the virtualization levels. People started programming simple things in Javascript on Node inside a VM inside a docker inside a VM on top of a framework inside a runtime inside a VM ....
So to calculate 1+1 you have to fire up and boot up a thousand abstraction layers first.
Today I installed vscodium and noticed at least 5 different package systems layered upon each other while installing. I hate when people start inventing and encapsulating their own package management systems, sidestepping the OS; examples are plentiful:
- node
- perl
- python
- eclipse
are some of which I have encountered.
Any product that has its own "marketplace" or "app store" or "updater service" or "download/update/installation manager" bloats the package management of your system and should be boycotted.

And while virtualization is good as a tool in our toolbelt, people have misused it for so long to write inefficient software that could be way faster and consume way less memory if it were implemented on a lower level.

That said, I am guilty of this sin myself. I program in Java.
It's the good tooling support and the productivity that seduce us into taking a few more MB of your memory for the sake of bringing the app to the user faster.

A college professor of mine said - you can program efficient (or inefficient) code in any language (we asked about Java being so much slower than native code back then). So first thing, he said, make sure your algorithm is correct and efficiently implemented. Then try to optimize performance via native methods, Assembler or whatever if need be.
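The professor's point is easy to demonstrate in any language. As a toy illustration (hypothetical code, not anything from the thread): both functions below compute the same running totals, but the first re-sums every prefix and does quadratic work, while the second is linear. Closing that gap dwarfs anything you would gain by dropping to native code.

```python
def prefix_sums_slow(values):
    """O(n^2): re-sums the whole prefix at every position."""
    return [sum(values[:i + 1]) for i in range(len(values))]


def prefix_sums_fast(values):
    """O(n): carries a running total forward instead."""
    out, total = [], 0
    for v in values:
        total += v
        out.append(total)
    return out
```

For a 10,000-element list the slow version performs roughly 50 million additions and the fast one 10,000; rewriting the slow one in assembler would still lose to the linear one in plain Python.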


----------



## D-FENS (Nov 16, 2021)

kpedersen said:


> That is a good point. Abstraction layers over abstraction layers is quite wasteful (and messy) and yet seems to be very common.
> 
> In fact one example is Bjarne Stroustrup in his book, where he writes a weird incomplete abstraction layer over FLTK rather than using it directly (or safely).


That's probably because programs are written mostly for people to read. So sacrificing performance for the sake of much better maintainability in future could be worth it.


----------



## Alain De Vos (Nov 16, 2021)

I recently came across the following: to compile this program you have to use the IntelliJ editor ...
An editor becomes a build system.


----------



## astyle (Nov 16, 2021)

roccobaroccoSC said:


> Any product that has its own "marketplace" or "app store" or "updater service" or "download/update/installation manager" bloats the package management of your system and should be boycotted.


Even FreeBSD's own Ports tree?


----------



## D-FENS (Nov 16, 2021)

astyle said:


> Even FreeBSD's own Ports tree?


Of course not, that's my point exactly.
The OS _*should*_ be managing the packages, that's its job. The ports tree *should* be the only package manager. What I dislike is when I install node.js via the ports, and then start using npm to manage a complete marketplace of scripts the OS knows nothing about.
This opens a big security hole for all kinds of malicious code, by the way.


----------



## astyle (Nov 16, 2021)

roccobaroccoSC said:


> Of course not, that's my point exactly.
> The OS _*should*_ be managing the packages, that's its job. The ports tree *should* be the only package manager. What I dislike is when I install node.js via the ports, and then start using npm to manage a complete marketplace of scripts the OS knows nothing about.
> This opens a big security hole for all kinds of malicious code, by the way.


Ahh, one of my complaints about the ports tree is actually the proliferation of perl, php, python, ruby, r-cran ports. What's in the ports tree is only a subset of the language's repo. The language's repo may have fresher stuff than the ports tree - think the port maintainers have the time to keep up?


----------



## D-FENS (Nov 16, 2021)

astyle said:


> Ahh, one of my complaints about the ports tree is actually the proliferation of perl, php, python, ruby, r-cran ports. What's in the ports tree is only a subset of the language's repo. The language's repo may have fresher stuff than the ports tree - think the port maintainers have the time to keep up?


Good point! Still, doesn't any other lib do the same? Take x11/libinput as an example. It installs its own headers into /usr/local/include, shared objects into /usr/local/lib, etc.
Should C/C++ as a language platform maintain its own package manager independently of FreeBSD? Why should Python or Perl be treated differently?
Libraries are building blocks; they are software packages like anything else. So in that sense, the port maintainer for Python should not necessarily be responsible for all packages developed in Python - true. But each Python library should have its own port maintainer, and the user should not be corralled into using several independent package managers that can create conflicts with each other.
Also, another monstrosity: when products start baking their versioning schema into the package names. For instance: php6, php7, php8, .... and then you install stuff like nextcloud-php7-mysql105 and figure out... oh sh*t, I also need that other package that uses a different mysql version and I can't have both simultaneously.... And there you have it - one big mess.
Versioning I do like: just name your package "php" and then have an adequate versioning and dependency strategy.
Or is it a shortcoming of the FreeBSD packaging system? I don't know. Maybe the architects could have thought of a streamlined way to reflect multiple support branches of a package, instead of merging the version into the package name.
Portage in Gentoo, for example, has the concept of "slots". For PHP you have a 7 slot and an 8 slot, and you can have multiple versions installed at the same time. I find this neat. You then need a means of switching between them at runtime.


----------



## Alain De Vos (Nov 16, 2021)

I set default versions in make.conf. And should I need conflicting versions, I would run them in a different jail. There is IMHO no real need for slots.


----------



## astyle (Nov 16, 2021)

roccobaroccoSC said:


> Portage in Gentoo, for example, has the concept of "slots". For PHP you have a 7 slot and an 8 slot, and you can have multiple versions installed at the same time. I find this neat. You then need a means of switching between them at runtime.


FreeBSD does allow for different versions of Python to coexist - py27, py36, py37, py38, py39... And they have something similar for Ruby (ruby27, ruby30). Dunno if they have something similar for PHP.


----------



## freezr (Nov 16, 2021)

I would point out a contradiction...



> Hardware is cheaper; software is expensive.



This is not actually true: making hardware is more expensive than programming. It requires more workers, professionals, energy, materials, etc., whereas software can be written even on very old hardware.

However, bloated software helps sell newer hardware, hence the deal is made. For me the real problem, especially with closed software, is that because programming time costs more than computing time, commercial software is burdened by legacy code that is swept under the mat; end users cannot see it (but they can feel the side effects).


----------



## astyle (Nov 16, 2021)

tgl said:


> I would point out a contradiction...
> 
> 
> 
> ...


Depends on where you look. SSD's and RAM are actually dropping in prices, while Adobe Creative Cloud is getting more and more expensive.


----------



## Vull (Nov 16, 2021)

astyle said:


> Depends on where you look. SSD's and RAM are actually dropping in prices, while Adobe Creative Cloud is getting more and more expensive.


Depends on how you look at it. Making one piece of hardware is terribly expensive, so it must be mass-produced in the millions of units just in order for manufacturers to recover their cost of goods sold.

Per unit price to consumers, on the other hand, is relatively inexpensive, but only because the cost of both development and manufacturing is spread over millions of consumers, facilitated by assembly line cost savings, automation, and exploitation of unprotected foreign nationals as laborers.


----------



## D-FENS (Nov 16, 2021)

astyle said:


> FreeBSD does allow for different versions of Python to coexist - py27, py36, py 37, py38, py39... And they have something similar for Ruby (ruby27, ruby30) Dunno if they have something similar for PHP.


Yes, but by means of putting the version number inside the package name. I don't like the idea of versioning the same product in multiple packages like that.
Not bashing on the porters here, I understand that they don't have a choice. Just wondering what a clean solution would look like.


----------



## grahamperrin@ (Nov 17, 2021)

roccobaroccoSC said:


> … monstrosity: when products start baking their versioning schema into the package names. For instance: php6, php7, php8,



Distinguished naming is usually a necessity.



roccobaroccoSC said:


> … like nextcloud-php7-mysql105 … other package that uses a different mysql version and I can't have both simultaneously. …



Could you not jail one of the two versions?


----------



## ct85711 (Nov 18, 2021)

The problem I see with everything having its own package manager isn't so much the package manager itself. It's more that the various package managers don't limit themselves to just their own stuff. A prime example would be pip/PyPI. With pip you can easily install meson, cmake, and ninja. Those three programs aren't Python libraries, nor bindings, but the build tools themselves. That by itself wouldn't be so bad, but the package manager defaults to installing system-wide, overwriting what the system had installed. I know some upstreams are resorting to pip and other language package managers over the system one, mostly because it's the only way to have a common base across various platforms/distros.


----------



## astyle (Nov 18, 2021)

ct85711 said:


> The problem I see with everything having its own package manager isn't so much the package manager itself. It's more that the various package managers don't limit themselves to just their own stuff. A prime example would be pip/PyPI. With pip you can easily install meson, cmake, and ninja. Those three programs aren't Python libraries, nor bindings, but the build tools themselves. That by itself wouldn't be so bad, but the package manager defaults to installing system-wide, overwriting what the system had installed. I know some upstreams are resorting to pip and other language package managers over the system one, mostly because it's the only way to have a common base across various platforms/distros.


Yeah... Shouldn't the FreeBSD version of pip/pypi *check* for presence of stuff like ninja, cmake and meson on the system? I would think that the ports system (not the pkg) would be smart enough to pull them in as deps, instead of letting pip/pypi pull that in from Python repos. Sometimes, those language package managers really reinvent the wheel.
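As far as I know pip itself performs no such check, but a user can approximate one before running `pip install`. A hypothetical helper (`safe_to_pip_install` is my name, not any real API): if a tool is already on PATH, it most likely belongs to the system package manager, and pip installing the same name risks overwriting or shadowing it.

```python
import shutil


def safe_to_pip_install(tool: str) -> bool:
    """Return True only if `tool` is not already on PATH.

    If a copy exists, it likely belongs to the system package manager
    (pkg on FreeBSD), and pip installing the same name risks
    overwriting or shadowing it -- pip itself never checks.
    """
    return shutil.which(tool) is None


if __name__ == "__main__":
    for tool in ("ninja", "cmake", "meson"):
        verdict = "ok to pip install" if safe_to_pip_install(tool) \
            else "already on PATH, leave it to pkg"
        print(f"{tool}: {verdict}")
```

This is only a heuristic, of course; it cannot tell who owns the existing binary, just that clobbering it is possible.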


----------



## Alain De Vos (Nov 18, 2021)

Idem dlang dub or ocaml opam


----------



## jbo (Nov 18, 2021)

astyle said:


> Sometimes, those language package managers really reinvent the wheel.


"sometimes"...

It's the same shit (almost?) every time.


----------



## kpedersen (Nov 18, 2021)

jbodenmann said:


> "sometimes"...


Indeed. Their train of thought is always the same.

1) I have a nifty, easy to use language
2) Ah, to do anything useful I need to call into a native binary
3) Ah, calling into native binaries isn't easy because my language doesn't have the ability to parse header files unlike C, C++ and Obj-C
4) I'll create a binding generator
5) Ah, turns out this is a big faff to use as part of a workflow
6) I'll create some tooling that fetches pre-generated bindings as dependencies
7) Holy shite, there are millions of GB worth of these things! Turns out my language really just glues together C libraries.
8) I'll develop a package manager to deal with them all.

Rust is doomed to fail in the same way unless they bolt on a small C compiler frontend and properly solve the issue around #3.
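Step 2 of that list is easy to make concrete. In Python, for example, ctypes lets you call straight into a C library, and declaring the signature by hand is exactly the step that binding generators (step 4) exist to automate, because the language cannot parse math.h itself. A toy sketch, not anything a real package ships:

```python
import ctypes
import ctypes.util

# Step 2: locate and load the platform's C math library (libm).
_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(_name) if _name else ctypes.CDLL(None)

# Steps 3/4 by hand: declare the C signature ourselves, since Python
# cannot read math.h the way a C compiler can.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # calls the C function directly
```

Multiply this pattern by every C library a program touches and you arrive at step 7: piles of generated bindings, and then a package manager (pip, in Python's case) to haul them around.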


----------



## astyle (Nov 18, 2021)

kpedersen said:


> Indeed. Their train of thought is always the same.
> 
> 1) I have a nifty, easy to use language
> 2) Ah, to do anything useful I need to call into a native binary
> ...


Old Fart C Programmer may have a point here (here's looking at you, Geezer  )


----------



## Alain De Vos (Nov 18, 2021)

dlang's dub: `dub add tcltk` pulls in X11 & Tcl/Tk sources and starts compiling them in your user home directory to create bindings.
Even if you have X11 & Tcl/Tk installed with headers and shared libraries.


----------



## astyle (Nov 18, 2021)

Alain De Vos said:


> dlang dub, "dub add tcltk" , pulls in x11 & tcltk sources and start to compile these in your user home directory to create bindings.
> Even if you have x11 & tcltk installed with headers and shared libraries.


 And here I thought Java and Windows were the worst offenders.


----------



## kpedersen (Nov 18, 2021)

astyle said:


> Old Fart C Programmer may have a point here


For so long I was a Java developer. But over time I just realized the old fart C programmers wrote better software, achieved better things, earned more money and got assigned cooler projects. So I joined them basically!

UK inflation and house prices have kinda nulled the benefits of the higher pay... but the rest stayed true.


----------



## D-FENS (Nov 18, 2021)

astyle said:


> And here I thought Java and Windows were the worst offenders.


Windows SxS is my favorite  Ain't talkin' bout bloat ...


----------



## ct85711 (Nov 19, 2021)

astyle said:


> Yeah... Shouldn't the FreeBSD version of pip/pypi *check* for presence of stuff like ninja, cmake and meson on the system?


Should is the key word. I haven't checked whether pip does on FreeBSD, but either way that is an assumption that can have (and on Linux distros has had) disastrous results. Then you get into the next part: even if pip does, what about rust, ruby, npm, and any others? Sure, these examples are relatively harmless/less dangerous, but consider what damage pulling in, say, clang/llvm or even libressl would do (libressl isn't part of base, but it uses some of the openssl library names, so it could overwrite libraries).

Sure, some of the damage can easily be mitigated (and possibly avoided) using common practice (like not running as root); but we've all seen, several times, people ignore that or straight-out do it anyway, and afterwards come complaining (as usual) that their system is broken.

Update:
Just did a small check: pip does NOT check whether a system-installed package is already present. While it will use an already-installed package if it is needed as a dependency, it still overwrote the file(s) that were already installed.
What I did: installed ninja, cmake, and python38-pip through pkg. Next, I ran `pip install ninja`. Afterwards, I did `pkg check -s ninja`. Result: /usr/local/bin/ninja did not match the checksum. I also verified through `pip show -f ninja` that that is where ninja was installed.

Now, yes did run pip as root (this was intentional for testing), which pkg strongly recommends not to do.

Update 2:
Just noticed that removing ninja through pip also removed the executable, thus leaving the ninja pkg completely broken.


----------



## grahamperrin@ (Nov 19, 2021)

ct85711 said:


> pip as root (this was intentional for testing), which pkg strongly recommends not to do.



Simple: we can't expect root pip to respect what's done by root pkg, and vice versa.


----------



## Jose (Nov 19, 2021)

grahamperrin said:


> Simple: we can't expect root pip to respect what's done by root pkg, and vice versa.


Awesome! Now calculate the blast radius of the Cartesian product of pip, cargo, rubygems, npm, CPAN, CRAN, and the native package manager.


----------



## astyle (Nov 19, 2021)

Jose said:


> Awesome! Now calculate the blast radius of the Cartesian product of pip, cargo, rubygems, npm, CPAN, CRAN, and the native package manager.


3 TB, 1 GB, 4 MB, 1 KB, 5 bytes, and 2 bits exactly.


----------



## Alain De Vos (Nov 19, 2021)

astyle, do not forget the blast radius of "ansible", "nim nimble", "ruby gem", "go", "ocaml opam".
But R & Python are huge. On my desktop:

```
pkg info | grep -i cran | wc -l
373
pkg info | grep -i py38 | wc -l
464
```


----------



## covacat (Nov 19, 2021)

```
/2001-11/freebsd.log.gz:*** [31-Oct:21:03] Signoff: thn1k3r (ircII2.8.2-EPIC3.004 --- Bloatware at its finest.)
```


----------



## grahamperrin@ (Nov 20, 2021)

Jose said:


> Now calculate



42.


----------



## Alain De Vos (Nov 20, 2021)

Interested in how OCaml compiles hello world?
Here it is:

```
CC  -v -o 'a.out'  \
'helloworld.s' 'a.out.startup.s' \
'/usr/home/x/.opam/4.13.1+options/lib/ocaml/std_exit.o' \
'/usr/home/x/.opam/4.13.1+options/lib/ocaml/stdlib.a' \
'/usr/home/x/.opam/4.13.1+options/lib/ocaml/libasmrun.a' \
'-L/usr/home/x/.opam/4.13.1+options/lib/ocaml' \
-pthread -Wl,-E -lm
```
The opam directory in the home folder contains dynamic and static libraries compiled by the "package manager opam".
Sources of it are downloaded somewhere over the rainbow.


----------



## astyle (Nov 20, 2021)

Alain De Vos said:


> somewhere over the rainbow


Wonder who else gets the reference to the song, considering the context?


----------



## D-FENS (Nov 21, 2021)

Alain De Vos said:


> astyle, do not forget the blast-radius of "ansible", "nim nimble", "ruby gem","go","ocaml opam".
> But R & Python are huge. On my desktop:
> 
> ```
> ...


maven, gradle, ivy, sbt.


----------



## Alain De Vos (Nov 21, 2021)

npm (the list goes on)


----------



## grahamperrin@ (Nov 21, 2021)

gravy, myrtle, chuff, cbt.


----------



## Alain De Vos (Nov 21, 2021)

But package managers share one common thing.
Multiple versions of the same library, a problem known as "hell".


----------



## kpedersen (Nov 21, 2021)

Alain De Vos said:


> But package managers share one common thing.
> Multiple versions of the same library, a problem known as "hell".


The closest I have seen to a solution for this was Solaris. You could have different "software stacks" with different versions of the same library in:

/usr/csw
/opt/csw
/usr/local
/usr/sfw
etc.

But you can easily achieve this with FreeBSD by changing the ports PREFIX variable as you build them. It would be an interesting prospect to have something like:

/usr/local/2020
/usr/local/2021
/usr/local/2010 <-- mainly for a decent Gnome 2 stack

We actually have this in place, and it can mostly work today already if you can be bothered to build the packages. FreeBSD is actually one of the few operating systems with decent compatXX packages (allowing us to even run versions as old as 3.x in a jail / chroot).


----------



## Alain De Vos (Nov 21, 2021)

I think having multiple versions of a library is bad software design (unless it is for development or testing).
A lot of times I have seen it used just because software was no longer maintained.


----------



## kpedersen (Nov 21, 2021)

Alain De Vos said:


> A lot of times I have seen it used just because software was no longer maintained.


Such is life. This will never change. Throw in the fact that software can sometimes regress and you end up with quite a hard problem for a package manager to solve.


----------



## Alain De Vos (Nov 21, 2021)

In the blacklist of my desktop i currently have,

```
net/samba412
net/samba413
lang/python2
lang/python27
lang/python36
lang/python37
lang/ruby26
www/qt5-webengine
```
So whatever needs these (excuse me for the wording: obsolete versions) definitively falls off my system.


----------



## D-FENS (Nov 22, 2021)

Alain De Vos said:


> In the blacklist of my desktop i currently have,
> 
> ```
> net/samba412
> ...


Maybe blacklist any package that has a digit in its name?


----------



## astyle (Nov 22, 2021)

roccobaroccoSC said:


> Maybe blacklist any package that has a digit in its name ?


You won't be able to install KDE (which depends on QT5) or Apache24. Sometimes, you just gotta accept that imperfections are unavoidable, go drink some tea, and watch a loud, smelly, burning dumpster truck rumble by once in a while.


----------



## Alain De Vos (Nov 22, 2021)

I have the following installed:

```
ap24-mod_fastcgi-2.4.7.1       Apache 2.4 fast-cgi module
ap24-mod_fcgid-2.3.9           Alternative FastCGI module for Apache2
ap24-mod_scgi-2.0              Apache module that implements the client side of the SCGI protocol
apache-commons-beanutils-1.9.4 JavaBeans utility library
apache-commons-codec-1.15      Implementations of common encoders and decoders
apache-commons-collections-3.2.2 Classes that extend/augment the Java Collections Framework
apache-commons-httpclient-3.1_2 Package implementing the client side of the HTTP standard
apache-commons-io-2.11.0       Collection of I/O utilities for Java
apache-commons-lang-2.6        Apache library with helper utilities for the java.lang API
apache-commons-lang3-3.8.1     Apache library with helper utilities for the java.lang API
apache-commons-logging-1.2     Generic logging wrapper library
apache-openoffice-4.1.11       Integrated wordprocessor/dbase/spreadsheet/drawing/chart/browser
apache24-2.4.51                Version 2.4.x of Apache web server
apachetop-0.18.4_1             Apache realtime log stats
```
But indeed no KDE. Life is OK with sway, though.
If I could compile KDE without its akonadi/nepomuk stuff, it would be nice. In fact, something between KDE & LXQt would be nice.


----------



## baaz (Dec 9, 2021)

sko said:


> ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! lets use it for everything!"


You just touched the tip of the iceberg. I once tried to get into machine learning...


----------

