# Proposal: Browser survival in FreeBSD



## Nicola Mingotti (Feb 11, 2020)

Hi, 

I have recently been having a hard time with Chromium. It worked for a very long time; now it is unusable.

I have been using FreeBSD as my main work system for about 2-3 years, and I think this is one of the things that must be improved.

I guess all of you use a web browser for work these days. I understand that FreeBSD is mostly a server-side OS, but if we want to have any expectation of anybody deciding to use it on the desktop, we must fix the browser issue.

Also, consider that FreeBSD is looking at the embedded world. But in the embedded world, devices often have a screen and a GUI, and this GUI is often just a web page. What people see of your embedded device will be just a screen with a browser window open. So the browser had better work reliably, even if we don't care much about desktop users.

I remember when I started I had some other ugly issues with the browser. Then they were solved, and for a while all went well. Now we are back at the starting point.

I am not the most experienced when it comes to ports and packages, but I want to propose a possible solution.

What if we implement a package version of a major browser that is fully statically compiled? No dependencies on other ports or local libraries; the browser would be one big binary containing its entire universe.

Then, when the time comes to release a new version of the browser, we don't delete the old binary-browser package; we just add the new one. Only on the release after that would the older browser be removed from the repository. That way, we can be reasonably sure people are never left without their trusted tool.

bye
n.


----------



## Sevendogsbsd (Feb 11, 2020)

Not sure this is a www/chromium specific issue. I use chromium that I build from ports and have zero issues. Not to discredit your proposal, I just wanted to add that this may not be a widespread issue.


----------



## aragats (Feb 11, 2020)

Nicola Mingotti said:


> we will be reasonably sure people never remain without their trusted tool


For Firefox we have the ESR version, www/firefox-esr, which more or less serves such a purpose.
Maybe such an approach is more reasonable?

Honestly, I tried switching from Firefox to Chromium several times, and every time I had a bad experience with stability and consistency.


----------



## Sevendogsbsd (Feb 11, 2020)

I just remembered these issues may be with Chromium version 80 - I am on 79.xxx. I have actually never had issues with Chromium. I only use it because it is purported to be more secure than Firefox. I do not like some of the privacy issues with Firefox but that's another topic.


----------



## tedbell (Feb 11, 2020)

Sevendogsbsd said:


> I only use it because it is purported to be more secure than Firefox. I do not like some of the privacy issues with Firefox but that's another topic.




I always heard the opposite.


----------



## Sevendogsbsd (Feb 11, 2020)

That's what I get for reading things on the Internet. Seriously, I read a couple of months ago that Chromium is far ahead of Firefox in terms of general browser security, but again, I don't remember from where or whose opinion it was. I like to stick to mainstream browsers for my online activities that involve banking or logins, and for searching and research I use NetSurf. Mainstream to me means one of two: Chromium or Firefox; not sure what else there is for us...


----------



## ralphbsz (Feb 11, 2020)

Nicola Mingotti said:


> I guess all of you use a Web Browser to work these days.


Yes, but not on FreeBSD. Not on Linux either. I only use web browsers on OSes that are actually designed around being human-facing.



> I understand that FreeBSD is mostly a server side OS,


Exactly.



> but, If we want to have any expectation of anybody deciding to use it on the desktop we must fix the browser issue.


Who is "we" in this sentence? Personally, I don't want to have that expectation. If it works for people as a desktop system: great. If it doesn't: I don't care at all. But my opinion is irrelevant; I'm just one user of FreeBSD. I don't give large donations to FreeBSD, I don't write much code for it, and I'm not a member of the various steering committees and groups.



> Also, consider, FreeBSD is watching to the embedded world. But in the embedded world things have often a screen and a GUI, this GUI is often a just a web page.


Few embedded things actually have a screen; most of the interaction with them is done from people's desktops, running against a web server (not browser) on the embedded thing.
But a few embedded things actually do have screens. I would think that anyone who runs an unhardened, consumer-grade browser (statically linked or not) as the UI for a high-reliability embedded thing is not building a good embedded thing; they are building a toy. There are user-interface toolkits for the embedded world, and they are not just a full browser.



> I am not the most experienced when it comes to ports and packages but, I want to try to propose a possible solution.


I think the correct solution is to attract many volunteers who want to maintain not only the browser package itself, but also all the packages that the browser depends on. And turn it into a coherent and well-supported set of packages.

And I think in FreeBSD that's just not going to happen. FreeBSD does not make any money from people using it as a desktop, and the FreeBSD developers I know of are not interested in desktop usage. This is very different from Linux: while the biggest source of commercial funding for Linux comes from server systems (RHEL and SUSE support for myriads of servers), the various well-funded Linux organizations explicitly target desktop usage. Just as one example: if you look at the release notes for the latest version of Raspbian (the Debian version that's pre-built for the Raspberry Pi), nearly all of the improvements the Raspbian team made are for the GUI. It seems that the RPi has transitioned from being a teaching and embedded system to being a low-cost desktop, in the eyes of the people who do OS maintenance for it.

I'm not saying that what you are proposing is a bad idea (except for the technical details; static linking seems like a step backwards); I'm just saying that within the FreeBSD ecosystem it is unlikely to get any attention, and other ecosystems are better suited for it.


----------



## 20-100-2fe (Feb 11, 2020)

Sevendogsbsd said:


> Seriously, I read a couple of months ago that chromium is far ahead of firefox in terms of general browser security



You mean, reading this kind of article: https://www.theregister.co.uk/2020/02/05/google_chrome_id_numbers/


----------



## ShelLuser (Feb 11, 2020)

Nicola Mingotti said:


> I am recently having some hard time with Chromium, it has worked for a very long time, now it is unusable.
> 
> It is about 2-3 years I am using FreeBSD as my main work system, and I think this is one of the thing that must be improved.


What must be improved? You're being extremely vague here and well...  it would help if you'd be more specific about the issue. For all I know, and to be honest I'm leaning towards this conclusion, you made a mistake during the upgrade procedure somewhere, ran into issues, and now you're blaming something else for it.

Also... considering the fact that the forum hasn't seen a massive increase of complaints regarding the upgrade I don't see what should be improved here.



Nicola Mingotti said:


> I guess all of you use a Web Browser to work these days. I understand that FreeBSD is mostly a server side OS, but, If we want to have any expectation of anybody deciding to use it on the desktop we must fix the browser issue.


What "browser issue"? You have problems with 1 out of a dozen browsers and now the whole OS has a "browser issue"? I beg to differ; I've had no issues whatsoever as of late. As such, I fail to understand what needs to be fixed here.

Still, as always with these things: FreeBSD provides all the tools and info to work on the OS itself, so... maybe start there?



Nicola Mingotti said:


> What if we implement a package version of a main browser which is fully-statically-compiled? No dependency on other ports or local libs. The browser will be a big binary containing all its universe.


What makes you so sure that this doesn't exist already?


----------



## Sevendogsbsd (Feb 11, 2020)

Yeah, I thought about that. More of a privacy issue, really. Firefox IS a shorter build from ports. There are a lot of settings in Firefox where data gets sent back to Mozilla, and I always turn those off. Of course there are no settings like that in Chromium, but that doesn't mean data isn't getting sent back to the great Google monster.

It's pretty difficult to have a seamless browsing experience without using a big name browser. For us, there are only 2, so it's a dice roll...


----------



## tedbell (Feb 11, 2020)

Sevendogsbsd said:


> Yeah, I thought about that. More of a privacy issue really. Firefox IS a shorter build using ports. There are a lot of settings in Firefox where data gets sent back to Mozilla and I always turn those off. Of course there are no settings like that in Chromium, but that doesn't mean data isn't getting sent back to the great Google monster.
> 
> It's pretty difficult to have a seamless browsing experience without using a big name browser. For us, there are only 2, so it's a dice roll...



I am torn. There is a lot of sketchy stuff about Firefox, but Google is not known for browsing privacy at all. I want to switch back to Chromium on FreeBSD so I don't have to mess with my fontconfig to get fonts working in Firefox. Windows is the opposite: after all these years, Chrome still has that pale, ugly, washed-out font.


----------



## Sevendogsbsd (Feb 11, 2020)

I feel exactly the same way. I wish I could find the article (dev mailing list?) comparing the technologies behind both browsers. It made it sound like FF was ancient on the backend and Chromium was more modern, but who knows, I am not a dev. I use no add-ins/plugins on any browser because, despite the privacy benefits, any add-in can read all of your browsing traffic. I know some are trustworthy, or rather folks trust some, but I am a bit extreme in that I trust none. I use the two-browser method: one for logins and one for general surfing.

I have great fonts in both browsers. I normally use the "ubuntu" font in both, but that choice doesn't really matter. What matters for me is to disable bitmap fonts completely; then everything looks sharp and clear.


----------



## kpedersen (Feb 11, 2020)

Nicola Mingotti said:


> I guess all of you use a Web Browser to work these days.



Don't be so sure. Many people do *not* subscribe to cloud services and so really do not have these problems. Work has continued like always 

You are right though, consumer web browsers are all pretty terrible. Think of them as toys and don't depend too much on them.

If you really cannot sever your reliance on internet browsing, consider running a browser-only VM running the most "average" setup of Windows 10 and Chrome.

I do like your proposal however, if anything it would be useful as an experiment to see if we can really clean one up and tailor it to the OS rather than always being dragged along to the latest but weakly tested release.


----------



## Sevendogsbsd (Feb 11, 2020)

Not trying to derail this thread too much, but I absolutely depend on a browser, for both my job and for general life. I use cloud all the time - on FreeBSD, it's all done through a browser, on my MacBook, via whatever their connection mechanisms are, some browser, some apps. 

I like off-shoot browsers, but my fear with these is that they may or may not receive attention in terms of maintenance and security fixes, so I am not sure I trust them. Other off-shoots are just terrible in terms of rendering, which obviously ruins the experience or makes some sites unusable.


----------



## Nicola Mingotti (Feb 11, 2020)

Sevendogsbsd said:


> Not sure this is a www/chromium specific issue. I use chromium that I build from ports and have zero issues. Not to discredit your proposal, I just wanted to add that this may not be a widespread issue.



I use almost only packages, Sevendogsbsd, so probably we are not on the same version of the application.

```
$> chrome --version
Chromium 79.0.3945.130
```

My FreeBSD runs on a very limited machine, so when things don't work well I see it immediately.


----------



## Nicola Mingotti (Feb 11, 2020)

aragats said:


> For Firefox we have the ESR version: www/firefox-esr which more or less serves such purpose.
> Maybe such approach is more reasonable?
> 
> Honestly, I tried switching from Firefox to Chromium several times, and every time I got bad experience with stability and consistency.




Hi aragats, I have a lot of sympathy for Firefox ESR. I used Chrome|ium more in the past because (1) more people use it, (2) the developer console is a bit better IMO, (3) pages run in separate processes, so if one window goes nuts it is improbable that you need to kill the other 30 windows, and (4) it seems Google Documents works better in the Chrome family (not sure here). [just from memory, check to be sure]

For me it would be fine if we decide that our "rock-solid-browser" package is based on Firefox. Totally fine. It is not a matter of brand. I require only that it is a well-known product, because I develop things for other people to use.

AFAIK Firefox ESR has long-term support, so Mozilla ensures bugs will be fixed but things will not change, and it is supported for about five years [from memory, I didn't check]. OK, but we still need a way to prevent our package system from messing with the "rock-solid-browser".

What I would like to avoid is that after a *pkg upgrade* I can't run the browser anymore.

I can accept it for Dolphin, Okular, and other Qt stuff; they are broken so often that I got used to it. But the browser is too critical a piece of tech. I need it to ask for help or to dig out some kind of solution.
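For the record, there is a stopgap that exists today: pkg(8) can lock a package so that `pkg upgrade` refuses to modify or delete it. A minimal sketch, assuming the browser was installed from the official repository as firefox-esr:

```
# Hold the installed browser at its current version; a locked package
# is left alone by pkg upgrade until it is unlocked again.
pkg lock -y firefox-esr

# When you are ready to move to a newer build:
pkg unlock -y firefox-esr
pkg upgrade firefox-esr
```

Note that this does not protect against a dependency's shared library being bumped out from under the browser, which is exactly the failure mode a statically linked package would avoid.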


----------



## Phishfry (Feb 11, 2020)

ShelLuser said:


> What must be improved?


Well I am a SeaMonkey user and portmgr yanked my browser. So how about not yanking ports that are commonly used for starters:


> Upstream has poor history of delivering security fixes on time.
> 2.49.4 was released almost 1 year ago. While 2.49.5 with 60.2
> backports is planned[1] it's at least 1 month away while ESR60
> will reach EOL in 2 months. By the time 2.57.0 arrives it'll
> also be vulnerable.



I don't agree with any part of the comment that was used to remove the port.
The port has been updated upstream several times since it was removed, so this was a bad call in my opinion.

Please give me enough rope to kill myself.
I am able to contain risk and don't need a net nanny.

I totally agree with the original poster's sentiment in this thread.
When you have a near-perfect system and "upgrading" changes it, the results can be devastating.
I mean, a browser is the most basic element I require to use FreeBSD as a desktop.
Xfce changing to GTK3 was tough to swallow, but GTK2 was quite old. That I can deal with.

I have since figured out a way around SeaMonkey's removal by using `pkg add` with the old packages and all their dependencies.
There is no way new users would be able to pull that off. Heck, even locking dependent packages has not worked very well.
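For readers attempting the same workaround, a sketch of what it looks like; the version number and download location below are hypothetical examples, not real paths:

```
# Download the old package archive (and the archives of its
# dependencies) into one directory, then install from the local files.
fetch https://example.org/old-packages/seamonkey-2.49.4_7.txz
pkg add ./seamonkey-2.49.4_7.txz   # pkg add also looks for the needed
                                   # dependency .txz files in the same
                                   # directory
pkg lock -y seamonkey              # keep pkg upgrade from touching it
```

The lock is the important part: without it, the next `pkg upgrade` can remove or replace the manually added package.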

Sorry to vent, but I have been using the same browser format since 1994 and don't plan on changing because of somebody's opinion of browser security.



Sevendogsbsd said:


> I like off-shoot browsers


I too tried to migrate: Falkon, Otter Browser, Iridium, NetSurf. Heck, even Firefox ESR is nowhere near SeaMonkey for me.
I guess I am ingrained in my ways. If a layout works, why change it? That is my gripe.
I don't want tabs or hidden menus, just a plain browser that does not change with the wind.

If I could statically compile SeaMonkey I would. Unfortunately, that ship sailed long ago.
Most ports won't statically compile without work.

I came to FreeBSD from an EOL'ed WinXP and SeaMonkey in ports made the transition seamless.
Having the same OS for browsing and serving has made FreeBSD the only alternative I have used.
Yanking SeaMonkey almost made me leave. I use the browser on FreeBSD daily.

So in summary what we need is stability with the browsers.
I say this not to bash our great OS but for constructive criticism.


----------



## Nicola Mingotti (Feb 11, 2020)

ShelLuser said:


> What must be improved? You're being extremely vague here and well...  it would help if you'd be more specific about the issue. For all I know, and to be honest I'm leaning towards this conclusion, you made a mistake during the upgrade procedure somewhere, ran into issues, and now you're blaming something else for it.
> 
> Also... considering the fact that the forum hasn't seen a massive increase of complaints regarding the upgrade I don't see what should be improved here.
> 
> ...



Your attitude is entirely critical. I have opened another thread on the forum to discuss this problem, and I have opened a Bugzilla report.

If you don't have the problem good for you. 

If you know of a fully static build just let me know. I am not sure of anything.


----------



## Nicola Mingotti (Feb 11, 2020)

ralphbsz said:


> Yes, but not on FreeBSD. Not on Linux either. I only use web browsers on OSes that are actually designed around being human-facing.
> 
> 
> Exactly.
> ...



Hi ralphbsz, I respect your point of view but I can't share it. If you don't use FreeBSD as a desktop, you can't feel the pain. I use FreeBSD to develop almost everything except Android stuff. Well, sometimes things go wrong, and it is not going to be nice. The worst is when the browser leaves you too. Then you just need another computer to fix your computer.

About "*we*". If I were good at Makefiles, were a Poudriere person, had strong hardware to compile a browser just for fun, and were a C++ developer at least one month per year, I would not even have opened this thread; I would have just solved my problem. Since this is not the case, I can only raise the issue and hope somebody in the community who is good with the required tech is willing to solve it. It is too far out of my scope.

I would respectfully disagree with you on a few points which are a bit out of the main topic.

1] Embedded devices can be controlled via the web or an app, but that is not always desirable. E.g., think of your car dashboard. Or think of measurement instruments: you just don't want to pull out the phone and follow a super boring procedure to read that damn temperature. Ever tried a headless oscilloscope? Well, it is kind of annoying; if you can buy one with a screen, you will just do it.

In general, if your device is located in a room and you can put basic controls on it, user interaction will be easier and everybody will be happier. Then, computer guys can always use, in order: (1) the app, (2) the web interface, (3) ssh, (4) the serial cable. But for others, all these things are in increasing order of annoyance and detestability.

2] Interfaces. A few weeks ago I wrote a little program in PySide2, and guess what, there is a browser in there. It is not called Chrome, but I think the engine is the same [from memory, check to be sure]. Then, if you Google something like "map widget", you will find, as I did, that the recommended solution is to just run the web widget and show the map inside it. So the trend is pretty clear: interfaces are converging on the web. Do I like it? Not much. But I can live with that.

3] Static linking. I understand it may not be fashionable, but it is the best I can figure out to actually improve the status quo. The status quo being: if you run "pkg install foobarbaz", something else may be upgraded by necessity, with the consequence that your browser no longer works. I realize it is not an elegant way to solve things but, just for the browser, we may close one eye... isn't it the future OS?


----------



## memreflect (Feb 11, 2020)

Phishfry said:


> I too tried to migrate.  Falkon, Otter Browser, Iridium, Netsurf. Heck even Firefox ESR is nowhere near SeaMonkey for me.


What about Seamonkey makes it so attractive to you?  Familiarity, perhaps?  Put a different way, what is lacking in other implementations you've tried?

Have you given www/opera a go?  I don't want to presume your reasoning, but Opera boasts a feature-set more akin to Seamonkey's "communication suite" rather than being a simple web browser, so you might find it to your liking after configuring it as you wish.  And to be completely fair, it's the only "suite" I'm aware of; such software is a bit of a dying breed as just getting HTML+CSS+JS to all work correctly is quite a task already these days.

Edit: Never mind. Even Opera's own site doesn't seem to function quite correctly in it, and the port is apparently at version 12.x when the newest is 66.x... Perhaps this is why you're stuck with SeaMonkey: nothing else offers the complete package, and mucking about with the settings for different pieces of software is a pain compared to the cohesive configuration experience that SeaMonkey offers?


----------



## Nicola Mingotti (Feb 12, 2020)

Sevendogsbsd said:


> Not trying to derail this thread too much, but I absolutely depend on a browser, for both my job and for general life. I use cloud all the time - on FreeBSD, it's all done through a browser, on my MacBook, via whatever their connection mechanisms are, some browser, some apps.



Totally agree here. I haven't owned a TV for about 12 years! I just watch stuff on the computer, mostly in the browser these days: YouTube and fubo.tv. Chat and phone calls have long gone through the web for me, via WhatsApp. At work, being without a browser would be simply unthinkable; even our timesheet is on the web. I am totally browser-dependent.

It is true that I run in a VM, so when I am really in trouble I use the browser in macOS, but I try to do that less and less, because the system I love is FreeBSD, where almost everything has been configured to work the way I like.


----------



## Nicola Mingotti (Feb 12, 2020)

kpedersen said:


> Don't be so sure. Many people do *not* subscribe to cloud services and so really do not have these problems. Work has continued like always


In my current workplace, a big one, everything has a browser procedure; I keep a Google Document just for the list of these "important" links. So I guess my view is biased.



kpedersen said:


> If you really cannot sever your reliance on internet browsing, consider running a browser-only VM running the most "average" setup of Windows 10 and Chrome.


I considered something similar, but I still haven't found the silver bullet. I am already in a VM, so my only reasonable choice is to run the browser on metal and "import" its window into FreeBSD in some way.



kpedersen said:


> I do like your proposal however, if anything it would be useful as an experiment to see if we can really clean one up and tailor it to the OS rather than always being dragged along to the latest but weakly tested release.


This is exactly my point !


----------



## 20-100-2fe (Feb 12, 2020)

Sevendogsbsd said:


> Made it sound like FF was ancient on the backend and chromium was more modern, but who knows, I am not a dev.



Firefox's engine changed around version 60, resulting in a dramatic performance increase.
Thunderbird adopted the same engine in version 68.

As to privacy, Firefox asks you on its first launch whether you want to contribute to the project by sending telemetry.
You can still change your mind afterwards, the settings are in Preferences.
Nothing is hidden.


----------



## ShelLuser (Feb 12, 2020)

Phishfry said:


> Well I am a SeaMonkey user and portmgr yanked my browser. So how about not yanking ports that are commonly used for starters:


I'm a SeaMonkey user too, have been for many years, and my update never gave me problems. I'm using Portmaster btw, so a pretty vanilla'ish upgrade scheme.


----------



## mark_j (Feb 12, 2020)

I've got to say I was quite happy with gopher...


----------



## cynwulf (Feb 12, 2020)

You could maintain a port of a static build of e.g. firefox.  Mozilla still distributes static builds for Linux, so there should be some instructions on the WWW somewhere...


----------



## Sevendogsbsd (Feb 12, 2020)

Well, I started to build www/firefox last night but realized it requires pulseaudio. Not sure if it's a build or runtime requirement, but I thought I read it is a build requirement only. I don't have anything against it per se, but I was trying to avoid it because, frankly, sound works just fine without it and I didn't see the point of it. I even have

```
OPTIONS_UNSET = PULSEAUDIO
```

in my /etc/make.conf, but this didn't matter - apparently a hard requirement?
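For reference, a make.conf sketch. The global and per-port forms below follow ports(7); the option name PULSEAUDIO is taken from the post above, and the exact name for a given port should be verified with `make showconfig` in the port directory:

```
# /etc/make.conf -- sketch; verify the option name with
#   make -C /usr/ports/www/firefox showconfig
OPTIONS_UNSET+=     PULSEAUDIO   # unset the option globally, for all ports
www_firefox_UNSET+= PULSEAUDIO   # per-port form described in ports(7)
```

Note that make.conf only affects ports built locally; binary packages from the official repository are built with the default option set regardless.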

I may just build firefox anyway. My beef with the author of pulseaudio is with his other (unnamed) project, not pulseaudio.


----------



## aragats (Feb 12, 2020)

Sevendogsbsd said:


> I thought I read it is a build requirement only.


I believe so. I have been using Firefox on FreeBSD for many years, always installed it with pkg(8), and never installed/used audio/pulseaudio.


----------



## DutchDaemon (Feb 12, 2020)

Hope to see Brave coming to FreeBSD.


----------



## Phishfry (Feb 12, 2020)

memreflect said:


> What about Seamonkey makes it so attractive to you? Familiarity, perhaps? Put a different way, what is lacking in other implementations you've tried?


Yes familiarity is what I find pleasing.
When I go to Firefox it seems they change the security settings location with every version.
I get the feeling that they don't want you to find the settings.

I am sorry to vent about my problems publicly, and I am glad nobody gave me any flack.
If I were capable, I would re-invigorate the port. It was de-orbited on July 3, 2019, and I miss it.
Obviously the browser's backend has changed since 1994, but visually it is close to the original implementation.

I also like SeaMonkey because it is a small group of volunteers that parse from the Firefox codebase and keep everything on the same preferences pages. No radical changes. XUL being removed from Mozilla has caused troubles.

There is no way you could compare SeaMonkey to Firefox. Mozilla has over 80 employees, and SeaMonkey has 2 or 3 volunteers.
Yahoo paid Mozilla $375 million a year to make it the default search engine; Mozilla has over $400 million in yearly revenue.
SeaMonkey only gets user donations.
Comparing the security vulns of Firefox versus SeaMonkey is absurd. Firefox has a faster release cycle and a larger development team.


----------



## Nicola Mingotti (Feb 13, 2020)

cynwulf said:


> You could maintain a port of a static build of e.g. firefox.  Mozilla still distributes static builds for Linux, so there should be some instructions on the WWW somewhere...



As I said, I am not a black belt when it comes to C++, Makefiles, Poudriere, etc. I am very weak in these technologies; I never have occasion to work at that level. I rarely write some C, and truth be told, it is mostly for Arduino, so that is another story.

Anyhow it is good to know that there is still a binary package around.


----------



## memreflect (Feb 13, 2020)

Phishfry said:


> Yes familiarity is what I find pleasing.
> When I go to Firefox it seems they change the security settings location with every version.
> I get the feeling that they don't want you to find the settings.


Firefox Quantum (i.e. Firefox 57+) was a huge redesign, and that extended to the browser's about:preferences page as well as under-the-hood performance improvements.  For example, preferences are now grouped into five primary categories, which each have one or more sections:

- General
- Home
- Search
- Privacy & Security
- Sync

As you can see, the security section is immediately available. I personally set the history preference to "Never remember history", which enables permanent Private Browsing mode. Bookmark management and saved passwords are still functional; it simply means no history, cookies, or other site data are saved.

Maybe you'll like it, maybe you'll hate it.  Either way, I can't get the deleted SeaMonkey 2.49.4_27 port to even build on 12.1-RELEASE-p2; it doesn't even finish `make fetch` due to a long list of vulnerabilities, and it was revision 505753 that deleted the port, citing security reasons.  Of course, I could ignore those vulnerabilities and build anyway, but it just doesn't seem right to do so, though this may just be a personal preference.



Phishfry said:


> There is no way you could compare SeaMonkey to Firefox. Mozilla has over 80 employees and SeaMonkey has 2 or 3 voulnteers.
> ...
> To compare the security vulns of Firefox versus SeaMonkey is absurd. Firefox has a faster release cycle and larger development team.


That may be true, but I'd liken SeaMonkey to a ship full of holes: it eventually starts to feel like a lost cause to continue plugging the existing holes when more holes appear out of nowhere. Mozilla's primary focus was originally on security. If I recall correctly, that was what originally sold Firefox to the Windows crowd, which was tired of fighting viruses, worms, etc. entering the system through Internet Exploder's nearly legendary number of security exploits.

The volunteers working on SeaMonkey, however, are prioritizing porting the functionality of Firefox ESRs into new versions of their software, and if there are so few volunteers as you say, there's simply no room to think about security.  An exploit arises, and they [may] patch it, but an official patched release isn't made available terribly quickly, leaving users of the software vulnerable to those exploits until they upgrade.

And with the FreeBSD port gone, you're effectively stuck at a partially patched version of 2.49.4 unless you download and compile the 2.49.5 source yourself from Mozilla's FTP site, leaving you vulnerable to any unmitigated security issues.  If I knew anything about maintaining ports and Seamonkey's build process, I'd consider volunteering to make 2.49.5 available, but the idea of maintaining a sinking ship makes me feel like I'd be wasting my time that could be spent on other, more actively maintained ports.


----------



## mark_j (Feb 13, 2020)

memreflect said:


> [...]
> Maybe you'll like it, maybe you'll hate it.  Either way, I can't get the deleted SeaMonkey 2.49.4_27 port to even build on 12.1-RELEASE-p2; it doesn't even finish `make fetch` due to a long list of vulnerabilities, and it was revision 505753 that deleted the port, citing security reasons.  Of course, I could ignore those vulnerabilities and build anyway, but it just doesn't seem right to do so, though this may just be a personal preference.
> 
> 
> ...



I may have been looking in the wrong repository but I recall last year looking at seamonkey and the source hasn't been touched since 2017.

Again, I may be wrong.


----------



## PMc (Feb 13, 2020)

Nicola Mingotti said:


> 3] static linking. I understand that may not be fashionable. But that is the best I can figure out to actually improve the status quo. The status quo being, if you run "pkg install foobarbaz" something else may be upgraded by necessity and the consequence being, your browser not working anymore.



Ah, now I see the problem.

Well, FreeBSD is mostly a server OS, and there you are supposed to have a software deployment scheme. That means having control over what is put into the repository from which pkg fetches its packages.
I'm doing it that way. Actually, I do not even see a difference between a server and a desktop, besides the desktop having a graphics card inserted, a mouse attached, and an X server installed (which is just a network application server anyway). And I have no such problems (using firefox-esr). And yes, I need a browser, because my database GUI frontend happens to be Ruby on Rails, which is actually a web application builder.
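A sketch of such a deployment scheme using poudriere; the jail name, ports-tree name, and package list here are made-up examples:

```
# Build only the packages you have vetted, into your own repository.
poudriere jail -c -j desktop121 -v 12.1-RELEASE   # create a build jail
poudriere ports -c -p stable                      # create a ports tree
echo www/firefox-esr > browser.list               # list of ports to build
poudriere bulk -j desktop121 -p stable -f browser.list
```

Pointing pkg at the resulting repository (via a file in /usr/local/etc/pkg/repos/) means `pkg upgrade` only ever sees packages you chose to rebuild, so the browser cannot change out from under you.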


----------



## aragats (Feb 13, 2020)

mark_j said:


> I recall looking at SeaMonkey last year, and the source hadn't been touched since 2017.


According to their releases page, «SeaMonkey 2.53.1 Beta 1 Released January 18, 2020».
Of course, it's a fair way behind the "main line", but not too far:


> SeaMonkey 2.53.1 Beta 1 uses the same backend as Firefox and contains the relevant Firefox 60.3 security fixes.
> 
> SeaMonkey 2.53.1 Beta 1 shares most parts of the mail and news code with Thunderbird. Please read the Thunderbird 60.0 release notes for specific changes and security fixes in this release.


----------



## 20-100-2fe (Feb 13, 2020)

I don't really understand why a browser and an email client should be components of the same application (here, SeaMonkey). Having them separate looks much preferable to me.

I currently use Thunderbird as my email client, not because it is a good application, but only because I couldn't find anything better - kind of the "least bad" option.  :/

When your only choice is between Thunderbird and Evolution, your decision is quick and easy... :/ Other email clients are either just buggy toys, or lack essential features such as CardDAV and CalDAV support, excluding them as viable alternatives.

A potential solution for this would be to set up a private webmail with CardDAV and CalDAV plugins, in which case I would no longer need an unsatisfying "heavy" email client.


----------



## rigoletto@ (Feb 13, 2020)

20-100-2fe said:


> When your only choice is between Thunderbird and Evolution, your decision is quick and easy... :/ Other email clients are either just buggy toys, or lack essential features such as CardDAV and CalDAV support, excluding them as viable alternatives.



I use mail/neomutt with CardDAV (I don't know about CalDAV because I don't use it), but you need to put the parts together yourself.


----------



## Nicola Mingotti (Feb 13, 2020)

20-100-2fe said:


> I don't really understand why a browser and an email client should be components of the same application (here, SeaMonkey). Having them separate looks much preferable to me.



I have a hypothesis on this. Remember, about 25 years ago, when to connect to the Internet you had to download... ehm... get WinSock? (I was a boy; Windows was still the only OS you could reasonably have.)

Well, at that time, updating your computer to connect to the Internet meant making it able to connect in general. So it made sense to make a box of common applications for "connection". OK, maybe you, like me, wanted telnet, gopher, CU-SeeMe, finger, an app for newsgroups, etc., but for the general public the all-comprehensive box was: Internet browser + mail program.

Still today, I guess, for most people the Internet is just web and mail. It makes a kind of sense to build a suite, if you have the strength to make two good products.


----------



## Nicola Mingotti (Feb 13, 2020)

PMc said:


> Ah, now I see the problem.
> 
> Well, FreeBSD is mostly a server OS, and there it is supposed to have a software deployment scheme. That means, have control over what is put into the repository from where pkg fetches the stuff.
> I'm doing it that way - actually I do not even see a difference between a server and a desktop besides that the desktop has a graphics card inserted and a mouse attached and an X server installed (which is just a network application server, anyway). And I have no such problems (using firefox-esr) - and yes, I need a browser, because my database GUI frontend happens to be ruby-on-rails, which is actually a web application builder.



This does not correspond to my understanding of Firefox ESR, which is good in the following scenario. You develop your web app today, test on Firefox ESR, and then say to your customer: "If you want to use my app as it was designed, without flaws, for the next ~5 years, use Firefox ESR". This is a great thing.

But it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a *pkg install|upgrade foobar*. If you use Qt applications you have seen this happen several times.

My knowledge of the ports system is limited but, to the best of my understanding, you can see this if you do: `cd /usr/ports/www/firefox-esr ; make run-depends-list`. You can see that it depends on 40 packages. For Firefox the number is the same; for Chromium it is 54.
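
The counts above come from a simple pipe. The FreeBSD part needs a ports tree, so the runnable line below only illustrates the counting pattern, with made-up port origins:

```shell
# On a machine with a ports tree, the dependency count comes from:
#   cd /usr/ports/www/firefox-esr && make run-depends-list | wc -l
# run-depends-list prints one port origin per line, so counting lines
# counts direct run-time dependencies. The same pattern, simulated here
# with hypothetical origins:
printf '%s\n' devel/nspr security/nss devel/glib20 | wc -l
```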

If tomorrow Dolphin or Okular doesn't work right, you may be annoyed, but still, your day's productivity is not totally mangled. Not so for the browser: when it stops working it creates real problems. So, the proposal is to have a browser which is as independent as possible of the other packages (being statically compiled); a standalone application which you can expect to work until you change your OS version.

Extra stuff.
1] I am using Firefox these days, and I see a big improvement compared to about 1-2 years ago. It is working well.

2] Somebody in Bugzilla has replicated the issue I have. And for him, it seems that if he presses right-click the browser unlocks.

3] I used Ruby for a few system services recently (not Rails). It is an extremely beautiful language. I like JRuby especially. My approach to the web is JavaScript on both sides: Node on the server, Nginx in front of it. I am personally a big fan of single-page web applications. I don't use "complex frameworks". JavaScript+HTML+CSS+jQuery, this is my framework.


----------



## fernandel (Feb 13, 2020)

PMc said:


> Ah, now I see the problem.
> 
> Well, FreeBSD is mostly a server OS, and there one is supposed to have a software deployment scheme. That means having


I keep reading "server oriented" all the time, but


> FreeBSD is a UNIX-like operating system for the i386, amd64, IA-64, arm, MIPS, powerpc, ppc64, PC-98 and UltraSPARC platforms based on U.C. Berkeley's "4.4BSD-Lite" release, with some "4.4BSD-Lite2" enhancements. It is also based indirectly on William Jolitz's port of U.C. Berkeley's "Net/2" to the i386, known as "386BSD", though very little of the 386BSD code remains. FreeBSD is used by companies, Internet Service Providers, researchers, computer professionals, students and home users all over the world in their work, education and recreation. FreeBSD comes with over 20,000 packages (pre-compiled software that is bundled for easy installation), covering a wide range of areas: from server software, databases and web servers, to desktop software, games, web browsers and business software - all free and easy to install.


They should put "server oriented" on the site and, as drhowarddrfine has written many times, "just for professionals...", but people don't see those statements on the FreeBSD site.


----------



## shkhln (Feb 13, 2020)

fernandel said:


> They should put on "server oriented"



There is the ethos of FreeBSD and then there is the reality of FreeBSD development. The point of the project is to provide a quality general purpose OS, suitable as a basic building block for pretty much everything: server, desktop, embedded applications. In practice, individual developers have their own priorities. The sponsored work is, understandably, heavily skewed towards very specific corporate use cases as well.


----------



## ralphbsz (Feb 13, 2020)

Nicola Mingotti said:


> Which is good in the following scenario. You develop your web app today, test on Firefox ESR and then say to your customer: "If you want to use my app as it was designed, without flaws, for the next ~5 years, use Firefox ESR". This is a great thing.
> 
> But it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a *pkg install|upgrade foobar*. If you use Qt applications you have seen this happen several times.



Which means that you did not do a good job developing your web app today. If you want to go down your path of tightly integrating your app with a particular version of the browser, then you need to tell your user also: Please run my app on a computer that is running version X.Y.Z of the operating system, and version A.B.C of the Firefox ESR browser, and versions L.K.M of all the other middleware that influences the behavior of your app. This is actually commonly done in large corporate settings (for example data centers, where upgrades are very rare, and very well orchestrated and tested), and in the embedded world. One way to do it is to say: We will integrate and test the complete suite of all systems (hardware and software) exhaustively, and then install and run that version for a long period, perhaps 6 months or a year. While that version is in production, we will do either no changes at all, or only the absolute minimum necessary to fix gaping security holes. And we spend the 6 months or year testing the next version on a separate test system, to get ready for the next major upgrade. Underlying this is the philosophy of "never touch a running system", which I like to describe with this joke: To administer a computer you need a man and a dog; the man is there to feed the dog, and the dog is there to bite the man if he tries to mess with the computer.

But that way of deploying systems really means that you have not architected dependability into your system (hardware, software, middleware, limpidware, ...), but instead you tested it into the system. And we all know that you can not test quality into software. A house of cards remains a house of cards and will be fragile, even if it currently seems to be standing on its own.

So the second approach is to architect dependability and quality into the system. For example, for a compiled application: Look at the exact specification of the interfaces that the application needs. I'll focus on one particular thing here for concreteness, and for example say: I will write my application using exactly nothing other than specified and documented interfaces from POSIX.1-2008, using C++ exactly following the ISO C++14 standard. And when developing my application, I will audit all uses of language features and OS interfaces to make sure I only use things from standard C++ and POSIX. At this point, my application will work on any system that supports those interfaces, upgrades or not. And if my application breaks after an upgrade, I can be 99% sure to know the culprit: the underlying implementation of C++ or POSIX stopped being standards conforming.

This might sound implausible, but a lot of high-quality development is done that way. You start by writing down requirements for the development (like: it shall work on any system that supports POSIX version X, programming language version Y, Unicode version Z, ...), and then stick to the requirements document. In the case of web applications, it for example means writing down exactly which subset of Javascript will work compatibly on all supported browsers, and only using that subset, or explicitly write libraries that correct for incompatibilities. If you do that, then your application will become independent of running on browser A versus B. And we know this is possible. The example that impresses me most is the office applications that today run in browsers (like Microsoft Office in the online version, Google docs/sheets/... suite, and most amazingly MS Visio). These applications run perfectly, and they do so on Chrome, Safari, and Edge. And while they are extensively tested, their good behavior isn't tested and then band-aided into them, they are designed for browser independence.

In a nutshell: Your proposal of freezing a certain browser version is really an admission that some software is badly written, and is a house of cards.


----------



## PMc (Feb 13, 2020)

Nicola Mingotti said:


> This does not correspond to my understanding of Firefox ESR.



Maybe that's because I don't have an understanding. I just need the thing, so I build it and use it.



> But it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a *pkg install|upgrade foobar*. If you use Qt applications you have seen this happen several times.



That won't happen. It might only "stop working" if I upgrade one of the packages in the _run-depends_ list. And that will also not happen, because if I upgrade a package in the run-depends list, my build system will automatically recompile firefox as well.
And no, this is not about "professional use"; this is learned from long experience of running into problems, debugging and fixing them, and then writing scripts that make certain the problem will never appear again.
Because that's what I do: if I run into a problem, I do not waste time writing proposals. I figure out what the problem is, and try to fix it. Then, if it is of general interest and if I can reach the developers (this is usually the greatest difficulty) of the component that actually has the problem, I tell them and maybe suggest a fix (the last times this happened were here and here).



> My knowledge of the ports system is limited but, to the best of my understanding, you can see this if you do: `cd /usr/ports/www/firefox-esr ; make run-depends-list`. You can see that it depends on 40 packages. For Firefox the number is the same; for Chromium it is 54.



Yes, and I am surprised that all of this does work well. (There seem to be an awful lot of people who do not write proposals and just solve the problems.)



> 3] I used Ruby for a few system services recently (not Rails). It is an extremely beautiful language. I like JRuby especially. My approach to the web is JavaScript on both sides: Node on the server, Nginx in front of it. I am personally a big fan of single-page web applications. I don't use "complex frameworks". JavaScript+HTML+CSS+jQuery, this is my framework.


The web didn't ask me if I want it. I just needed a GUI for my Postgres, and in 2008 RoR appeared to be the only thing offering programmable extensions. That's also what I do: if I come across something, I don't state that I have no skills for it, I just start to learn it.


----------



## Nicola Mingotti (Feb 13, 2020)

PMc said:


> The web didn't ask me if I want it. I just needed a GUI for my Postgres, and in 2008 RoR appeared to be the only thing offering programmable extensions. That's also what I do: if I come across something, I don't state that I have no skills for it, I just start to learn it.




NO! All wrong! Your opinion suggests you have never worked on a complex problem. 

You need other people's abilities to solve complex tasks. 

And the first thing is honesty with yourself. Otherwise we are all kernel driver writers who can also lay out a neural net, invest our bond portfolio with it, design a quantum computing algorithm and, why not, build the quantum computer itself. This is not the reality. I am sorry. 

A few examples. Do you think McCarthy wrote Lisp? No! He designed it. Do you think Alan Kay wrote Smalltalk or the GUI? No! He had the seminal ideas, but there was a bunch of great hackers with him. Steve Jobs ... ?

There is no scarcity of examples. You may have an idea, and you may not be the perfect person to build it. If you have the humility to admit it, your idea may be built by somebody else who recognizes its usefulness and can do it well and easily.

My "idea" in this case is a humble static package; it is nothing revolutionary, but it would solve an issue that I guess several FreeBSD users have had and will have in the future.


----------



## blackhaz (Feb 13, 2020)

I fully support this idea - app bundles. macOS does a great job of this: you never run into dependency hell there with common desktop stuff. I actually think it would be beneficial for all users if _all_ complex desktop-oriented software packages for FreeBSD were statically compiled or supplied in macOS-style bundles. This would definitely save lots of headaches and improve the overall stability of the system.


----------



## tankist02 (Feb 13, 2020)

I think PC-BSD tried bundles with PBI and it did not quite work out.


----------



## Nicola Mingotti (Feb 13, 2020)

ralphbsz said:


> Which means that you did not do a good job developing your web app today. If you want to go down your path of tightly integrating your app with a particular version of the browser, then you need to tell your user also: Please run my app on a computer that is running version X.Y.Z of the operating system, and version A.B.C of the Firefox ESR browser, and versions L.K.M of all the other middleware that influences the behavior of your app. This is actually commonly done in large corporate settings (for example data centers, where upgrades are very rare, and very well orchestrated and tested), and in the embedded world. One way to do it is to say: We will integrate and test the complete suite of all systems (hardware and software) exhaustively, and then install and run that version for a long period, perhaps 6 months or a year. While that version is in production, we will do either no changes at all, or only the absolute minimum necessary to fix gaping security holes. And we spend the 6 months or year testing the next version on a separate test system, to get ready for the next major upgrade. Underlying this is the philosophy of "never touch a running system", which I like to describe with this joke: To administer a computer you need a man and a dog; the man is there to feed the dog, and the dog is there to bite the man if he tries to mess with the computer.
> 
> But that way of deploying systems really means that you have not architected dependability into your system (hardware, software, middleware, limpidware, ...), but instead you tested it into the system. And we all know that you can not test quality into software. A house of cards remains a house of cards and will be fragile, even if it currently seems to be standing on its own.
> 
> ...



ralphbsz , this would be material for a great discussion over a beer in a bar.

I add a few considerations.

*Provocation*. Software is a house of cards. You change a tiny bit and it all falls apart. We are very far from the "stability" you have in mechanical engineering structures (very rough, but I guess you understand what I mean). Some time ago in Windows, a wrong pointer in whatever app and goodbye OS, reboot. Unix is better, but still: a mistake in a driver, you connect your device and hopla, system down.

*Stability of Microsoft Web App*. Umm, I think the stability of the web apps backed by big corporations is not due to great a-priori study of each browser. It is instead continuous correction of bug reports. This is a good thing about web apps: you can correct all the time (and insert some new bugs; nobody will know until they hit them)... but you need to be big to do continuous development.

To my knowledge there does not exist a *specification* for each *browser*. You try your code and see what breaks. CSS is interpreted a bit differently in each one. The only truly constant thing I found in the web world is JavaScript. That is a kind of solid base, and cross-browser.

In terms of *specifications*, maybe *Android* is the worst I had to deal with: you write and test on a phone, OS release X.Y, then you load the app onto another phone brand, same OS, same release, and it does not work. This happened to me several times. Vendors customize the OS and make an Olympic mess. (One name for all, just to be concrete: Samsung.)



ralphbsz said:


> And if my application breaks after an upgrade, I can be 99% sure to know the culprit: the underlying implementation of C++ or POSIX stopped being standards conforming.



But Android users do not care if vendor Q broke things. They see your app does not work. It is your fault. You must ensure it works! Everywhere? ... yes, that is impossible.

IMO, Firefox ESR is good, as you say, when you are targeting a corporation or an institution that needs an internal service. Because they want something that works, does not break, and they are not willing to pay for your continuous adjustments to the code. 

bye
n.


----------



## mark_j (Feb 13, 2020)

tankist02 said:


> I think PC-BSD tried bundles with PBI and it did not quite work out.


I don't know specifically about PC-BSD, but I suppose a lot would 'fail' purely because of the GPL.


----------



## ralphbsz (Feb 14, 2020)

Nicola Mingotti said:


> Software is a house of cards. You change a tiny bit and it all falls apart. We are very far from the "stability" you have in mechanical engineering structures.


I disagree. While lots of consumer-facing software (in particular that written by amateurs for free) is very bad and very unstable, there is also a lot of extremely well written software. Supercomputer centers (with tens of thousands of nodes) that only need to be "rebooted" or shut down once a year for power distribution maintenance, and run around the clock the rest of the year. The stuff that runs data centers for the great cloud companies. The embedded systems that run cars and dishwashers and airplanes (with a few famous exceptions, like the 737 Max, which was not a software bug but a training and hardware issue). The most famous example is the software (written in the 1960s) that ran the life support system on the Apollo capsules: they carefully measured the bug rate, and it was zero. Meaning they never found a bug, in spite of very careful checking. It also had a productivity of about one line of code per engineer-month. But that was considered a good investment, since a bug would have killed 3 astronauts. Along the same lines: When was the last time you did a google search and got an error page back, saying "due to a bug we can not give you a result"?

Really good software exists. We, as a profession, even know how to do really good software engineering. But good engineering comes at a cost, and it's not always worth investing in. And quite a few people (or companies or cultures) don't even know how to do it.

Nicola Mingotti said:


> *Stability of Microsoft Web App*. Umm, I think the stability of the web apps backed by big corporations is not due to great a-priori study of each browser. It is instead continuous correction of bug reports.


You're not all wrong here. The big developers of web-based applications clearly have enormous teams, including really good quality control and bug fixing. But: fixing bugs takes time. If you are using a spreadsheet on the web, and it calculates a wrong number, or it crashes on you, the bug that caused that will not be fixed for many days or weeks. Just rerunning all the tests after a bug fix will take considerable time. So the bug-free-ness of these great big web applications is mostly engineered into them, not added by fixing bugs.

This is actually a version of a very general statement: *YOU CAN NOT TEST QUALITY INTO SOFTWARE*. I'm deliberately writing this in bold uppercase, because it is important. Good (reliable, efficient, ...) software is that way because it is architected that way, and at all parts of the software production process (from collecting requirements through design, implementation, test and packaging), the people who do it have the resources required to do a good job, and they have a culture of wishing to do a good job. You can't get to good software by hacking something together, and then testing it and fixing bugs until it is perfect. That's because in a bad design, and with ill-understood requirements, every bug fix will create more problems. I was actually once in an organization that carefully measured this (yes, good engineering is driven by metrics), and we had reached the point where our source base was so awful that, on average, every bug fix created 0.7 new bugs. Yes, testing and bug fixing is important, but it is mostly to fix minor nits and misunderstandings, and to measure and validate how well your software development process is really working.



> To my knowledge there does not exist a *specification* for each *browser*. You try your code and see what breaks. CSS is interpreted a bit differently in each one. The only truly constant thing I found in the web world is JavaScript.


And that's why good app developers use Javascript mostly today (or other languages that run in the browser), and have carefully mapped the rendering incompatibilities of browsers. You can even get textbooks that explain differences between browsers. When developing browser-based software, you don't do it by "try throwing it against the wall and seeing whether it sticks", but by consulting documentation of how each browser handles what. No, those are not specifications distributed by the browser vendors, but documentation that is either publicly available (for example w3schools has compatibility charts), or much more detailed documentation developed internally by software development companies.

The funny thing is, by the way, that in the early 2000s, Javascript support between the various browsers was an AWFUL mess. I tried to do some AJAX work in the early 2000s, and at that time you had to spend a lot of your effort on compatibility libraries between browsers. This has improved massively today.


----------



## Nicola Mingotti (Feb 14, 2020)

Dear ralphbsz , I like your opinions; they are almost the opposite of mine.


----------



## PMc (Feb 14, 2020)

Nicola Mingotti said:


> And the first thing is honesty with yourself. Otherwise we are all kernel driver writers who can also lay out a neural net, invest our bond portfolio with it, design a quantum computing algorithm and, why not, build the quantum computer itself. This is not the reality. I am sorry.



_Argue for your limitations, and sure enough, they're yours._ -- Richard Bach


----------



## drhowarddrfine (Feb 14, 2020)

A few notes.

There is only one language that runs in a browser, and that's JavaScript. Unless you want to include the JavaScript-related WebAssembly, but it's not intended for general use; that is, it's intended as a target for compiling other languages into something the browser can run.  To repeat: no other programming language runs in the browser.

There is only one specification for browsers, and it's written by the browser vendors themselves. The WHATWG people splintered off from the W3C a while ago, but now the W3C publishes the specs from the WHATWG. Note that w3schools is not related to the W3C in any way, shape or form.

So the best sources everyone uses for web development are the spec itself and the Mozilla Developer Network (MDN), which is supported and written by the browser vendors, plus CanIUse for compatibility checking.



ralphbsz said:


> While lots of consumer-facing software (in particular that written by amateurs for free) is very bad and very unstable


Take web development in general. I don't know about very large companies, but most sites you see are held together by spit, glue and the latest fan craze on reddit. Speed of development at all costs is the overriding motto.


----------



## mark_j (Feb 14, 2020)

Javascript is a cancer. A world free of that junk would be much simpler and less prone to <insert latest browser exploit here>.


----------



## 20-100-2fe (Feb 14, 2020)

Nicola Mingotti said:


> Software is a house of cards. You change a tiny bit and it all falls apart.



Mine never did in 30 years of professional software development.
And I'm no exception, all the colleagues I worked with were achieving the same quality.
We have seen and fixed pieces of software that were that fragile only because they had been developed by incompetent persons.


----------



## ralphbsz (Feb 14, 2020)

mark_j said:


> Javascript is a cancer. A world free of that junk would be much simpler and less prone to <insert latest browser exploit here>.


It's perfectly possible to write clean and stable code in JavaScript. It takes effort and a good engineering mindset, but it can be done. It has recently become easier with TypeScript, which is JS with type checking.
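
For instance (a toy sketch, not from the post), the same function in TypeScript rejects at compile time a call that plain JS would only fail on at runtime:

```typescript
// Type annotations let the compiler catch mistakes before the code runs.
function area(width: number, height: number): number {
  return width * height;
}

// area("3", 4);   // compile error: string is not assignable to number
console.log(area(3, 4));  // 12
```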

Now, is a lot of the JavaScript in web pages junk? Sure, because it's done cheaply and sloppily. But that's by choice.


----------



## mark_j (Feb 15, 2020)

Javascript is bloated nonsense. It's what makes us need faster and faster CPUs to handle the guff that's on every web page these days. Take a look at the 'engines' required to interpret javascript; they're enormous and growing. As I said, the major way to break into any system is via a web browser, via a web page running this cancer called javascript.

I also challenge the notion that people can and do write good "clean and stable code" in JavaScript. Most of it seems to be re-used, re-purposed code where the authors have NO idea how it does what it does. Its main purpose seems to be to facilitate spying and the selling of ads. It has very few redeeming features.

Anyway, I digress from the topic at hand.


----------



## ralphbsz (Feb 15, 2020)

I agree that we are digressing.

And I agree that a lot of the code that runs in web pages today is junk.

But the problem is not JavaScript, the language. I know quite a few people who write JavaScript for a living, and are perfectly respectable software engineers. A significant fraction of it is NOT deployed in browsers, for example look at the Node.js initiative, which is explicitly intended to support the use of JavaScript outside of browsers. And I know people who can write well-engineered and reliable code that runs in browsers, and is neither inefficient nor attackable.

The basic problem is not JavaScript. Getting mad at JavaScript is like getting mad at guns, knives, clubs, and rocks, which are used by people to kill or hurt other people. The problem is that there is good money to be made by spying on people and hacking into their privacy. These are all illegal activities, and they happen to use JavaScript when on the web. The correct response is not to ban the tools that they (and many legitimate people) use, but to go after illegal activity.

By the way: How does this forum work? The web pages rely heavily on JavaScript! Look at the downloaded pages with an editor sometime. Without JavaScript, forums like these would be much more rudimentary and hard to use.


----------



## Nicola Mingotti (Feb 15, 2020)

We are wildly digressing, so I will refrain from arguing in favor of JavaScript, and also from contesting the view that there exists at all a class of people who can produce bug-free software.


----------



## fernandel (Feb 15, 2020)

ralphbsz said:


> Getting mad at JavaScript is like getting mad at guns, knives, clubs, and rocks, which are used by people to kill or hurt other people.


I do not know about JavaScript, but guns and other weapons were made for killing, throughout history and in the present too.


----------



## PMc (Feb 15, 2020)

fernandel said:


> I do not know about JavaScript, but guns and other weapons were made for killing, throughout history and in the present too.



You have a point there, but then, I think there is a bigger picture to view.

I think arguing if javascript is good or bad misses the actual point. Which is, the underlying architecture is fundamentally unsuited to the task. Http was devised as a stateless protocol for retrieval of static pages. But what we do now is 100% stateful and 90% distributed computing. If you put such "add-ons" onto a structure not the least designed for it, the outcome can only be ugly and problematic.

And JavaScript is not the only such issue. The whole matter of secure authentication and TLS is just big trouble. And the idea of discerning pages on the same host by different hostnames (instead of different paths) ridicules both HTTP and DNS.
In short, that whole patchwork has become a horror creation of Frankensteinian dimensions. But this is what happens when the "free market" is allowed to take possession of something as beautiful as the Internet once was.

Then, concerning that "spying": this is simply a lie.
Nobody is spying on You, because nobody is in the least interested in You personally. I know this is hard to take, but we must face the truth: the only thing that FAANG and those folks are interested in is Make Money Fast.
You are just a means to that end. You are cattle.
And FAANG is not spying on You; they are only taking care of their livestock.


----------



## Nicola Mingotti (Feb 15, 2020)

fernandel said:


> I do not know about JavaScript, but guns and other weapons were made for killing, throughout history and in the present too.


Caveat: broadly digressing, and provocative.
Well, consider: if there were no weapons at all, the mayor of your village would still be whoever punches hardest. Weapons are the only thing that gives a chance to the fake concept of equality. In some sense they are the core foundation of democracy.


----------



## fernandel (Feb 15, 2020)

Nicola Mingotti said:


> Caveat: broadly digressing, and provocative.
> Well, consider: if there were no weapons at all, the mayor of your village would still be whoever punches hardest. Weapons are the only thing that gives a chance to the fake concept of equality. In some sense they are the core foundation of democracy.


...whoever punches hardest... but the village would still be here. I don't agree about the "core foundation of democracy", but nowadays if you have nuclear power they don't "bother" you, and that has nothing to do with democracy.
But a debate about weapons is IMO not one for a FreeBSD forum; it is better suited to a philosophical debate.


----------



## Nicola Mingotti (Feb 15, 2020)

fernandel said:


> ...whoever punches hardest... but the village would still be here. I don't agree about the "core foundation of democracy", but nowadays if you have nuclear power they don't "bother" you, and that has nothing to do with democracy.
> But a debate about weapons is IMO not one for a FreeBSD forum; it is better suited to a philosophical debate.



Bah, nuclear, I would say it is the same. Instead of a village of people, it is a village of states.

Anyhow, I will go write some code. I find it definitely more interesting to build stuff than to talk about it.

happy weekend folks !


----------



## ralphbsz (Feb 15, 2020)

PMc said:


> I think arguing about whether JavaScript is good or bad misses the actual point. Which is: the underlying architecture is fundamentally unsuited to the task. HTTP was devised as a stateless protocol for the retrieval of static pages. But what we do now is 100% stateful and 90% distributed computing. If you put such "add-ons" onto a structure not in the least designed for them, the outcome can only be ugly and problematic.



Actually, there is a way to look at it that makes it very sensible. Today state is kept in the cloud, far away from (unreliable and insecure) end devices. If you order something online, or renew your car registration online, the state of the transaction (the book order from Amazon, the payment of your registration fee for a license plate number) is kept in the servers of Amazon and of the DMV. But that means that you need an application on your end device to interact with the server. In the very old days, that application was a card punch: you submitted your requests as punched cards, and got printouts back. Then we moved to terminals, where users had either an ASCII terminal or a 327x on their desk, and submitted requests by filling in forms or entering numbers on the terminal, and then got the results back on the screen. None of that is comfortable, nor does it scale well.

So what we do today is that we give users applications which run on their end devices, and that communicate with the servers over standard sockets (usually with authentication and encryption). There are two distinct modes of doing that. One is to make the user actually download and install the application, and then explicitly run it. That's for example how a lot of e-mail and navigation is done. The other way is much more lightweight, flexible, and dynamic: the user finds the service using the web (which after all has good search mechanisms), and then downloads an "application", namely a big chunk of JavaScript, using the http(s) protocol. That application then runs (in a standardized sandbox, that being the web browser), interacts with the user (like making them fill in fields, for example the name of the book they want to order, or the license plate they want to renew the registration on), and it interacts with the server via standard sockets (to find out how much the book costs and what it looks like, or to find out how much the registration fees are, and then to actually perform payment). In this way http(s) has become a protocol that is used for many things that are not really static pages any more: download blobs of executable code (namely the JavaScript that creates the actually visible page and interaction), and tunnel many other protocols.

If Tim Berners-Lee had proposed this in the early 90s, it would have seemed sensible then, and it still is. It has taken us 30 years to get to reasonably standardized programming languages for "instantly downloadable applications" (that being JavaScript) and reasonably standardized mechanisms for rendering output to the human and getting their input (that being HTML and the user interactions of JavaScript), and we went through several dark ages in the process (remember when, for example, everyone thought web applications would all be compiled Java classes, and all we really got was nervous text).



> The whole matter of secure authentication and TLS is one big trouble.


Yes, the design of https, TLS, SSL, authentication and all that is a patchwork, a mess. But it ultimately works.



> And the idea of discerning pages on the same host by different hostnames (instead of different paths) makes a mockery of both HTTP and DNS.


Dirty hack. Genius for the simplicity of setting up many virtual web hosts on a single server, but a giant mess to administer. I hate it. I've never set something like that up, but I've had to do maintenance on it. Unpleasant.
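For readers who haven't met the hack in question: with name-based virtual hosting, several sites share one IP address, and the server picks the site purely from the `Host` header the client sends. A toy dispatcher showing the mechanism (the hostnames and site labels are invented for illustration):

```python
# One listening address, many "sites": the server chooses the site from
# the Host header alone -- DNS resolves all the names to the same address.
SITES = {
    "www.example.org":  "corporate homepage",
    "blog.example.org": "the blog",
    "shop.example.org": "the web shop",
}

def dispatch(raw_request: str) -> str:
    """Pick a virtual host from the Host header of a raw HTTP/1.1 request."""
    for line in raw_request.split("\r\n")[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "host":
            return SITES.get(value.strip(), "404: unknown virtual host")
    return "400: HTTP/1.1 requires a Host header"

req = "GET / HTTP/1.1\r\nHost: blog.example.org\r\n\r\n"
print(dispatch(req))   # -> the blog
```

The administrative mess follows directly: certificates, logs, and access rules all have to be kept straight per hostname on a single machine.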



> In short, that whole patchwork has become a horror creation of Frankensteinian dimensions. But this is what happens when the "free market" is allowed to take possession of something as beautiful as the Internet once was.


I've used networks-of-networks since 1982, and the "internet" (meaning TCP/IP based networks with long-range connectivity) since 1986. It was never beautiful. It was always a horrible mess, hacked together with enthusiasm but without critical thinking. For example, the horror of trying to route an e-mail from a Bitnet host to a UUCP host via an intermediate Arpanet machine, and then replying to that e-mail. Or working out how to download from Simtel20 (because that machine had 36-bit words, so all the files that were ftp'ed from it were interesting). It has always been a mess. There have been well-organized, top-down designed networks in the world, but either they failed, or they were internal to organizations (such as IBM or the military) that have very strong command and control mechanisms.



> Then, concerning that "spying": this is simply a lie.
> Nobody is spying on You, because nobody is the least interested in You personally. I know this is hard to take, but we must face the truth: the only thing that FAANG and those folks are interested in is Make Money Fast.


That is mostly true. The companies that everyone is upset about (FAANG, plus their Chinese counterparts, and the ISPs) really have no interest in spying. All they want to do is sell their services. Sometimes you are the customer, sometimes you are the ingredient.

On the other hand, some organizations are really spying on you (and on all of us), namely a variety of non-existing agencies.


----------



## 20-100-2fe (Feb 15, 2020)

I've not worked with punch cards, but I've seen quite a bit of the evolution of HMI.
My perception is that user experience has driven the evolution of technology all that time.
Now, front-end technology is mature enough in terms of user experience and it continues its evolution to be as satisfying for the developer as for the user.

ECMAScript, even with the improvements brought by TypeScript, is still a piece of sh*t compared to say, Java.
But currently available ECMAScript frameworks allow developers to bring smiles to their end-users' faces.
And this is ultimately why we get up every morning and cope with technology all the day - for the smile of our users.
If it were only for money, we could earn it with another job.
If it were only for the fun of technology, we would soon stop because technology doesn't make sense in itself.
What makes our efforts sensible and meaningful is the contribution we can make to a collective work thanks to our command of technology.
And the smile of a user is a much more tangible measure of the value of our contribution than a number on our bank account statement.



ralphbsz said:


> Actually, there is a way to look at it that makes it very sensible. Today state is kept in the cloud, far away from (unreliable and insecure) end devices. If you order something online, or renew your car registration online, the state of the transaction (the book order from Amazon, the payment of your registration fee for a license plate number) is kept in the servers of Amazon and of the DMV. But that means that you need an application on your end device to interact with the server. In the very old days, that application was a card punch: you submitted your requests as punched cards, and got printouts back. Then we moved to terminals, where users had either an ASCII terminal or a 327x on their desk, and submitted requests by filling in forms or entering numbers on the terminal, and then got the results back on the screen. None of that is comfortable, nor does it scale well.
> 
> So what we do today is that we give users applications which run on their end devices, and that communicate with the servers over standard sockets (usually with authentication and encryption). There are two distinct modes of doing that. One is to make the user actually download and install the application, and then explicitly run it. That's for example how a lot of e-mail and navigation is done. The other way is much more lightweight, flexible, and dynamic: the user finds the service using the web (which after all has good search mechanisms), and then downloads an "application", namely a big chunk of JavaScript, using the http(s) protocol. That application then runs (in a standardized sandbox, that being the web browser), interacts with the user (like making them fill in fields, for example the name of the book they want to order, or the license plate they want to renew the registration on), and it interacts with the server via standard sockets (to find out how much the book costs and what it looks like, or to find out how much the registration fees are, and then to actually perform payment). In this way http(s) has become a protocol that is used for many things that are not really static pages any more: download blobs of executable code (namely the JavaScript that creates the actually visible page and interaction), and tunnel many other protocols.


----------



## PMc (Feb 15, 2020)

ralphbsz said:


> Actually, there is a way to look at it that makes it very sensible. Today state is kept in the cloud, far away from (unreliable and insecure) end devices. If you order something online, or renew your car registration online, the state of the transaction (the book order from Amazon, the payment of your registration fee for a license plate number) is kept in the servers of Amazon and of the DMV. But that means that you need an application on your end device to interact with the server.



Ah, I see. You have the experience and perspective to perceive the continuity from the mainframe to the cloud.
I was always looking at something entirely different.

With the mainframe we have a big and expensive central provider machine and small and (rather) inexpensive consumer machines (terminals). The same is true of radio stations, newspapers, in fact all the traditional media.
And this is so by technical necessity: it doesn't work any other way.

With the Internet things are completely different. On the Internet every attached piece can act as a server or client, and there is no technical distinction between them. I think this is a very essential difference, and at this point it could have been possible to get rid of the centralized structures in toto. (Some technologies do actually use this: torrents, bitcoin).
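The symmetry is easy to demonstrate: with plain sockets, the same program can take either role, and nothing in the stack distinguishes "server" from "client" beyond who calls `accept` and who calls `connect`. A self-contained sketch (the roles here are arbitrary; either side could play either part):

```python
import socket
import threading

# Bind first in the main thread so the "client" below cannot race the
# listener. Port 0 lets the OS pick a free port; nothing is special about it.
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]

def serve_once():
    """The 'server' role: accept one connection and echo a greeting back."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"hello, " + conn.recv(1024))
    srv.close()

def connect(name: bytes) -> bytes:
    """The 'client' role: same machine, same socket API, other side."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(name)
        return c.recv(1024)

# Either role is just a function call; any attached host can play both.
t = threading.Thread(target=serve_once)
t.start()
reply = connect(b"peer")
t.join()
print(reply)
```

The hierarchy of "providers" and "consumers" is thus a social and economic arrangement layered on top, not a property of the network itself.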
But this concept is highly disruptive on a social level: it breaks our understanding of hierarchy and dependency.

And so, as I perceive it, after some time of experimentation (usenet et al.) there was a fallback to hierarchical structures. We do not need an e-mail provider, because we have all the technology to run our own e-mail. We do not need facebook, because we have all the technology to run our own webserver. But people value dependency over knowledge, and so, with the spread of the Internet, those providers (of technically superfluous things) came into being, and happened to become the richest and most powerful people in the world.
"The slaves shall serve", as Aleister Crowley put it.

There was once a promising technology called DCE, which brought the idea of computing onto a generally distributed level. This was by no means complete, but it addressed the issue that plain equality means chaos, so in a distributed landscape you need even more of a solid organizational structure. But here also the market was not ready for it, and then somehow OpenGroup and IBM hosed it.


----------

