# (Solved) Can't view this website



## golpemortal (Jun 6, 2022)

Fellow FreeBSD users, I am pulling my hair out over why this website cannot be viewed. Can anyone help shed some light on this issue?


http://teslagenx.com

I am using FreeBSD 13.1


I can view the website on other Linux distros and on Windows, but NOT on FreeBSD.


----------



## Vull (Jun 6, 2022)

I don't think .coml is a valid domain name. Try: http://teslagenx.com/


----------



## golpemortal (Jun 6, 2022)

Vull - That was a typo... it's edited now. The site still cannot be viewed on my box.


----------



## golpemortal (Jun 6, 2022)

See a screenshot of the site when I try to view it.


----------



## Menelkir (Jun 6, 2022)

golpemortal said:


> See a screenshot of the site when I try to view it.


Are you using local-unbound? I had some issues with a couple websites with unbound some time ago.


----------



## gpw928 (Jun 6, 2022)

Can you fetch index.html with `wget http://teslagenx.com/`?
And does it look right with `w3m index.html`?


----------



## jmos (Jun 6, 2022)

403 is a server-side decision. As we can't see the configuration and/or setup of the server: ask the maintainer of the website - no one here can tell you why.


----------



## gpw928 (Jun 6, 2022)

They are playing games on the server side with the "user agent". Try using something that looks like Linux:

```
chrome --user-agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
```


----------



## getopt (Jun 6, 2022)

golpemortal

Which browser do you use?

The site renders fine here on 13.1-RELEASE with Firefox, even with JavaScript disabled.


----------



## fjdlr (Jun 6, 2022)

Ditto for me with:
www/firefox (NoScript plugin disabled); I'm using dns/unbound.


----------



## scottro (Jun 6, 2022)

I also can't view it on FreeBSD, in Chromium or Firefox. Brave browser on Linux works, as does Firefox on Linux.


----------



## getopt (Jun 6, 2022)

scottro said:


> I also can't view it on FreeBSD, in Chromium or Firefox. Brave browser on Linux works, as does Firefox on Linux.


Try setting in firefox about:config

```
privacy.resistFingerprinting true
```
This changes the user-agent string:

```
Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0
```


----------



## scottro (Jun 6, 2022)

Yup, that fixed it in Firefox. golpemortal, please note: that fixes the issue with Firefox on FreeBSD.


----------



## getopt (Jun 6, 2022)

jmos said:


> no one here can tell you why.


Hmm, ...


----------



## drhowarddrfine (Jun 6, 2022)

I said it elsewhere here. In this day and age, I find it astounding that a website would do such a thing based on the user-agent string, which was labeled an amateur blunder decades ago.


----------



## jbo (Jun 6, 2022)

drhowarddrfine said:


> I said it elsewhere here. In this day and age, I find it astounding that a website would do such a thing based on the user-agent string, which was labeled an amateur blunder decades ago.


I fully agree with that. I'm currently working on a project which involves making regular HTTP requests to an HTTP server. Nothing special about that. The HTTP client is home-grown, though. During testing, we found a lot of rejections simply because of the user-agent string.

I fail to understand the reasoning behind this on every single level. After all, the user-agent string is just that: a simple string. You can set it to whatever you like. It cannot (and must not) be used for any kind of "security". The only reasonable explanation I could come up with is preventing "bots" from successfully making requests, but again: just change the agent string and you're done.
If the reasoning is instead metrics: you screw up your own metrics by forcing people to set whatever user agent is necessary to "pass".

The web has become a truly ridiculous wasteland. There are few things that give me less pleasure than working with "modern web technologies".


----------



## drhowarddrfine (Jun 6, 2022)

jbodenmann The only thing that used to make sense was adjusting CSS or JavaScript per user agent to work around problems with a particular browser. We gave up on that long ago, because we strictly followed the standards and IE went away (for our users), so we never bothered - but, as you said, some users will change the string, so that wouldn't help anyway.

Then again, any user who changes their user-agent string and doesn't get what they expect shouldn't expect to get what they expect.


----------



## fjdlr (Jun 6, 2022)

Oh, yes......., user agent is fun !


----------



## getopt (Jun 6, 2022)

And oh, yes....., Mozilla has more user-agent fun, which is actually related to the first 3-digit-version bug: "Difficulties opening or using a website in Firefox 100 | Firefox Help" (support.mozilla.org) - a potential temporary solution if a web page is not opening correctly in Firefox 100.

----------



## obsigna (Jun 6, 2022)

The world is not black & white. It all depends on what we do with the user-agent information.

For example, my embedded web server offers HTTP SHA-256 Digest Access Authentication (according to RFC 7616) besides the obsolete HTTP MD5 Digest Access Authentication (according to RFC 2617). The problem is that, with the exception of Mozilla and Opera, the other browser vendors obviously didn't recognize that MD5 is obsolete, and some of their browsers bail out on the SHA-256 authentication offer.

For this reason I check the user agents for the browser versions:

```
shaDigestReady = (fields.UserAgent.content
                     && ((s = strstr(fields.UserAgent.content, "Firefox/")) && strtol(s+8, NULL, 10) >= 93
                      || (s = strstr(fields.UserAgent.content, "OPR/"))     && strtol(s+4, NULL, 10) >= 80));
```

You will have a hard time convincing me, that there is anything wrong with this.


----------



## jbo (Jun 6, 2022)

obsigna said:


> You will have a hard time convincing me, that there is anything wrong with this.


The thing that is "wrong" here is that we (as in humanity) apparently need to distinguish clients on the server side. This makes little to no sense. There are standards for a reason. It just seems that web technologies are either poorly defining standards, poorly maintaining standards or poorly implementing standards - most likely a combination thereof.

Web technologies have become so accessible that people without the necessary background, skills & experience _think_ they can pull off what others study for years and dedicate their lives to.
I'm not trying to say that things should not be accessible, but the more accessible a technology becomes, the more it tends to be bent left and right to "just get this feature landed as quickly as possible".
This ripples through all the layers. Suddenly you have web browsers releasing more frequently than any other piece of software, which leaves little to no time to properly care for security, which means you need yet more frequent releases to patch issues that weren't discovered earlier - while adding more issues to be discovered and fixed a few days later.
Obviously this is no different from the regular software development workflow. I'm complaining about the fact that we seem to need/require/want extremely fast-moving software rather than taking it slow and keeping things nice & neat.

These days, web browsers are pretty much an entire operating system. Why? Why is this necessary? Because we want shiny new bling-bling features, we want them yesterday, and we want everybody with a coffee mug to be able to "make" something. To accomplish that, software gets engineered poorly, release management is done poorly, security is handled poorly. And to get everything done even faster, let's pull in 48719 dependencies without actually auditing, authoring or maintaining them. Again: because you can't. It's beyond anyone's manpower. We have seen governments try this and fail. So I get that it's a manpower & logistical issue. But whenever there are resource limits in the real world, it's time to take a step back, re-evaluate what is actually necessary versus what is just popular demand, and regulate from there.

The web has become a disease. And we'll most likely not get away from that without a skilled group of people dedicating themselves to a rebuild. I doubt that what we currently have is fixable - because it's a systemic problem, not a technological one.
We keep piling stuff on top of stuff. This used to work well in the past because things moved much more slowly. But these days somebody starts building something new on some "technology" that was "invented" a week earlier, implemented by a group of juniors who think they are seniors, and audited by nobody.


----------



## golpemortal (Jun 6, 2022)

getopt said:


> Try setting in firefox about:config
> 
> ```
> privacy.resistFingerprinting true
> ...



This is it... I fixed the issue in the browser by going to about:config and changing that variable, and now I can view the site from FreeBSD 13.1. Thanks getopt, and thanks to all of you for your feedback - I really appreciate it.

The browser that I use 100% of the time is Firefox.


----------



## meaw229a (Jun 7, 2022)

getopt said:


> Try setting in firefox about:config
> 
> ```
> privacy.resistFingerprinting true
> ...


I'd like to mention that this also changes the browser time zone to GMT0; as a result, the browser and computer time differ
for anyone in a timezone other than GMT.
This can be, and will be, used for user tracking and identification.
Therefore I believe it is better to use one of the user-agent switchers: set it to Win10, but stay in your own timezone.
I'm always on my way as Win10 with Chrome or Edge, to look like a billion other sheep. Just don't stand out.
What's under the hood is a different story.


----------



## obsigna (Jun 7, 2022)

jbodenmann said:


> The thing that is "wrong" here is that we (as in humanity) apparently need to distinguish clients on the server side. This makes little to no sense. There are standards for a reason. It just seems that web technologies are either poorly defining standards, poorly maintaining standards or poorly implementing standards - most likely a combination thereof.


We either have to live with what we get, or we simply desist.

This reminds me of a joke told by the mathematics professor when I was studying chemistry at university. In one of the lessons he touched on number series which converge but never reach zero, and he used it to explain the difference between mathematicians and physicists:

_At a university party, the girls and the boys line up in two rows at opposite ends of the dance hall, and at each drum beat move towards each other by half of the current distance. The music and dancing start as soon as they meet in the middle._
_The mathematicians desisted, because the halving rule produces such a converging number series, which never results in a distance of 0.00000..._
_The physicists figured that after the 5th beat of the drum the distance between the girls and boys would be near enough for practical purposes._

I finally graduated as a physical chemist, and with said joke in mind, I won't desist from using a system which is good enough for practical purposes, instead of waiting for it to converge to a hypothetical ideal state in the infinitely distant future.


----------

