# Browser encryption of DNS



## Phishfry (Sep 30, 2019)

What are your thoughts on the browser handling DNS?
Mozilla using Cloudflare. Google with Chrome.
Now your ISP is sad.








Google faces scrutiny from Congress, DOJ over plans to encrypt DNS | Engadget (www.engadget.com)

"Google's bid to encrypt domain name requests appears to be raising hackles among American officials."




Personally I don't think DNS should be a browser function but an operating system function.
Thoughts?


----------



## ralphbsz (Sep 30, 2019)

Phishfry said:


> Personally I don't think DNS should be a browser function but an operating system function.


That was a nice theory when there was still something like an operating system, which ran a variety of programs (local ones, network clients, network servers), networks were small (hundreds or thousands of nodes that one computer may reach in the foreseeable future), and caching DNS made sense, because if an outside host is accessed by one program, it is likely that it will be accessed by another program soon.

But that's not the situation we're in today for desktop machines. On those, fundamentally the only program that is running is the web browser. Many people do their e-mail in the browser, they edit documents in the browser, and they do all the "browsery" things too. For many people it has reached the point that they use an "OS" that consists of nothing but the web browser: I know lots of people who use Chromebooks, and are super happy with them (and that's seasoned computer professionals). Once you have only the browser running, it makes sense to do DNS management and caching right in the browser. As a matter of fact, allowing anyone else to touch the DNS packets or the DNS cache just opens the door to bugs or security problems. And since on many desktops today (even when people still have an OS that allows other apps), the browser does 99% of the work, this makes sense.

As an example, consider the thing on my lap. It is a high-end 15" MacBook Pro (paid for by my employer). I am running three applications right now: a browser (which does most of the work, including both home and work e-mail), an ssh client (which I use to log in to computers), and a VNC client (for working on a stationary Mac at home, which I use for scanning documents). So other than 3 or 4 fixed hosts, nearly all the network traffic goes to/from the browser.


----------



## obsigna (Sep 30, 2019)

This is called DNS over HTTPS, i.e. DoH. It has been pushed through all the instances in no time by Mozilla and Google, and an RFC exists as well -- RFC 8484. They claim this improves user privacy. Actually, Google does this to circumvent DNS-based ad blocking, and Mozilla gets paid for it by Cloudflare. At least in Chrome it can be disabled with one switch, while in Firefox the setting is deeply buried. The trouble starts with the naming: Mozilla officially calls this DoH as well, but the setting got the pseudo-technical nonsense abbreviation trr, and for completely disabling it you need to set network.trr.mode = 5. Why the hell 5 and not 0? Because Mozilla doesn't want you to disable it.
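For reference, that setting can be pinned from a `user.js` file in the Firefox profile directory, so it survives upgrades. A minimal sketch using the two documented trr prefs (clearing the endpoint URI is optional belt-and-braces):

```
// user.js fragment: hard-disable DoH/TRR in Firefox (mode 5 = explicitly off)
user_pref("network.trr.mode", 5);
// optional: also clear the built-in DoH endpoint
user_pref("network.trr.uri", "");
```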

In any case, besides adjusting the trr settings in Firefox, I added an ipfw rule to the FreeBSD gateway for blocking access to Google's and Cloudflare's DNS services:

```
...
# Block DNS bypassing via Cloudflare's 1.0.0.0/24, 1.1.1.0/24 and Google's 8.0.0.0/9
# (ipfw port lists apply to tcp/udp, so one rule per protocol)
/sbin/ipfw -q add 96 deny tcp from any to 1.0.0.0/24,1.1.1.0/24,8.0.0.0/9 53,443,853
/sbin/ipfw -q add 97 deny udp from any to 1.0.0.0/24,1.1.1.0/24,8.0.0.0/9 53,443,853
...
```

I know this is not enough; in the future I will add known DoH services to my dns/void-zones-tools. Unfortunately, stopping this bullshit is not a one-switch operation. The network gaming platform Roblox has been using this since the RFC came out at the end of 2018. Why? To defeat corporate DNS-based policies.


----------



## Phishfry (Sep 30, 2019)

ralphbsz I have a different opinion.
Where we used to have network-layer settings for DNS, this DoH scheme makes it an application-level DNS scheme.
So Mozilla and Chrome will now use two different DNS servers.
Too much control in companies I don't trust.
How about our very own `pkg`?
It uses DNS, so we will have different DNS servers being queried by different applications.
I would call this fragmentation.


----------



## rigoletto@ (Sep 30, 2019)

I've been using dns/unbound forwarding only to servers (list below) supporting DNS over TLS (aka DoT) for a while now, and I have already disabled DoH on Firefox (`network.trr.mode=5`).

In regards to `network.trr.mode`



> 0 - Off (default). use standard native resolving only (don't use TRR at all)
> 1 - Reserved (used to be Race mode)
> 2 - First. Use TRR first, and only if the name resolve fails use the native resolver as a fallback.
> 3 - Only. Only use TRR. Never use the native (This mode also requires the bootstrapAddress pref to be set)
> ...



However, be aware: if you set it to 5 and then switch it ON using the "Preferences" dialog in Firefox, when you disable it again Firefox will set it to 0. Also, you can set up your own DoH proxy server, but I didn't bother with that.


```
forward-zone:
    name: .
    forward-tls-upstream: yes
    forward-addr: 193.17.47.1@853       # CZ.NIC
    forward-addr: 185.43.135.1@853      # CZ.NIC
    forward-addr: 37.252.185.232@853    # Foundation for Applied Privacy
    forward-addr: 146.185.167.43@853    # SecureDNS
    forward-addr: 91.239.100.100@853    # UncensoredDNS
    forward-addr: 89.233.43.71@853      # UncensoredDNS
```

Cheers!


----------



## toorski (Sep 30, 2019)

_"Google has maintained that its Chrome tweaks would give users control over who shares their info, and that it won't force people to switch to encrypted DNS"._

Chrome tweaks? By the majority of dumb end users who have no clue what DNS is or how it works? Just like the cookies and the metadata collection trails that their web browsers leave when visiting web servers.

Google should offer encryption with their own freebie Gmail service before they start using and abusing DNS to their liking, as they do with the WWW.


----------



## obsigna (Sep 30, 2019)

From what I remember, Mozilla's statements about trr.mode carry a different notion. "Mode 0 = OFF by default" is misleading; it should be read "Mode 0 = default -- for the time being = OFF". And Mozilla announced a month ago that they are in the public rollout phase now, so they have already switched Mode 0 = default to ON for an unknown number of clients in the U.S. So, to prevent Mozilla from making the choice for you, you need to set it to 5.


----------



## Crivens (Sep 30, 2019)

I think this is a bad idea. It is me who sets up DNS server configs depending on whom I trust.

And mozilla is on its way off my systems because of their actions.


----------



## MarcoB (Sep 30, 2019)

What happens when you have both DoT and DoH on your system? Do they interfere with each other?


----------



## 6502 (Sep 30, 2019)

Google wants to encrypt DNS traffic so others can't watch it, while still watching it itself.


----------



## toorski (Sep 30, 2019)

MarcoB said:


> What happens when you have both DoT and DoH on your system? Does it interfere with each other?



Then, they won't know where you're coming from and you won't know where you're going to; they won't know your name and how to get back to you.

Unless you figure out how to eliminate the IP from TCP, encrypt the TCP, and then get somewhere without the IP, there's nothing you can do to change the level of security or privacy over TCP/IP.


----------



## MarcoB (Sep 30, 2019)

I do understand how DoT and DoH work, but when both are on, is FF (with DoH) then circumventing DoT running on the system?


----------



## obsigna (Sep 30, 2019)

For DoH, FF would need the system's DNS, either normal or DoT, only for bootstrapping, namely for resolving the IP address of the DoH server it wants to use, in case it is not already known, like 1.1.1.1. After this, FF would circumvent DoT.

At this stage, we can (1.) stop DoH from happening at the firewall, and only in case we know the DoH address.

We could also (2.) prevent DoH bootstrapping by imposing our DNS policies: either by letting DoH domains fail to resolve, forcing FF to fall back to the system's DNS, or by handing out the IP of our own DoH server. The latter might become tricky, because we would need to fake Google or Cloudflare certificates, which means installing our local CA root certificate on all the clients.

We may of course trust Google, Mozilla, and others to always provide a switch so users can easily and effectively disable DoH once and forever. Perhaps this is asking for too much, and therefore it is good to have measures (1.) *and* (2.) in place -- _"Trust, but verify!"_


----------



## usdmatt (Sep 30, 2019)

Been involved in the Internet since the '90s and TBH I'm not the biggest fan. It seems like eventually every service will end up being tunnelled through HTTPS. We already have HTTP moving towards a binary protocol over UDP, so you gain a few milliseconds of efficiency for some website to then go and load 400 kB of JavaScript. (Maybe part of Google's end goal in developing HTTP/3 is to make it more applicable as a transit protocol for other UDP/binary services that would normally be far more efficient than HTTP...)

Not that I'm against improvements in privacy, but it's interesting that Google is involved when their entire business is based on tracking people. Of course, using the Chrome browser and their DoH servers is only going to improve their tracking abilities.

I'm sure a lot of thought and design has gone into it, but the sceptic in me can't help but think companies like Google have a specific interest in developing it the way it is. I would have much rather had it developed as a standard part of DNS, so any DNS server you've configured your OS to use could use TLS over the standard DNS ports (assuming the server supported it).


----------



## xtaz (Sep 30, 2019)

There are checks built in to Firefox that you can take advantage of to force it to be disabled. If any enterprise policy is configured or if security.enterprise_roots.enabled is set to true then it will be disabled. Also if the canary domain "use-application-dns.net" returns NXDOMAIN or SERVFAIL from DNS then it will also be disabled.

This is a temporary measure by Mozilla until such mechanisms are standardised. It remains to be seen if Google build the same into Chrome.
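For a local resolver like unbound, answering NXDOMAIN for that canary domain is a one-line config fragment (this uses unbound's `always_nxdomain` local-zone type, available in recent versions):

```
# unbound.conf fragment: signal "don't use application DNS" to Firefox clients
server:
    local-zone: "use-application-dns.net" always_nxdomain
```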


----------



## rigoletto@ (Sep 30, 2019)

MarcoB said:


> I do understand how DoT and DoH work but when both are on, is FF (with DoH) then circumventing DoT running on the system?



DoH is nothing but a web proxy running some _smol_ code to hook into some _regular_[1] DNS server. So, FF is not circumventing anything; it is just ignoring it.

[1] which can be a DoT DNS server.


----------



## kpedersen (Sep 30, 2019)

ralphbsz said:


> I know lots of people who use Chromebooks, and are super happy with them (and that's seasoned computer professionals).



I have always been intrigued by how seasoned computer professionals are able to do anything using nothing but consumer websites. For example, there are no decent UML, database, or network diagram tools. No serial port access; no disk label utils, etc. There is also barely a terminal emulator or a grep tool. How do they even find documents?

Most companies use a Microsoft/Samba share to store internal shared documents; how are web-browser thin clients able to get at the data stored there? Uploaded to a web server and then viewed using the web service's viewer? That must be so many steps just to view a document! Surely this employee would have to basically be spoonfed documents via email or some other medium. I would absolutely hate to work with a "special" guy like that in my department XD

Perhaps I just don't quite understand what a computer professional is these days.


----------



## Crivens (Sep 30, 2019)

kpedersen said:


> Perhaps I just don't quite understand what a computer professional is these days.


Who is one and who calls himself one are two different things. A friend is trying to get me into Docker and stateless services. Not my cup of tea, but...


----------



## Geezer (Sep 30, 2019)

Will Iridium use DoH?


----------



## xtremae (Sep 30, 2019)

I personally use (and prefer) unbound as it provides DoT pretty much across the board, instead of _just_ browser sessions. This, combined with the ability to use multiple DNS providers with round robin selection and configurable caching TTLs, has the benefit of increased privacy over browser defaults.
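The caching behaviour mentioned above maps to a handful of unbound.conf server options; a sketch with illustrative values (not recommendations):

```
# unbound.conf fragment: tune caching for system-wide resolution
server:
    # keep answers cached at least 5 minutes, even if the record's TTL is lower
    cache-min-ttl: 300
    # never cache anything longer than a day
    cache-max-ttl: 86400
    # refresh popular cache entries before they expire
    prefetch: yes
```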



kpedersen said:


> Perhaps I just don't quite understand what a computer professional is these days.


As a term it has become tarnished, since people misuse it to defend a particular point of view: _if professionals use xyz, then xyz is fine for most people_. However, this only tells half the story because, in this context, there is no evidence to support the argument that professionals are privacy-minded individuals, or at least more so than average users.


----------



## shkhln (Sep 30, 2019)

kpedersen said:


> Most companies use a Microsoft samba share to store internal shared documents



Do they? That seems like a mess to be honest.



kpedersen said:


> That must be so many steps just to view a document!



We can easily reverse that argument by comparing, say, Google Docs collaborative editing features with SMB/NFS collaborative editing  features (if you even consider unreliable file locking a feature, else there is nothing to compare). Not to mention that everyone and their dog nowadays has a file sync app.


----------



## abishai (Sep 30, 2019)

I read some articles, but I still don't understand. If I run a local resolver, do I need DoT locally or through public services? Does FF start to ignore my local resolver? This is stupid, as I have a local zone for my domain.


----------



## kpedersen (Sep 30, 2019)

shkhln said:


> Do they? That seems like a mess to be honest.



Yes, it is always a mess. I will not defend it 



shkhln said:


> We can easily reverse that argument by comparing, say, Google Docs collaborative editing features with SMB/NFS collaborative editing  features (if you even consider unreliable file locking a feature, else there is nothing to compare). Not to mention that everyone and their dog nowadays has a file sync app.



A file sync app is not really the same as a web-browser-only solution (like a Chromebook, or what marketing refers to as "the cloud"). A sync app is basically an even weaker alternative to an SMB/NFS approach and nothing more.

Some workflows of how I imagine the "special" cloud user to compare for image editing:

*The typical desktop user*

1) Copy from NFS/SMB in file explorer to desktop
2) Double-click the file to open it (e.g. in Photoshop), edit and save
3) Copy file back

*The typical developer*

1) svn update
2) Open file in photoshop, edit and save
3) svn commit

*The webbrowser muppet*

1) Copy from NFS/SMB... [Fail, a web browser cannot do this]
2) Pretend the file is already in dropbox
3) Open in photoshop [Fail, photoshop "Cloud" doesn't actually run in a browser]
4) Pretend there is a web photoshop equivalent
5) Open file in web based image editor [Fail, no way to obtain file from dropbox]
6) Copy file from dropbox onto local machine and then upload back to the specific web service (~15 clicks).
7) Edit file and save. [Fail, the web service only saves to its own database, not dropbox]
8) Manually transfer from web service to your drop box (~15 mouse clicks)
9) Copy file back to NFS/SMB [Fail, again no functionality in web browser]

Not a single step here can actually work. Yes, if you stick entirely to Google Docs, perhaps, but then you can forget about using actual tools. It is an absolute joke to try to depend entirely on ratty websites for any kind of workflow. Not even for tweaking holiday photos XD.


----------



## shkhln (Sep 30, 2019)

kpedersen said:


> A file sync app is not really the same as web browser only solution (like a chromebook or what marketing refers to as "the cloud").



I don't think anyone can do any work, other than maybe creative writing, with a Chromebook as their _only_ computer. As secondary machines they are very compelling, as long as you don't mind Google spying on you. Computer enthusiasts are precisely the type of people I would expect to buy such a device for the sheer novelty factor, if nothing else.


----------



## ralphbsz (Sep 30, 2019)

kpedersen said:


> I have always been interested by how seasoned computer professionals are able to do anything using nothing but consumer websites. For example, there are no decent UML, database, network diagram tools.


There is a version of ssh that runs in browsers. It works perfectly well, even with multiple monitors and many ssh windows.
Diagramming tools are all over the place. For example, I have been using Microsoft Visio for the last 15 or 20 years, and recently switched to running Visio on the web instead of installing a copy on Windows. It saves me having to maintain a Windows "machine" just for running Visio.



> No serial port access; no disk label utils, etc.


Serial ports are de facto obsolete. You need them for embedded development, nothing else.
Disk labels? As I said above, people today use a desktop machine with no user-visible OS (for example a Chromebook or an iOS/Android tablet). There are no disk labels, there are no user-visible disks, there are no utilities. You open a browser or installed canned apps, nothing else.



> There is also barely a terminal emulator or grep tool.


Terminal emulator: see above, web-based ssh exists. Grep: that is built into your workflow tools.



> How do they even find documents?
> 
> Most companies use a Microsoft samba share to store internal shared documents; how are web browser thin clients able to get the data stored there, uploaded to a web server and then viewed using the web services viewer.


You run what amounts to a search engine. You can run it on a server. For example, at home I have a large ZFS-based file system on a FreeBSD machine, where I store scanned documents (there are tens of thousands of those; I have a paperless archiving system at home). On that server there is a simple CGI page (20 lines of Python) that lets me find files by a string in the file name. To find files by a string in their content, I have run glimpse a.k.a. agrep before, and made it accessible via another simple CGI script. Alas, that didn't work well: my paper scanning software has really bad OCR built in, so the PDF files it creates have very little searchable content. I need to take all my documents and re-run the OCR on them (and add the text output to the PDF files as another layer or as comments). While I know how to do it, it is quite time-consuming, and I haven't gotten around to it.

Have you tried using cloud accounts like Azure or Google? All your documents are online (and when I say "documents", I don't mean just Word files, but databases, programs, queries, makefiles, spreadsheets, e-mails), and they are all searchable.



> Perhaps I just don't quite understand what a computer professional is these days.


In my example, typically someone with a PhD (or at least an MS) in Computer Science, who works as a software engineer or project manager at a computer company, and doesn't have a "computer" (in the sense of a device with a user-manageable OS), but uses a lightweight stateless desktop or client (like a Chromebook or tablet) for all their work. This example is quite common today.


----------



## Crivens (Sep 30, 2019)

xtremae said:


> As a term it has become tarnished since people misuse it to defend a particular point of view. _If professionals use xyz, then xyz is fine for most people_.


You can short-circuit such nonsense by pointing out that professionals drive cars with 4-digit hp in the rain and without any fancy electronics. Something Joe Sixpack doesn't even want to try.


----------



## Deleted member 30996 (Sep 30, 2019)

ralphbsz said:


> As an example, consider the thing on my lap. It is a high-end 15" MacBook pro (paid for by my employer). I am running three applications right now: A browser (which does most of the work, including both home and work e-mail), an ssh client (which I use to log in to computers), and a VNC client (for working on a stationary Mac at home, which I use for scanning documents). *So other than 3 or 4 fixed host, nearly all the network traffic goes to/from the browser*.



While I would never question your computer expertise, somehow that doesn't sound like a good thing to me. I realize SSH is encrypted traffic, but having it routed through www/firefox-esr, after everything I do to quash its tracking and spying eyes, gives me a bad feeling.

That said, I don't even allow myself remote access and have never used VNC, but it seems to go against what I personally consider good security practice.

Feel free to correct me if I'm wrong.


----------



## kpedersen (Sep 30, 2019)

ralphbsz said:


> Have you tried using Cloud accounts like Azure or Google? All your documents are online (and when I say "documents", I don't mean just word files, but databases, programs, queries, make file, spreadsheets, e-mails), and are all searchable.



I think this is the core of this issue; it is too hard to get documents between different cloud accounts. If you wanted to get your Google docs files into an Office 365 or web ssh session; it is too awkward. You end up having to download it to the local machine and then re-upload it again.

And I suppose then that there is a web ssh, but you have to run it yourself; there is no "cloud" provider for it that integrates with other cloud services; like say a traditional PC approach where it is all on the same disk and easy to access.


----------



## ralphbsz (Oct 1, 2019)

Trihexagonal said:


> That said, I don't even allow myself remote access and have never used a VNC but it seems to go against what I personally consider good security practice.


That's a darn good question, and I had to think about it for a while. Why is this not insanely insecure?

The answer is: That desktop machine is on the *internal* network. While connections that originate from it to the outside world are allowed, it can not be seen from the internet at large. As in: it doesn't even have an IP address that's routable to the world.

In addition, the VNC connection is password-protected. That currently annoys me: I have to type in that password (a pretty random string of about 12 or 15 characters) every time I start the VNC connection. Clearly pretty annoying, but not so much that I have put any effort into working around it (I only use that desktop machine roughly once per week, on a weekend, for a few hours). And I think VNC connections between Macs are encrypted, so even if someone were to listen to network traffic (which is de facto impossible in our network setup and geographic location, unless they are in a helicopter), I think I'm good to go.

So I think it's actually pretty secure.

(The helicopter part is actually not a joke: We live in California, very near Silicon Valley, but in a rural and mountainous area. This afternoon there was a very small wildland fire near our house, and there were two helicopters right above. As in 100 feet outside the bedroom window. This is a rare case when an outsider can actually even get to our WiFi signal; our house is so isolated that without a helicopter you have to be literally sitting inside or on the veranda to get signal. The fire was extinguished within minutes, nothing to worry about.)


----------



## Maelstorm (Oct 2, 2019)

This gives me quite a bit to think about.  The problem that I have is: does the browser now handle DNS traffic itself, or does it still defer to the resolver?  From everything that I have read, DoH is handled by the browser.  That raises a number of red flags for me, because how do we know that the browser can be trusted to use the DNS servers that we have configured?  I'm all for added privacy on the web, but DNS is supposed to be handled by the OS, not an application, certain specific tools exempted.

To add to some specific comments: a Chromebook does have an OS, iOS is an OS, and so on.  The browser may be one of the only apps running on the machine, but the underlying software running the hardware is the operating system, whether that be Android, Linux, FreeBSD, iOS, etc.  That is still required.

As for computer professionals, I am a computer professional and I use a traditional desktop running Windows for my general work.  FreeBSD is used for servers and such.  Linux for embedded.  The right tool for the job.  I've been doing web development these past few months, so I have Apache, PHP, and MySQL loaded on my Windows machine.

Back to the topic at hand: what should be happening is that the browser uses the local resolver as-is, and the resolver uses DoH to connect to a DNS server... or have a local DNS server use DoH to communicate with the outside world in a corporate environment.
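That split (applications talk to a local stub resolver, and only the stub speaks DoH upstream) can be sketched with dnscrypt-proxy; the `'cloudflare'` server name below is just one entry from its public resolvers list, pick whichever you trust:

```
# dnscrypt-proxy.toml fragment: local stub on 127.0.0.1, DoH-only upstream
listen_addresses = ['127.0.0.1:53']

# restrict upstream selection to DoH-capable resolvers
doh_servers = true
dnscrypt_servers = false

# illustrative choice; any name from the public resolvers list works
server_names = ['cloudflare']
```

Point /etc/resolv.conf (or your DHCP-handed resolver) at 127.0.0.1 and every application, browser or not, goes through the same stub.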


----------



## ekingston (Oct 2, 2019)

Maelstorm said:


> what should be happening is that the browser uses the local resolver as is, and the resolver uses DoH to connect to a DNS server....Or have a local DNS server use DoH to communicate with the outside world in a corporate environment.



While I agree with you completely, it defeats Google's desire to do even more tracking of your activities on the Internet. If Chrome does its own resolving and bypasses your OS settings, then they know when you go to sites that don't otherwise have any connection with Google (like analytics or ads). That is information that the big ISPs track and sell; Google wants to take it for themselves.

The other side of the argument is that DNS has had the ability to run over TLS for quite a while now, but very few DNS servers actually implement it. So, even if you (and I) change our default resolver to something not from our ISP, our ISP can still sniff the traffic as it goes through their network to track us. Google will say that they are helping protect our privacy by preventing the ISP from collecting that data and selling it.

And, of course, Google's solution is going to hurt corporations that use DNS to blackhole malicious domains. And it is also going to hurt those of us who do the same thing at home. While there are ways around DNS blackholes, they generally require the end user to intentionally work around DNS. That makes DNS blackholes good enough for a lot of situations. But if my browser is bypassing my DNS, then all of a sudden those known-to-be-malicious malware servers that sneak into ad networks are bypassing my DNS servers, thanks to Google.
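For the record, the home-grown blackholing described above is a one-liner per domain in an unbound.conf (the domain here is a placeholder, not a real blocklist entry):

```
# unbound.conf fragment: blackhole a known-bad domain and everything under it
server:
    local-zone: "malware.example" always_nxdomain
```

DoH in the browser skips this resolver entirely, which is exactly the complaint.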


----------



## Crivens (Oct 2, 2019)

Google will say that they provide a service by centralizing the blocking of dissident ideas ^h^h evil guys so everybody will be safer.

The computer is your friend. Trust the computer.


----------



## obsigna (Oct 2, 2019)

Granted, Google is evil. However, according to reports, with respect to DoH quite a lot less so than Mozilla. I don't use Chrome regularly, but from what I read, with Chrome it is very easy to opt out of DoH, even by way of a company-wide policy. And this makes DoH only annoying, because somebody needs to pull the plug, but not exactly evil.

https://www.translatetheweb.com/?fr...-testweise-auf-DNS-over-HTTPS-um-4520039.html

With Firefox, on the other hand, it is hidden behind misleading settings by default, and Mozilla recklessly wants to push this through at all costs. I am a web developer as well, and I use Chrome and Firefox for testing purposes only. I have a test web server installed on localhost on the development machine, and my local DNS resolves the test virtual hosts to localhost. I found some obfuscated settings in FF which presumably disable DoH, and I disabled it. However, on the first occasion that Firefox no longer resolves my virtual host sites to localhost, I won't search for other settings or do any troubleshooting with it; I will simply shoot the f**ing fox off my systems -- once and forever. Full stop.


----------



## rigoletto@ (Oct 2, 2019)

obsigna,

There is an option to disable DoH at the Firefox "Network Settings" dialog.


----------



## CraigHB (Oct 2, 2019)

What I don't like is relinquishing control of DNS to the browser, basically giving Chrome or FF ultimate say on what can and can not be filtered. 

To me DoH looks like a strategy to block content filters under a smokescreen of improving privacy.  Given that Google makes a living off collecting personal data, I don't see how anyone could find that a believable motivation.  There's also the fact that Google is already removing controls in Chrome that allow content filters to work as browser extensions; sorry, I don't recall the technical details on that.  My feeling is it's an all-out blitz on Google's part to remove users' ability to filter content on the web and further their profit potential.

I just hope controls remain to disable DoH.  It's bad when corporations use their market share to force their policies down your throat, and worse when there's no way around it.

As far as DoT, seems fine to me if it's supported by the traditional resolver and integrated into the DNS standard.  That does seem like something that could be truly aimed at improving security and privacy.


----------



## rigoletto@ (Oct 2, 2019)

CraigHB said:


> What I don't like is relinquishing control of DNS to the browser, basically giving Chrome or FF ultimate say on what can and can not be filtered.



IDK about Chromium, but on FF you can choose which DoH proxy you want to use, including your own.


----------



## Maelstorm (Oct 3, 2019)

ekingston said:


> While I agree with you complete it defeats Google's desire to do even more tracking of your activities on the Internet. If Chrome does it's own resolving and bypasses your OS settings, then they know when you go to sites that don't otherwise have any connection with Google (like analytics or ads). That is information that the big ISPs do track and sell, Google wants to take it for themselves.
> 
> The other side of the argument is that DNS has had the ability to do DNS over SSL for quite a while now but very few DNS servers actually implement it. So, even if you (and I) change our default resolve to something not from our ISP, our ISP can still sniff the traffic as it goes through their network to track us. Google will say that they are helping protect our privacy by preventing the ISP from collecting that data and selling.
> 
> And, of course, Google's solution is going to hurt corporations that use DNS to blackhole malicious domains. And it is also going to hurt those of us who do the same thing at home. While there are ways around DNS black holes, they generally require the end user to intentionally work around DNS. That makes DNS black holes good enough for a lot of situations. But if my browser is bypassing my DNS, then all of a sudden those known to be malicious malware servers that sneak into ad networks are bypassing my DNS servers thanks to Google.



My solution is to blacklist Google's DNS servers on my firewall, which cannot be bypassed.  The IPFW rule to do this was posted earlier in the thread.  So with that, Chrome has no choice but to use my configured DNS servers, which I happen to run myself.


----------



## xtaz (Oct 4, 2019)

You're assuming, though, that Google will always run DoH on well-known servers like 8.8.8.8. The problem with DoH is that it could in theory run on anything. Cloudflare, for example, could run DoH endpoints on every one of its front-end servers. If you then decide to block those, you block half the internet from loading.

At least with DoT it uses a well known port and gives you that choice. DoH doesn't.


----------



## CraigHB (Oct 4, 2019)

There are so many instances I encounter of corporate software products trying to force the user into doing things the way they want.  In this case it's commandeering your DNS to take control of content.  I really get tired of all the non-standard things I have to do with products to make them behave the way I want.  DoH is just another one of those things on an ever-growing pile.

It's a trend I've noticed now with software products where they take more and more choice away from the user.  It really extends into all consumer products.  Quality and support seems to be on a steady decline as well.  It's all about making products as cheap as possible with the lowest overhead.

I remember a time when they tried to establish TQM as a way of doing things in the corporate world, don't know if anyone remembers that.  The objective was to maximize quality.  That idea has been run out of town on a rail over the last couple of decades.


----------



## ralphbsz (Oct 4, 2019)

CraigHB said:


> I remember a time where they tried to establish TQM as a way of doing things in the corporate world, don't know if anyone remembers that.


Absolutely, been there done that. Total Quality Management, Six Sigma, all that. It was horrible, and it was great. That might sound contradictory, but there is an explanation. The idea behind it came from the observation that quality (in particular of software artifacts) was getting horribly bad, much software was chock full of bugs or completely missed the requirements, and fixing and improving it was hard and expensive, sometimes so much that it was outright impossible. Many famous software projects of the 80s died a terrible death due to these problems.

And then people figured out the key observation: the real root cause of software quality problems is not a simple technical thing. You can't fix software quality with technology; new coding rules (like where you put the braces in C code, or how many spaces you indent), or new programming languages help a little bit, but they don't solve the problem. Giving people a more efficient programming language (like Cobol -> Pascal, or C -> Java -> Python) only makes them get to unmaintainable software that's over budget and behind schedule even faster. The real root cause of the engineering crisis is sociological, and it is corporate culture. That's what TQM and such set out to fix. In order to have better quality (deliver artifacts that actually work, on time and on budget), you need to first define what you really want (what is the software supposed to accomplish? meaning write a requirements document), you need to measure how well you are doing (are we behind schedule or ahead? what fraction of projects fail?), you need to change your behavior (let's see whether coding goes faster if we turn the phone system off), and you need a feedback system (the elephant project worked really well, let's use the same design method for hippo and rhino). This is what TQM taught us. Engineers hated it, because suddenly you had psychologists, sociologists and bean counters telling them what to do. But it worked.

And it didn't go away. Instead, it became part of the culture. The direct outcome of it was the CMM a.k.a. Capability Maturity Model, and all that still underlies the software development processes that we use today.


----------



## rigoletto@ (Oct 5, 2019)

ralphbsz said:


> Absolutely, been there done that. Total Quality Management, Six Sigma, all that. It was horrible, and it was great. That might sound contradictory, but there is an explanation. The idea behind it came from the observation that quality (in particular of software artifacts) was getting horribly bad, much software was chock full of bugs or completely missed the requirements, and fixing and improving it was hard and expensive, sometimes so much that it was outright impossible. Many famous software projects of the 80s died a terrible death due to these problems.
> 
> And then people figured out the key observation: the real root cause of software quality problems is not a simple technical thing. You can't fix software quality with technology; new coding rules (like where you put the braces in C code, or how many spaces you indent), or new programming languages help a little bit, but they don't solve the problem. Giving people a more efficient programming language (like Cobol -> Pascal, or C -> Java -> Python) only makes them get to unmaintainable software that's over budget and behind schedule even faster. The real root cause of the engineering crisis is sociological, and it is corporate culture. That's what TQM and such set out to fix. In order to have better quality (deliver artifacts that actually work, on time and on budget), you need to first define what you really want (what is the software supposed to accomplish? meaning write a requirements document), you need to measure how well you are doing (are we behind schedule or ahead? what fraction of projects fail?), you need to change your behavior (let's see whether coding goes faster if we turn the phone system off), and you need a feedback system (the elephant project worked really well, let's use the same design method for hippo and rhino). This is what TQM taught us. Engineers hated it, because suddenly you had psychologists, sociologists and bean counters telling them what to do. But it worked.
> 
> And it didn't go away. Instead, it became part of the culture. The direct outcome of it was the CMM a.k.a. Capability Maturity Model, and all that still underlies the software development processes that we use today.



This sounds like the Ada "way" to me.


----------



## CraigHB (Oct 6, 2019)

ralphbsz said:


> And it didn't go away. Instead, it became part of the culture.



I was more making a joke about it than being technically accurate, but TQM actually extended over the whole of industry in the US.  Somebody was really successful at promoting an idea. 

At the time I experienced TQM I was an avionics engineer working for a large corporation designing flight control systems for commercial aircraft.  I didn't stay in that profession long enough to see what became of it there, but I did still see it in my next job for a large corporation, which revolved around systems design.  I don't believe TQM extended beyond the US; it might have, but I never saw it. 

I'm sure Asian industry (which owns most of the industrial pie now) has never heard of TQM and will never subscribe to such a thing.  It seems in foreign industry quality is barely part of the equation.


----------



## Phishfry (Oct 6, 2019)

TQM is now codified in ISO9001 standards.
They are used worldwide.
Quality is not something you can codify. It is a way of life.





ISO 9001, Lean, TQM and Six Sigma – same same or different? - PwC's Auditor Training
auditortraining.pwc.com.au


----------



## Phishfry (Oct 6, 2019)

Boeing's previous CEO was in line to take Jack Welch's job at GE.
Take a look at Boeing now and think about what Six Sigma did to that company.
They went from engineers running the company to bean-counters running the show.
Hence a total meltdown for a few pennies saved.
This is the current state of affairs at many companies.
Apparently at business school they don't teach what harm to a company's reputation actually costs.


----------



## CraigHB (Oct 6, 2019)

Well, based on the general state of industry in the US (dismal), there was definitely something wrong, though I don't think TQM had much influence on its success or failure.  There were other, more pertinent factors that killed industry in the USA (mainly the drive to cut labor costs).


----------



## Chris236 (Oct 6, 2019)

Phishfry said:


> What are your thoughts on the browser handling DNS?
> Mozilla using Cloudflare. Google with Chrome.



It is a very very very bad idea.

For starters, indeed, it should be an OS function.

Then, it doesn't solve any problem we have - but it creates a ton of new ones.

Basically, the aim is not security but to move your DNS stream from your provider to Google and Cloudflare.
This is bad on many fronts, but let's start with the worst: on the Internet, if you are not the paying customer, you are the product.

Remember that well. Many of you dislike your ISP (full disclosure: I work for one), but it is a company that operates in the same country you do and works by the same laws. Ideally, you have some political input into the legal framework it works under.  And you are a paying customer; you have a contract.

Nothing of that holds true for Google or Cloudflare.  You have no legal relation to them whatsoever, to most of the world's population they are foreign thugs, and they are completely unregulated.  And they don't get a penny from you - making you just a filet piece in their offering to the actually paying customers.  Another eternal truth is that There Ain't No Such Thing As A Free Lunch.  These companies need to make money - so they HAVE to sell you out, to someone, somehow.

My second objection is environmental. Basically, introducing cryptography where none is needed burns energy, and since this happens at large scale we can expect large-scale CPU power to be needed to implement it. Even if we ignore the client side, a very basic back-of-the-napkin calculation says that on the server side we currently need about 2 mW/user to provide DNS service (based on real-world data, probably a little low). With DoH I expect that to increase by a factor of 4 to 5. Extrapolated to 4 billion internet users, this gets us something like 30 MW of additional electrical power needed - the power requirement of a small town.  This is not quite Satoshi-Nakamoto-sized bad yet, but Google's brain fart still visibly increases world power usage, for no gain, at a time when we desperately need to reduce it - not to mention the thousands of additional servers that have to be built, transported, installed, de-installed, transported and scrapped every five to seven years for this alone.
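The back-of-the-napkin numbers above can be checked in a few lines of shell; the 2 mW/user baseline and the factor of 5 are the post's own assumptions, not measured values:

```shell
#!/bin/sh
# Sanity-check the estimate of DoH's extra power draw.
base_mw_per_user=2       # mW per user for plain DNS (the post's estimate)
factor=5                 # DoH cost multiplier (upper end of "4 to 5")
users=4000000000         # roughly 4 billion internet users

# Extra power = baseline * (factor - 1) * users, then convert mW -> MW.
extra_mw=$(( base_mw_per_user * (factor - 1) * users ))
extra_MW=$(( extra_mw / 1000000000 ))
echo "~${extra_MW} MW additional"    # ~32 MW, the same ballpark as "30 MW"
```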

My third objection may seem strange to you. Right now, in many countries DNS provides the angle to implement Internet censorship on behalf of those institutions that are powerful enough to force it. It is comparatively cheap, and the obstacle it establishes is high enough to make it acceptable to those requiring it but low enough that most of us can live with it.  It is foolish to assume that you can simply out-power those institutions. Most of us in the first world live one court decision away from something like an electronic Chinese wall implementing our local censorship regime. The widespread adoption of DoH might trigger that exact decision.

My fourth objection is quality.   Google and Cloudflare sit somewhere far away, while we, the ISPs, sit directly where you connect. We can, and often do, implement better, lower-latency DNS service than Google and Cloudflare possibly could. Additionally, the cryptography inherent in DoH will probably cause extra, quite visible latency penalties.  More DNS latency makes the Internet feel "slower" to you.  The cryptography will also increase complexity and, thus, operational risk (leading to lower service availability) and attack surface for intruders, making DNS services more vulnerable and so, again, less available and less trustworthy.
In sum, DNS will be slower, less trustworthy because of that, and it will fail more often.
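The latency point is easy to measure yourself; a rough sketch comparing a plain lookup against Cloudflare's public DoH endpoint (the resolver address 192.0.2.53 is a placeholder for your ISP's or your own resolver):

```shell
# Plain DNS: a single UDP round trip; dig reports the query time directly.
dig @192.0.2.53 example.com A | grep 'Query time'

# DoH: a TCP handshake, a TLS handshake, and HTTP framing on top of the
# lookup itself. curl's JSON API query against Cloudflare's endpoint:
curl -s -o /dev/null -w 'DoH total: %{time_total}s\n' \
    -H 'accept: application/dns-json' \
    'https://cloudflare-dns.com/dns-query?name=example.com&type=A'
```

With a warm HTTP/2 connection DoH amortizes much of that overhead, so treat a one-shot comparison like this as an upper bound on the penalty.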

And my fifth objection is auto-configuration and discovery. Google messes with a vast ecosystem of existing auto-configuration and discovery mechanisms. They barely have any answers to questions arising from that yet.

There are probably a few more, but that should suffice as an intro.


----------



## Phishfry (Oct 6, 2019)

Directly to your fourth point, I wanted to share this read from HN:








CloudFlare is ruining the internet (for me) > Slashgeek
www.slashgeek.net



We Americans are lucky to have speedy Cloudflare response times.
Not all the world is so lucky.


----------

