# Localizing Central DNS Records in Bulk Data; anyone?



## StreetDancer (Nov 7, 2020)

Hey there Experts!

StreetDancer here again! 

I am wondering if anyone could point me in the right direction as to the current authority for the central (root) DNS records, along with any documentation that goes with them. I am looking to replicate the central DNS server records in bulk and host them locally in a test environment for a project.

If anyone has any suggestions, please let me know! Thank you so much!

Best Regards,

~ TruthSword


----------



## SirDice (Nov 7, 2020)

StreetDancer said:


> current authority of Central DNS records


You mean the root DNS servers?









Root name server - Wikipedia (en.wikipedia.org)

Root Servers (www.iana.org)

StreetDancer said:


> I am looking to replicate the Central DNS Server Records in Bulk Data and replicate locally in a test environment for a project.


I don't think you realize how big they are.


----------



## Jose (Nov 7, 2020)

SirDice said:


> You mean the root DNS servers?
> 
> 
> 
> ...


Also, don't they just delegate the lookups? This is why we have glue records, no?

I don't think such a global database exists, and its usefulness is doubtful.

What's wrong with the DNS caching we've been using for the last 40 years or so?


----------



## StreetDancer (Nov 8, 2020)

SirDice said:


> You mean the root DNS servers?
> 
> 
> 
> ...


SirDice,

Yes sir; the root DNS servers. I also asked this question on freenode and was pointed at "Large Open DNS Resolvers" as a reference. I am by no means a specialist in anything computer-related. I just know the basic principles of hierarchical DNS servers and basic queries.

Regarding the size... I would like to know the exact size. I would also like to know the proper technique, GitHub project, FreeBSD port, etc. that would allow someone to make an offline copy of specific regions of the world, if that is even possible. Starting with the USA.

Have you ever seen someone accomplish such a feat?

Thanks man!

~ Brandon
~ TruthSword


----------



## StreetDancer (Nov 8, 2020)

Jose said:


> Also, don't they just delegate the lookups? This is why we have glue records, no?
> 
> I don't think such a global database exists, and its usefulness is doubtful.
> 
> What's wrong with the DNS caching we've been using for the last 40 years or so?


Jose,

I do not know exactly what you mean by glue records, sir. As far as DNS caching goes... you mean trusting that a remote server will properly answer queries at any given time, in any given situation, anywhere on the planet? The current records at time X for a resolve should be able to be "mined", "synced", or "slaved".

This is my take on it.

Best Regards,

~ Brandon
~ TruthSword


----------



## Jose (Nov 8, 2020)

Suppose I want to know the IP address for www.example.com. I would start by querying the root servers. The root server that I queried would delegate the lookup to the DNS server for example.com. Say that's a machine called dns.example.com. Problem is, I can't ask dns.example.com for the address to dns.example.com. The root server would then help me by returning a "by the way, the IP address for dns.example.com is 192.168.1.53". How does the root server know this? Because the administrator for example.com set up a glue record to point dns.example.com at 192.168.1.53 when s/he registered the domain.
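The walkthrough above can be sketched as a toy Python loop. The referral table, hostnames, and addresses below are hypothetical stand-ins for what a real parent server would return; no actual DNS traffic is involved:

```python
# Toy sketch of the referral-plus-glue walkthrough above.
# All data here is made up for illustration.

# What the parent (root/TLD) server hands back for example.com:
referral = {
    "authority":  {"example.com": "dns.example.com"},   # NS: who is authoritative
    "additional": {"dns.example.com": "192.168.1.53"},  # glue: where that server lives
}

# Simulated nameservers, keyed by IP address:
servers = {
    "192.168.1.53": {"www.example.com": "192.168.1.80"},  # dns.example.com's zone
}

def resolve(name, zone="example.com"):
    ns_name = referral["authority"][zone]      # step 1: learn the NS name
    ns_addr = referral["additional"][ns_name]  # step 2: glue breaks the chicken-and-egg
    return servers[ns_addr][name]              # step 3: ask that server for the answer

print(resolve("www.example.com"))  # → 192.168.1.80
```

Without the `additional` (glue) entry, step 2 would require resolving dns.example.com, which is exactly the lookup we are in the middle of.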





How To Configure Bind as a Caching or Forwarding DNS Server on Ubuntu 14.04 | DigitalOcean (www.digitalocean.com)


----------



## rootbert (Nov 8, 2020)

you can't just download "all the DNS data"; you can only get the data of nameservers you have access to, e.g. via DNS IXFR/AXFR zone transfers or the files/backend dump of the server software. I recommend unbound as a resolver with prefetch if you want fast responses. Otherwise, if you need a large dataset of domains, subdomains, entries, etc., I recommend generating them via a script.
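For the fast-resolver part, a minimal unbound.conf sketch with prefetch enabled might look like the following (`prefetch` and `access-control` are documented unbound options; the addresses are just examples). Where a nameserver actually permits transfers, an AXFR can be requested with e.g. `dig @ns1.example.org example.org AXFR`:

```
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    prefetch: yes    # refresh frequently-used cache entries before they expire
```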


----------



## SirDice (Nov 8, 2020)

Jose said:


> I would start by querying the root servers. The root server that I queried would delegate the lookup to the DNS server for example.com.


Actually, this gets delegated to Verisign, who manages the .com TLD. Top-level domains (TLDs) are delegated to other companies/organizations. IANA only manages and maintains the root zone.





Top-level domain - Wikipedia (en.wikipedia.org)






StreetDancer said:


> I also would like to know the proper technique, github project, freebsd port, etc that would allow someone to make an offline copy of specific regions even if possible (of the world).


I suggest you start by figuring out how DNS actually works because it's fairly obvious you don't have any idea how it is organized.


----------



## Jose (Nov 8, 2020)

SirDice said:


> Actually, this gets delegated to Verizon, who manages the .com TLD. Top Level Domains (TLD) are delegated to other companies/organizations. IANA only manages and maintains the root servers.
> 
> 
> 
> ...


Right you are, and Verizon/Verisign is the reason why I have

```
zone "COM" { type delegation-only; };
zone "NET" { type delegation-only; };
```
in all my named.conf(5) files. Read about the "Verislime" patch.




Re: Verisign Countermeasures - BIND and djbdns patches (archive.nanog.org)


----------



## StreetDancer (Nov 11, 2020)

Jose said:


> Suppose I want to know the IP address for www.example.com. I would start by querying the root servers. The root server that I queried would delegate the lookup to the DNS server for example.com. Say that's a machine called dns.example.com. Problem is, I can't ask dns.example.com for the address to dns.example.com. The root server would then help me by returning a "by the way, the IP address for dns.example.com is 192.168.1.53". How does the root server know this? Because the administrator for example.com set up a glue record to point dns.example.com at 192.168.1.53 when s/he registered the domain.
> 
> 
> 
> ...


Jose,

Thank you for explaining this. The glue record; is it equivalent to "A" records, "NS" records, etc.?


----------



## StreetDancer (Nov 11, 2020)

rootbert said:


> you can't just download "all the DNS data", you can only use the data of nameservers you have access to by using e.g. DNS IXFR/AXFR or the files/backenddump of the server software. I recommend using unbound as resolver with prefetch if you want fast responses. Otherwise, if you need a large dataset on domains, subdomains, entries etc. I recommend generating them via a script.


rootbert,

Regarding "all the DNS data": has anybody even attempted to accumulate a large portion of the contemporary bulk DNS? I feel as if the Department of Defense has a copy of all of this, and a Freedom of Information Act request would take up to 10-15 years to be produced.

Thank you for the reference on "DNS IXFR/AXFR" ... I will do some research. Unbound as a resolver with prefetch; duly noted. 

For the cases where I do need a large dataset of domains: what type of script would you recommend, sir?

Best Regards,

~ Brandon
~ TruthSword


----------



## StreetDancer (Nov 11, 2020)

SirDice said:


> Actually, this gets delegated to Verizon, who manages the .com TLD. Top Level Domains (TLD) are delegated to other companies/organizations. IANA only manages and maintains the root servers.
> 
> 
> 
> ...


SirDice,

Thank you for clarifying who manages the TLDs. And I most certainly agree with you that I do not understand how DNS actually works.

Best Regards,

~ Brandon


----------



## StreetDancer (Nov 11, 2020)

Jose said:


> Right you are, and Verizon/Verisign is the reason why I have
> 
> ```
> zone "COM" { type delegation-only; };
> ...


Jose,

Thank you for that information. Looks pertinent to DNS security!

Best Regards,

Brandon


----------



## SirDice (Nov 11, 2020)

StreetDancer said:


> has anybody even attempted to accumulate a large portion of the contemporary bulk DNS


There's no such thing as "contemporary bulk DNS". There is no single party that has it all. There's IANA at the top and hundreds of registrars are delegated to control and maintain their own portion. Each of those registrars typically has a lot of sub-registrars that actually do the work. 






Domain name registrar - Wikipedia (en.wikipedia.org)






StreetDancer said:


> I feel as if the Department of Defense has a copy of all of this and a Freedom of Information Act would take up to 10-15 years to be produced.


DNS records are publicly accessible, or else the whole DNS protocol would stop working. Why would the DoD have a copy?


----------



## Jose (Nov 11, 2020)

StreetDancer said:


> Jose,
> 
> Thank you for explaining this. The glue record; is this equivalent to "A" records, "NS", etc?


I didn't know this, so I had to look it up. Yes, "glue" records are just A or AAAA records returned in the "Additional" section of the DNS answer:
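To make the "Additional" section concrete: per the DNS message format (RFC 1035), the header's last four 16-bit fields count the records in the Question, Answer, Authority, and Additional sections. A stdlib-only Python sketch, with made-up ID, flags, and counts purely for illustration:

```python
import struct

def make_header(qd, an, ns, ar):
    # ID=0x1234, flags=0x8180 (standard response, recursion desired/available),
    # followed by the four section counts, all big-endian 16-bit fields.
    return struct.pack("!HHHHHH", 0x1234, 0x8180, qd, an, ns, ar)

def parse_header(data):
    _id, _flags, qd, an, ns, ar = struct.unpack("!HHHHHH", data[:12])
    return {"question": qd, "answer": an, "authority": ns, "additional": ar}

# A referral carrying glue: no Answer records, NS records in Authority,
# and the glue A/AAAA records riding along in Additional.
hdr = make_header(qd=1, an=0, ns=2, ar=2)
print(parse_header(hdr))
# → {'question': 1, 'answer': 0, 'authority': 2, 'additional': 2}
```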





Chapter 15 DNS Messages (www.zytrax.com)


----------



## StreetDancer (Nov 11, 2020)

Jose said:


> I didn't know this, so I had to look it up. Yes, "glue" records are just A or AAAA records returned in the "Additional" section of the DNS answer:
> 
> 
> 
> ...


Jose,

Thank you very much sir. So the answer is to harvest public glue records to build a large data set! 10-4! 


LORD Bless!

~ Brandon
~ TruthSword


----------



## Jose (Nov 12, 2020)

You'd have to get a list of every domain in every TLD first. What are you trying to accomplish?


----------



## ralphbsz (Nov 12, 2020)

The first important question: you have to explain why you want this data. The only use I can see is illegal or unethical, so at this point I have to suspect that you are a (very incompetent) black-hat hacker.

The second problem is this: there is no "directory listing" mechanism within DNS, other than zone transfer (AXFR), which is turned off on most DNS servers, and certainly on all the big ones. In the old days, the interactive version of the nslookup command (is there even an interactive mode today? I haven't used it in ~20 years) used to have an "ls" command, which made it look like you could do a directory listing. So you first did "nslookup example.com", which told you that the DNS server for that domain was dns.example.com. Then you ran "nslookup", entered "server dns.example.com", and followed it with "ls" or "ls example.com". That stopped working about 20 years ago, when AXFR became restricted because too many hackers were abusing it.

So today, you know that a ".com" TLD (top-level domain) exists. You can get a list of all TLDs; there are about 1000 of them. But how can you get a list of all domains within ".com"? You can't go to the Verisign servers and ask for an ls or AXFR. You can guess a few... you know that ibm.com and ford.com exist (one makes computers, the other makes cars), and you know that stanford.edu and harvard.edu exist. But already with GM (the car company) you have to start guessing: is it gm.com or generalmotors.com? And how would you ever guess the domain name for the University of Southern North Dakota at Hoople (hint: it doesn't exist, that university is a joke)?

Please tell us what you are really trying to accomplish, and perhaps we can help.


----------



## a6h (Nov 15, 2020)

hosts.txt? R.I.P.


----------

