# Any ideas how to fake SSL certificates for use on local computer?



## Snurg (Dec 15, 2017)

I have read an interesting article about session recording.
There are quite a lot of companies offering such services; see an (incomplete) market overview here.

Now I got curious, because the list compiled by Princeton University only covers the Alexa top 10,000 websites.
*I now ask myself, is it possible to locally fake SSL connections?*

My idea is to locally redirect/fake the DNS data so that every request to these session-recording companies' servers gets redirected to a locally jailed web server, which could then decrypt the spy packets using the fake DNS/SSL information to find out the referers etc.

*The idea behind this is to build a kind of "data collector daemon" that detects when sites try to record your sessions.
So you can discover, using the "data collector's" web interface, which of the sites you visit are/were trying to keylog you, etc.*

Edit:
To make clearer what I mean: this page contains a list of the dozen most "popular" session recording servers' host/domain names.
One can locally redirect requests to one of these hosts to an internal server of one's own.
As long as the requests are HTTP only, finding out the referer is no problem.
But if they are HTTPS, all data except the hostname are encrypted.
Thus it would be ideal if there were a way to create a fake self-signed certificate that one can use to intercept, decrypt and disclose the contents of the referer fields etc.
Ideally this could be done as a collective effort, like collecting evidence reports of session recorder eavesdropping.
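
The redirection part of this is simple. As a sketch (the hostname and the loopback alias are placeholders, not a real recorder domain), a single /etc/hosts entry is enough to send such requests to a local server:

```
# /etc/hosts -- redirect a (hypothetical) session-recording host
# to a locally jailed web server listening on a loopback alias
127.0.0.2    recorder.example.com
```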

Why? Because I think people deserve to know who is spying on them.

Edit 2:
Maybe another approach could be to modify the browser so that requests to these spy sites in the list are always made over plain HTTP, making them easier to analyze.
Maybe a small plugin could be sufficient...


----------



## ShelLuser (Dec 15, 2017)

Of course it's possible. A certificate is nothing more than a public key signed by a party your environment (usually a browser) already trusts. So it's easy to create your own root CA certificate, ensure that your environment trusts it, and then create a certificate request for, say, google.com which then gets signed by your CA.

If you then direct traffic for google.com to your own web server (for example using /etc/hosts or a local fake DNS) and use the certificate above, you'll get a perfectly encrypted local connection.
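
A minimal sketch of those steps with OpenSSL (the filenames and CN values here are arbitrary choices, not anything mandated; note that current browsers also require a subjectAltName on the host certificate):

```
# 1. Create a private root CA (key + self-signed certificate)
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=My Local Root CA" \
    -keyout ca.key -out ca.crt -days 365

# 2. Create a key and a certificate request for the host to impersonate
openssl req -nodes -newkey rsa:2048 -subj "/CN=google.com" \
    -keyout google.com.key -out google.com.csr

# 3. Sign the request with the CA, adding the subjectAltName
#    that modern browsers check in addition to the CN
printf 'subjectAltName=DNS:google.com\n' > san.ext
openssl x509 -req -in google.com.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -extfile san.ext -out google.com.crt -days 365
```

Any client that trusts ca.crt will then accept google.com.crt on whatever server it is redirected to.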


----------



## Snurg (Dec 15, 2017)

Maybe it might be as easy as just creating the .crt and .key files by using a command like

```
echo "Creating a new Certificate ..."
# SSLNAME and SSLDAYS must be set beforehand, e.g.:
SSLNAME=myserver   # base name for the .key and .crt files
SSLDAYS=365        # validity period in days
openssl req -x509 -nodes -newkey rsa:2048 -keyout $SSLNAME.key -out $SSLNAME.crt -days $SSLDAYS
```
as shown here?
I apologize for my stupid questions.
I have no real clue about encryption etc., and I do not really want to become deeply familiar with the technical details.
If it is sufficient to create the .crt and .key files this way for the web server, then I'd be happy!


----------



## ShelLuser (Dec 15, 2017)

Snurg said:


> Maybe it might be as easy as just creating the .crt and .key files by using a command like


That'll work, but only if you also add this certificate to your pool of trusted CAs.
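
How to do that depends on the client. For Firefox, for instance, the NSS certutil tool (security/nss) can import the CA into a profile's certificate database; the profile directory name below is a placeholder for whatever your actual profile is called:

```
certutil -A -n "My Local Root CA" -t "C,," -i ca.crt \
    -d "$HOME/.mozilla/firefox/PROFILE.default"
```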


----------



## Snurg (Dec 15, 2017)

ShelLuser said:


> ...add this certificate to your pool of trusted CAs.


This seems to be easier than I thought... the Mozilla Wiki has a sweet page about it.
And there even seems to be a trick to do something like that server-side.

Thank you so much for guiding me where to start reading


----------



## obsigna (Dec 15, 2017)

I would install a transparent HTTP/HTTPS proxy on the gateway -- www/squid can be used for this; as a matter of fact, I have this part working on my gateway. It does intermediate TLS decryption/encryption using a self-signed certificate chain. The self-signed root certificate must be installed on the local clients so that they accept the re-encrypted traffic.
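
For reference, the bumping part of such a setup might look like this in squid.conf (a sketch for Squid 3.5+; the port number and the CA path are assumptions, not my actual config):

```
# Intercept HTTPS and re-encrypt it with host certificates
# generated on the fly from a local self-signed CA
https_port 3129 intercept ssl-bump \
    cert=/usr/local/etc/squid/ca.pem \
    generate-host-certificates=on

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```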

Then you would use the eCAP service facility of Squid to implement full traffic logging -- I haven't explored this part yet; however, there seems to be a third-party module which stores all traffic in a MongoDB -- http://www.squid-cache.org/Misc/ecap.html

The eCAP protocol is documented here: http://www.e-cap.org/docs/


----------



## getopt (Dec 15, 2017)

Snurg said:


> it would be ideal if there is a way of creating a fake self-signed certificate that one can use to intercept, decrypt and disclose the contents of the referer fields etc.


For analysing HTTPS there is www/mitmproxy. It's easy to use.
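
A typical invocation looks like this (only the basic flags; check the mitmproxy docs for your version, and note that clients must trust mitmproxy's generated CA certificate):

```
# interactive inspection as an explicit proxy on port 8080
mitmproxy -p 8080

# or record flows non-interactively for later analysis
mitmdump -p 8080 -w flows.out
```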


----------



## ekingston (Dec 15, 2017)

I'm curious: once you get your SSL-decrypting proxy set up, what are you going to use to analyse the flows of data going through it?

I ask because I'm thinking of a variation on this idea for home use. Specifically, I would like to force all the IoT things to go through a transparent proxy (because most of the things I have ignore autoproxy and only try to connect directly). My actual long-term goal is to figure out what they are doing and set up firewall rules to limit outbound traffic to what is needed. This way, should one get compromised, at least it would have limited capabilities outside my network.
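
The forcing-through-the-proxy part could be sketched in pf.conf roughly like this (the interface name, proxy address and port are all assumptions for illustration):

```
iot_if = "igb1"
proxy  = "192.168.0.2"

# redirect web traffic from the IoT segment into the transparent proxy
rdr on $iot_if proto tcp to port { 80 443 } -> $proxy port 3129

# drop anything else the things try to send out,
# except DNS and the redirected proxy traffic
block in on $iot_if all
pass in quick on $iot_if proto udp to $proxy port 53
pass in quick on $iot_if proto tcp to $proxy port 3129
```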


----------



## Snurg (Dec 16, 2017)

Thank you so much obsigna, getopt and ekingston for that input!

obsigna
This is a great idea! Listening to and recording the traffic between the browser and the session-recording company servers could help explore what exactly happens and develop insights, which in turn can help detect common patterns etc.

I used squid myself until I found out that it is not able to do simple SNI passthrough (i.e. without decryption). There are a few reasons why I do not want the reverse proxy to terminate the SSL connection instead of the web server: it would prevent my web apps from using secure cookies etc., and it would force me to concentrate LOTs of certificates at one point, the Squid.

That is the reason why I switched to haproxy. And then comes in what getopt and ekingston say...

getopt
That mitmproxy seems to be a cool thing. It could be ideal for such investigations, for which Wireshark would possibly be overkill.

ekingston
Your project is another thing which could well be done in a dedicated jail whose sole purpose is to proxy and monitor the traffic of a particular client network.

*I think I need to explain the background what I am working on and what I am going to implement. (Sorry for tl;dr)*
My current project is a simple-to-use jail manager that does what I need.
For thorough testing I need to build a few test application jails.
And these should be actually useful applications, not just useless demos.
The "killer features" of such a jail manager need good examples that show how things work and make people want to use it.

Thus I will make, among other jail application examples, a test server jail which takes requests to the mentioned session-recording company servers, logs them, and returns an empty document. For the user there will be a simple CGI web interface which reads the logs and lists the domains whose pages tried to contact the session-recording companies' servers.
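
The logging stub itself can be tiny. A hypothetical sh CGI script for that jail (the log path, environment variable and field layout are my own choices) that records the Referer and answers with an empty document:

```
#!/bin/sh
# Log when and on behalf of which page a session-recording host was
# contacted. In the jail the log would live under /var/log; the default
# here is relative so the sketch can be run anywhere.
LOG="${SPYWATCHER_LOG:-spywatcher.log}"

printf '%s host=%s referer=%s\n' \
    "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" \
    "${HTTP_HOST:-unknown}" "${HTTP_REFERER:-unknown}" >> "$LOG"

# Minimal CGI response: header, blank line, empty body
printf 'Content-Type: text/plain\r\n\r\n'
```

The CGI web interface then only has to read and aggregate this log.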

Creating this using the jail manager will look something like this:

```
# jailboy create spywatcher -m apache -rproxywww='list of spying servers domains' -p resolv=host -p access=host -yq
```
This will create and start a jail named spywatcher, populated with programs and configured by the jail creation method "apache". This method, actually a Perl snippet, takes care that Apache gets installed and configured together with all the modules necessary for reverse proxying and Cloudflare.
The domains listed in the rproxywww parameter will be configured to be reverse-proxied (HTTP and HTTPS) to that jail by the jailed haproxy.
Apache's httpd.conf will likewise be preconfigured for these domains with a simple "hello world" stub site.
The resolv parameter tells the jail manager that it should configure the domain names as locally-accessible only (overriding DNS resolution).
The access parameter tells it that haproxy and PF should be configured so that the jail and the sites it "hosts" can only be accessed from the host itself.
The -y option (accept all defaults) in conjunction with the -q (quiet) option completely turns off interactive mode (good for script-controlled jail management).
Then I will do some testing and refining. When all works to my satisfaction, I will write a new method, say "apachespywatcher", which overloads the "apache" create/clone/config/delete methods.
_So you can create, configure and start a completely functional application jail with a single command like the one shown above._

This creation method then does all the installation and configuration work, including creating and installing the fake certificates on the jailed server.
This way it becomes possible not only to create blank jails, but to actually set them up in working configurations that can be created, cloned, (re)configured and destroyed in a snap, using simple commands.
_This way it also becomes unnecessary to deal with actual IP numbers, interfaces and networks, because the jail manager also takes care of the PF configuration._
And so I am collecting ideas and examples for useful jail applications for which I can supply sample methods for my jail manager.

For example, I will supply a method which allows one to install, configure, use and delete the most popular browsers in jails, and even to dynamically spawn separate jailed browser instances.
(Conveniently, the create/config methods for these jailed browsers could then also take care that these browsers accept the fake certificates created by the abovementioned "apachespywatcher" method...)

A jailed recording/logging Squid, mitmproxy, Wireshark, or whatever else can monitor and filter what is going on in a local network, be it for IoT or other purposes, should probably also be part of the collection.
But I have no idea yet what the best programs for that purpose are. I will have to do some research. Every suggestion helps.


----------



## Deleted member 30996 (Dec 16, 2017)

I was wondering why you didn't use www/mitmproxy myself.


----------



## obsigna (Dec 16, 2017)

Snurg said:


> Thank you so much obsigna, getopt and ekingston for that input!
> 
> obsigna
> This is a great idea! Listening and recording the traffic between the browser and the session recording company servers could help explore what exactly happens and to develop insights, which in turn can help detecting common patterns etc.
> ...



With version 3.5, Squid got the splice and peek features, which let Squid make decisions based on the SNI presented by the client. However, I am not sure whether this could be of any help in your use case -- https://wiki.squid-cache.org/Features/SslPeekAndSplice
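
A sketch of what such SNI-based decisions look like in squid.conf (the watch-list domain is a placeholder): peek at the TLS ClientHello to learn the SNI, bump only hosts on a watch list, and splice (pass through untouched) everything else.

```
acl step1 at_step SslBump1
acl spyhosts ssl::server_name .recorder.example.com
ssl_bump peek step1
ssl_bump bump spyhosts
ssl_bump splice all
```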


----------



## Snurg (Dec 16, 2017)

obsigna said:


> With version 3.5, Squid got the splice and peek features...


Ahh... Back when I did my research regarding SNI, Squid was at 3.4. I am still a bit hesitant to switch back to Squid, because the info you linked to hints that the new feature is not yet completely reliable.
But it doesn't really matter which proxy is used... it's just a matter of changing the templates to configure either proxy; the script will do the rest. The difficult thing is finding out how to configure things so that one can write templates...



Trihexagonal said:


> I was wondering why you didn't use www/mitmproxy myself.


Didn't know that program until getopt's post today. Will look into it definitely!


----------

