# Squid proxy: transparent/intercept/tproxy



## deadeyes (Jan 11, 2014)

Hi all,

I'm currently experimenting with Squid. The idea is the following: during office hours you can only go to certain sites; outside these hours you can browse to whichever site you want. This applies to specific IPs or IP ranges/subnets, with no client-side configuration. This is how the network looks:
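The office-hours restriction itself would presumably be done with Squid's time-based ACLs. A minimal squid.conf sketch (the domain and subnet are placeholders, and the rule set is untested):

```
# squid.conf sketch: restrict browsing during office hours (placeholders)
acl office_hours time MTWHF 09:00-17:00
acl allowed_sites dstdomain .example.com
acl lan src 192.168.1.0/24

# During office hours, only the allowed sites; outside them, anything
http_access allow lan office_hours allowed_sites
http_access deny  lan office_hours
http_access allow lan
http_access deny  all
```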


```
client --- --- router/firewall --- internet
             |
       squid proxy
```

The router routes packets for the client to the Squid proxy. The client has the Squid proxy as the gateway. I have FreeBSD 9.2 on the proxy. Squid 3.4 compiled from source with the --with-nat-devpf and --enable-pf-transparent options. Firewall is PF.

Somewhere in the process I got confused by the different ways to configure Squid and PF. I need intercept or TPROXY as the http_port option. From what I understand, intercept will see packets that are destined for a webserver on the internet on port 80. The Squid proxy server will then set up its own connection to the webserver with its own IP.
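If I have it right, the squid.conf side of intercept mode is then just this (a sketch; 3128 matches the rdr rules I list below):

```
# squid.conf sketch: intercept mode (requires the --enable-pf-transparent
# and --with-nat-devpf build options mentioned above)
http_port 3128 intercept
```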

Therefore this won't work with HTTPS, unless you make use of SslBump. With SslBump the client establishes a secure connection with the proxy server, which in turn creates an SSL connection to the webserver. So this is a man-in-the-middle. For that reason, and because you need to install a root certificate in the client browser, I don't want to make use of this method.

So I need TPROXY here. If I understand it correctly, TPROXY acts as the client, sending IP packets to the webserver with the client's IP as the source. I'm not clear on whether the client still has an SSL connection to the proxy server (I would think it does) and whether that still needs SslBump anyway (and thus still needs a certificate added on the client side? TPROXY should be really transparent).

However, any way I configure PF, it seems the packets either don't arrive at the Squid proxy or something goes wrong with them. I read that I need ipdivert.ko, which is available in my kernel.

I'm currently fiddling around with these rules:


```
rdr pass on $lan_if inet proto tcp from 192.168.1.32 to any port 443 -> 127.0.0.1 port 3129
rdr pass on $lan_if inet proto tcp from 192.168.1.32 to any port 80 -> 127.0.0.1 port 3128
```
OR

```
pass in quick log on em0 proto tcp from 192.168.1.32 to any port 80 divert-to localhost port 3128
pass in quick log on em0 proto tcp from 192.168.1.32 to any port 443 divert-to localhost port 3129
```
It's unclear to me what the difference is between the divert-to and rdr rules.

Might there be a problem with these rules because I only have one interface? Please correct me if I'm wrong at any point. What way should I go here to get the explained goal?

Thanks in advance for any help!


----------



## deadeyes (Jan 17, 2014)

It seems like the FreeBSD community's knowledge of Squid is pretty limited.

Anyway, I got a few answers on my setup and I want to share so hopefully it's of help to someone out there.

The conclusion of it all is this: fully transparent doesn't exist.

You ALWAYS have to configure something on the client if you want to limit on domain names, content, and so on:

- autoconfigure browsers using WPAD
- configure browsers to use the proxy
- add the root certificate to the browser

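For the WPAD option, the file the browsers fetch is a small PAC script. A minimal sketch (the proxy address 192.168.1.5:3128 is a placeholder for the Squid box's LAN address and http_port):

```javascript
// wpad.dat sketch: send web traffic through the proxy, everything else direct.
// 192.168.1.5:3128 is a placeholder; only plain string checks are used, so no
// PAC helper functions (shExpMatch etc.) are needed.
function FindProxyForURL(url, host) {
  if (url.substring(0, 5) === "http:" || url.substring(0, 6) === "https:") {
    return "PROXY 192.168.1.5:3128";
  }
  return "DIRECT";
}
```

For DNS-based WPAD, serve this as `http://wpad.<your domain>/wpad.dat` with the MIME type `application/x-ns-proxy-autoconfig`.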
The reason is that you need to be able to read the content of the packets to see, for example, the domain that is contacted (as one IP can host multiple websites). With SSL, all information sent for the HTTP protocol is encrypted, including the Host header. So Squid can never know which site you want to visit (think vhosts in Apache). SSL has an extension, server_name, which adds the domain name during the SSL negotiation (so before the connection is encrypted). I don't know if Squid supports this.
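For completeness: Squid releases newer than mine (3.5+) reportedly can use that server_name (SNI) extension through SslBump's peek/splice steps, classifying the connection by domain without decrypting it. A hedged sketch, untested:

```
# squid.conf sketch (Squid 3.5+ only): read the SNI, then tunnel
acl step1 at_step SslBump1
ssl_bump peek step1      # peek at the ClientHello to learn the SNI
ssl_bump splice all      # then tunnel without decrypting
```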

If you don't want to check domain name, content, and so on, you can use the CONNECT method, which just tunnels traffic. It doesn't do anything more.

Whether you use TPROXY or intercept (transparent -> NAT) doesn't matter: you need SslBump to get SSL working. Basically SslBump regenerates the certificates for the websites and signs them with its own certificate. This means you have to add that certificate as a root certificate in the clients' browsers. So the encryption from the real website ends at the Squid host. This makes some things insecure, as there is a point where packets can be eavesdropped (legal and ethical issues; at the Squid proxy). It also leaves it to the Squid server to decide whether a certificate is actually valid. See these directives:

```
sslproxy_cert_error allow all
# Or may be deny all according to your company policy
# sslproxy_cert_error deny all
```
If a certificate is not valid you won't see it when Squid hands you the traffic. Remember that a lot of sites use self-signed certificates, which means you either block your users from those sites or allow them to browse to sites that differ from whom they claim to be.

Whether it's TPROXY or intercept, Squid will work the same. TPROXY is just a fully transparent proxy, while intercept uses NATting, which brings some security issues. With TPROXY no NATting is done; Squid spoofs the IP address of the client.

Your traffic from and to HTTP and HTTPS sites should go through the Squid box. In my case the Squid box is the gateway for the client. For the Squid box, my firewall/router is the gateway. And on the firewall/router itself I added a route so that traffic for the client goes to the Squid box.

Try to do one thing at a time. First get HTTP running. Then copy the firewall rules and create the configuration for SslBump.

I got it to work with this combination:

- intercept
- SslBump
- rdr rules

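A minimal sketch of that combination in one place (the CA certificate path is an assumption; adjust to your own layout):

```
# /etc/pf.conf fragment: redirect the client's web traffic into Squid
rdr pass on $lan_if inet proto tcp from 192.168.1.32 to any port 80  -> 127.0.0.1 port 3128
rdr pass on $lan_if inet proto tcp from 192.168.1.32 to any port 443 -> 127.0.0.1 port 3129

# squid.conf fragment: intercept ports plus SslBump (Squid 3.4 syntax)
http_port  3128 intercept
https_port 3129 intercept ssl-bump generate-host-certificates=on \
    cert=/usr/local/etc/squid/squidCA.pem
ssl_bump server-first all
```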
If you have issues getting PF to send your packets to Squid, stop Squid, start `nc -l <portnr>`, and try again with the browser. If output comes out, the translation is working. Make SURE you remove skip rules in the PF configuration!

It did not work with divert-to, however.


----------

