# LAN Development 'Domain' SSL Setup



## daBee (Aug 6, 2017)

I want to generate a self-signed certificate for LAN-only development and testing.  The virtual host will be alpha.local, and it is only for nginx serving.

The handbook requires a machine name for a virtual host, i.e.:  
```
Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org
```

Can I just use alpha.local or does it need a machine name for the cert?  Again, this is for LAN development only, self-signed.  

Also, I'm going to use the port's OpenSSL instead of the base.  Anything bad I should know about?  Eventually I want to use the same setup for the public side once I publish this server to the interwebs.  

Cheers


----------



## SirDice (Aug 7, 2017)

The CN needs to be the machine's exact fully qualified domain name. That is, hostname plus domain.


----------



## daBee (Aug 8, 2017)

OK, so how do I qualify the FQDN for a machine that's local and meant to switch over to public later on?  Let's say the name of it is alpha.  One of my domains is domainb.ca but it's currently pointing to another server that's publicly live now.  Can I change /etc/rc.conf to show 
```
hostname="alpha.domainb.ca"
```
so that it works locally?

I want to use this server for development testing outside my workstation in the meantime.
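
For context, my thinking is that a hosts override on each workstation would handle name resolution while the real DNS keeps pointing at the live server. Something like this, with 192.168.1.10 standing in for the dev box's LAN address:

```
# /etc/hosts on the client machines (illustrative address)
192.168.1.10    alpha.domainb.ca alpha
```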


----------



## usdmatt (Aug 8, 2017)

I don't see any reason why you can't generate a self-signed certificate for alpha.local. The certificate doesn't really care what the common name is; the browser just wants it to match the web address you typed in. Most certificates on the net aren't for a machine's FQDN, so alpha.local is technically no different from google.com.

You'll always get a browser warning for a self-signed cert, of course, unless you jump through some hoops. It's worth considering installing security/py-certbot and getting a genuine Let's Encrypt certificate. It just means making the nginx site globally accessible (at least while requesting the cert), and possibly adding a local DNS entry so you can access the server internally via its private address.
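
If you do stick with self-signed, a sketch of the openssl invocation is below. The paths and names are illustrative, `-addext` needs OpenSSL 1.1.1 or newer, and note that modern browsers check the subjectAltName rather than the CN, so include the SAN:

```shell
# generate a key and a self-signed cert for alpha.local, valid ~1 year
# (paths and the SAN entry are illustrative; adjust to your layout)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=alpha.local" \
    -addext "subjectAltName=DNS:alpha.local" \
    -keyout /tmp/alpha.key -out /tmp/alpha.crt

# confirm the subject that ended up in the cert
openssl x509 -in /tmp/alpha.crt -noout -subject
```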


----------



## daBee (Aug 8, 2017)

I went with alpha.domainb.ca and everything seems OK so far.  I hear you on the `google.com` point, but I just wanted this to be simple and done with until I implement a real SSL certificate.  I don't want this site visible to anybody either, as that just creates problems.  This is dev only for now, and I'm forcing everything through the cert.


----------



## usdmatt (Aug 8, 2017)

Yeah, I appreciate you want the site local only; I just thought I'd mention Let's Encrypt as it's just as easy as generating a self-signed cert, and you only really need the site Internet-accessible for a couple of minutes while you request/renew the certificate. You then don't have to deal with any browser warning screens.


----------



## daBee (Aug 8, 2017)

On MacOS that's just a "trust cert" box first time around, isn't it?  

I can't seem to get nginx running the socket anyway.  Currently working on that today.


----------



## usdmatt (Aug 8, 2017)

Not sure, I don't use a Mac. I know it's getting more and more of a pain with the big browsers making the 'proceed anyway' button harder and harder to get to.

Regarding nginx, here's what I do for my systems to try and keep a fairly clean config (might be a good starting point) -

/usr/local/etc/nginx/nginx.conf

```
user  www www;
worker_processes  4;

events {
    worker_connections  1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    gzip on;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_dhparam /data/ssl/dhparam.pem;
    keepalive_timeout  70;

    include sites/*;
}
```
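
The nginx.conf above points `ssl_dhparam` at /data/ssl/dhparam.pem; if you don't have that file yet, it can be generated with openssl. A quick sketch (the output path is illustrative, and 1024 bits is used here only to keep the example fast; use 2048 or more on a real server):

```shell
# generate DH parameters for nginx's ssl_dhparam directive
# (1024 bits only so this finishes quickly; use >= 2048 in practice)
openssl dhparam -out /tmp/dhparam.pem 1024

# verify the generated parameters are well-formed
openssl dhparam -in /tmp/dhparam.pem -check -noout
```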

Then a config file for each site
/usr/local/etc/nginx/sites/website1.com

```
server {
    listen 80;
    server_name website1.com;

    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name website1.com;

    root /data/websites/website1;

    ssl_certificate /data/ssl/cert.crt;
    ssl_certificate_key /data/ssl/key.pem;

    access_log /var/log/nginx/website1/access.log;
    error_log /var/log/nginx/website1/error.log;

    include templates/standard-website;
}
```

And a template containing basic settings which tend to be the same for each site.
(May as well just be part of the website config file unless you have lots of sites)

/usr/local/etc/nginx/templates/standard-website

```
charset utf-8;

index  index.php index.html index.htm;

error_page   500 502 503 504  /50x.html;
location = /50x.html {
    root   /usr/local/www/nginx-dist;
}

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        templates/fastcgi_params;
}

location ~ /\. {
    deny all;
}

location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}
```


----------



## daBee (Aug 8, 2017)

Ya, I just got two vhosts on that server rolling on port 8081 with SSL.  Whoohooo!  Both prompted the dialog box questioning the unfamiliar cert, but the trust choice persists.  It also asked me to authenticate in the OS to approve that trust choice.  So that should guard against anything else.

I've lightly chased up a way to force SSL, which seems to be a practice favoured by Google these days: forcing the full site under SSL.

I've been playing around with some Ruby gems to automate a _blank website template, which takes a parameter and populates everything, including nginx.conf with proper servers under servers/*.conf for further formatting later.  Wonderful stuff for trials using `passenger` and `rbenv`.  It also modifies /etc/hosts for entries, which requires special access through root... I forget how I got it to work, but it's magic.

Moving right along...


----------

