Developing in Docker: HTTPS FTW
I've been writing a lot about using Craft CMS in Docker, but a comment on this article rightly mentioned that I haven't touched on the use of HTTPS at any point. So let's fix that.
Why HTTPS?
I don't want to discuss the merits of transport encryption here because, if you've found this article, you probably already know why you need it.
One point I do want to quickly make, though, is why we might want to use it during local development. Recently a lot of services, integrations and browser features have begun demanding that the user be in a secure context before they can be used. From Facebook to Chrome, it's becoming increasingly important for us to have access to a testing environment which implements HTTPS.
It's also a good idea to test in an environment which is as close to your production setup as possible, and you do want to have HTTPS set up in production, right?
In this article I'm going to run through a few approaches to implementing SSL when working with containers, which should cover you for the majority of scenarios.
Local HTTPS Using Nginx
The hoops that you need to jump through in order to get HTTPS working in each of your environments are often not linked to the application you are building. Given that the Docker images you build should be tailored specifically to your application, I usually opt to handle SSL/TLS termination in a separate container which simply proxies traffic through to my application-specific containers.
For local development we can achieve this using a combination of nginx-proxy and some self-signed certificates.
Let's start with a Docker container which represents our application:
mkdir app
wget -O app/cat.jpg https://secure.i.telegraph.co.uk/multimedia/archive/03188/maru_the_cat__3188629k.jpg
echo '<img src="./cat.jpg">' > app/index.html
Create app/Dockerfile:
FROM nginx:latest
COPY cat.jpg /usr/share/nginx/html/cat.jpg
COPY index.html /usr/share/nginx/html/index.html
Test it:
docker build --tag cat-box ./app
docker run --rm -p 80:80 cat-box
Visit http://localhost and enjoy your cat in a box.
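If you'd rather check from the terminal, a quick curl should confirm that the container is serving (assuming you have curl installed):
curl -I http://localhost #Expect a 200 OK from nginx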
Let's allow our new friend to rest for now: Ctrl+c
Now that we have an application that we'd like to secure, we need to create our SSL termination layer, and inside it we need a self-signed certificate.
mkdir ssl
openssl req \
-newkey rsa:2048 \
-x509 \
-nodes \
-keyout ssl/localhost.key \
-new \
-out ssl/localhost.crt \
-subj /CN=localhost \
-reqexts SAN \
-extensions SAN \
-config <(cat /usr/lib/ssl/openssl.cnf \
<(printf '[SAN]\nsubjectAltName=DNS:localhost')) \
-sha256 \
-days 3650
Unfortunately you probably can't just copy and paste the above. You'll need to find the location of openssl.cnf on your system. For macOS users it's probably at /System/Library/OpenSSL/openssl.cnf; for Ubuntu users it'll probably be at /usr/local/ssl/openssl.cnf or /usr/lib/ssl/openssl.cnf.
If you get the path wrong it'll error out, so keep trying until you find it. In the unlikely event that you don't have OpenSSL installed at all, you'll need to install it. ☺️
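One trick that can save some hunting: ask OpenSSL where its configuration directory is.
openssl version -d #Prints something like OPENSSLDIR: "/usr/lib/ssl" - openssl.cnf lives in that directory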
Create ssl/Dockerfile:
FROM nginx:latest
COPY localhost.crt /etc/nginx/localhost.crt
COPY localhost.key /etc/nginx/localhost.key
COPY default.conf /etc/nginx/conf.d/default.conf
Finally we just need to create that ssl/default.conf nginx config to forward all traffic to our application and use the self-signed certificate we generated:
server {
    listen [::]:443 ssl http2; #http2 replaces the old spdy parameter, which current nginx releases no longer support
    listen 443 ssl http2;

    server_name localhost;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
    ssl_prefer_server_ciphers on;

    ssl_certificate /etc/nginx/localhost.crt;
    ssl_certificate_key /etc/nginx/localhost.key;

    location / {
        proxy_pass http://application:80;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
We're nearly there, promise.
Notice that the nginx config is just listening on port 443 and forwarding all traffic to a host called application using standard http.
We just need to link these containers together now and give it a test.
docker build --tag ssl-proxy ./ssl
docker network create cat-box-network
docker run --rm -d --network cat-box-network --network-alias application cat-box
docker run --rm -d -p 443:443 --network cat-box-network ssl-proxy
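Before heading to the browser you can give it a quick sanity check from the terminal; the -k flag tells curl to accept our self-signed certificate:
curl -kI https://localhost #Expect a 200 OK served over HTTPS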
Visit https://localhost to see a lovely warning message. This is telling us that our certificate wasn't created by a trusted authority. Indeed, we created it and we are not very trustworthy. Your browser should give you the option to continue anyway, so do so.
And now you can see our friend Box-Cat again, this time served securely.
If you'd like to avoid the warning message in your browser there are ways to do so, but they're a bit beyond the scope of this article. Google "trust self-signed certificate" and you'll find the right path.
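As a quick pointer for macOS users, most of those guides boil down to adding the certificate to the system keychain as trusted. This is just a sketch and entirely optional:
sudo security add-trusted-cert -d -r trustRoot \
-k /Library/Keychains/System.keychain ssl/localhost.crt #Trust our self-signed cert system-wide
You may need to restart your browser for it to take effect.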
You can now re-use this nginx container to forward traffic to any old application that you're running locally - you just need to make sure the proxy is on the same Docker network as your application and that the container listening on port 80 has a network alias of application set.
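For example, to put the same proxy in front of some other project you have lying around (the image and network names here are hypothetical):
docker network create my-other-network
docker run --rm -d --network my-other-network --network-alias application my-other-app #Any image serving HTTP on port 80
docker run --rm -d -p 443:443 --network my-other-network ssl-proxy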
Production HTTPS Using Trusted Certificates
You can use exactly the same method as we've just described for production deployments too. You'll just want to tweak a few bits:
- Generate your certificates via a trusted certificate authority. You can do this easily using Let's Encrypt's CLI tools (there's a sketch of that after this list) or you can purchase your certificates from one of those people who still sell them.
- Play with the nginx ssl terminator's config to allow connections via HTTP but redirect to HTTPS.
- Wrap the whole lot in a docker-compose file for easier management.
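For the first point, here's a sketch of what issuing a certificate with the official certbot image might look like; the domain and email are placeholders, and I'm assuming nothing else is bound to port 80 on the server while it runs:
docker run --rm -p 80:80 \
-v "$(pwd)/letsencrypt:/etc/letsencrypt" \
certbot/certbot certonly --standalone \
--non-interactive --agree-tos \
-m your@email.com \
-d catbox.example.com #Certificates end up in ./letsencrypt on the host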
Production HTTPS Using Nginx Proxy + Let's Encrypt Helper
This solution is useful if you don't want to manage any certificates yourself or you have multiple applications running on a single server.
We're going to make use of two docker containers which do magical things.
The first, nginx-proxy, is an nginx reverse proxy which listens for container creation events fired by the docker process. When these events are detected the proxy will check for annotations on these containers and use them to re-write its own config files in order to route traffic to them. Magic.
The second, letsencrypt-nginx-proxy-companion, works similarly, but when it detects new containers it generates Let's Encrypt SSL certificates instead.
Combining these two elements allows us to set up a single HTTP and HTTPS entry-point on a server and then simply spin up other containers which will automatically get traffic routed to them and certificates generated with no effort required.
To follow along with the rest of this article you'll need a server and a domain name. You'll need your chosen domain name pointed at your server's IP and access via SSH before we continue. I'll wait here until you're done...
Good job.
I'm going to use the awesome subdomain catbox.mattgrayisok.com for my examples moving forward. So make sure you replace that with your own domain when you're copying and pasting.
ssh root@catbox.mattgrayisok.com #Log into our new server
cd ~ #Make sure we're in the home directory
curl -L http://bit.ly/dockerit | sh # Install docker and compose
mkdir nginxproxy #Make a dir to house our proxy config
cd nginxproxy
We just need a simple docker-compose to get our nginx proxy set up. Add the following to nginxproxy/docker-compose.yml:
version: '2'
services:
  nx-proxy:
    image: jwilder/nginx-proxy
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /etc/nginx/certs
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true
  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - nx-proxy
Here we're just setting up the nginx proxy and the Let's Encrypt helper, and adding a few volumes which will allow them to persist certificates.
Importantly we're also mounting the host's docker socket into both of these containers. This is how they are able to listen for container start events which occur on the host.
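If you're curious about those events, you can watch the same feed yourself from the host while starting or stopping any container:
docker events --filter 'type=container' --filter 'event=start' --filter 'event=stop' #Ctrl+c to stop watching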
docker-compose up -d
Now that we have our proxy in place we can start up an application which is going to sit behind it.
cd ~
mkdir catbox #Create a directory for our catbox application
cd catbox
Add the following to catbox/docker-compose.yml:
version: '2'
services:
  catbox:
    image: mattgrayisok/catbox:latest
    restart: unless-stopped
    expose:
      - 80
    environment:
      VIRTUAL_HOST: catbox.mattgrayisok.com
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: catbox.mattgrayisok.com
      LETSENCRYPT_EMAIL: your@email.com

networks:
  default:
    external:
      name: nginxproxy_default
Give that a save. We're simply running the catbox application, but rather than binding to the host's port 80 in order to receive traffic we're just exposing port 80 so that other containers can forward traffic to it.
We're also setting some environment variables which act as flags to the nginx proxy and Let's Encrypt containers, giving them instructions on how to forward traffic to the application and generate an SSL cert.
Finally we make sure that this container is attached to the same network as the nginx proxy by setting the default network to the one created by docker-compose when we brought the proxy containers up.
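Note that docker-compose derives that network name from the directory the proxy's docker-compose.yml lives in, so if you used a directory other than nginxproxy you'll need to adjust it. You can check what the network is actually called with:
docker network ls #Look for nginxproxy_default or similar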
docker-compose up -d
Wait a minute or two for the certificate generation process to complete, then try accessing your secured application: https://catbox.mattgrayisok.com
Feel free to have a look at the log output of the two proxy containers to see a little more detail about what they've been doing to make this all work.
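Assuming you kept the directory and service names from earlier, that looks something like this:
cd ~/nginxproxy
docker-compose logs nx-proxy #Routing decisions made by the proxy
docker-compose logs letsencrypt-nginx-proxy-companion #Certificate requests and renewals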
You can now add as many projects to this server as you'd like. Just remember to point the appropriate (sub)domain towards the server before starting it and make sure you add appropriate environment variables to your web server container.
Production HTTPS Using Traefik
The final scenario that I want to run through in this post uses a tool called Traefik. It's very similar to our previous nginx-proxy solution but has taken a few steroids.
Traefik is a single container which performs the automated routing we've seen from nginx-proxy, combines it with SSL management using Let's Encrypt, and also bundles a dashboard which lets us see the state of our routing platform along with info about the traffic flowing through it. The Traefik website does a much better job of explaining the details. I've used it in production both for single-host projects and as an ingress router in Kubernetes clusters. It works well in both situations.
If you followed along with the nginx-proxy walkthrough above we'll reuse the same server; just run docker-compose stop in each of your project directories to stop the running containers.
For those joining us with a blank slate you'll need a server and a domain name. You'll need your chosen domain name pointed at your server's IP and access via SSH before we continue.
I'm going to be using the subdomain catbox.mattgrayisok.com for my testing, you'll need to replace this with your own domain.
ssh root@catbox.mattgrayisok.com #Log into our new server
cd ~
curl -L http://bit.ly/dockerit | sh # Install docker if it isn't already installed
mkdir traefik
cd traefik
I mentioned earlier that Traefik comes with a dashboard. We don't want any old person peeking at it, so we'll start by generating a password which we'll use to secure it.
apt-get install apache2-utils
export PASSWORD=`date +%s | sha256sum | base64 | head -c 32` #Generate random password
htpasswd -nb admin $PASSWORD #Create a htpasswd file contents
echo $PASSWORD #Show us our new password
Copy the output of the htpasswd line and make a note of the password which was echoed, you'll need this shortly.
As well as a password we also need a Traefik configuration file. Add the following to traefik/traefik.toml:
defaultEntryPoints = ["http", "https"]

[web]
address = ":8080"

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "your@email.com"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
onDemand = false
  [acme.httpChallenge]
  entryPoint = "http"
There are a few things happening here:
- We're telling Traefik that we're going to set rules for http and https traffic
- We tell the dashboard to listen on the container's port 8080
- We set up a default redirect for all port 80 traffic to https
- We enable https via TLS for traffic on port 443
- We define a few parameters required for Traefik to talk to Let's Encrypt and get us some tasty certs
You'll just need to update your email address in there.
The file we've referenced, acme.json, is where Traefik will store all of its certificate information. We don't want to risk losing it if the container is destroyed, so we'll create the file on the host and mount it in.
touch acme.json #Create an empty file to mount into the container
chmod 600 acme.json # Traefik wants the permissions to be strict, root only
Now we need to create the docker-compose which will fire up our Traefik container. Add the following to traefik/docker-compose.yml:
version: '2'
services:
  traefik:
    image: traefik:1.7 #This config uses the Traefik 1.x syntax, so pin to the 1.x series rather than latest
    ports:
      - 80:80
      - 443:443
    expose:
      - 8080
    labels: #Compose wants these values to be strings, hence the quotes
      traefik.enable: "true"
      traefik.port: "8080"
      traefik.frontend.rule: "Host:traefik.mattgrayisok.com"
      traefik.frontend.auth.basic: "admin:password_from_htpasswd_output"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
    restart: unless-stopped
    command: ['traefik', '--docker']
**If you have any dollar signs in your htpasswd string, double them up to stop docker-compose interpreting them as variables.**
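If you'd rather not do that by hand, a little sed should double them up for you:
htpasswd -nb admin $PASSWORD | sed -e 's/\$/$$/g' #htpasswd output with dollar signs escaped for docker-compose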
With this file we're allowing Traefik to bind to both port 80 and 443 on our host, we're also exposing port 8080 which we'll use for the dashboard.
The labels that we've set are all related to the dashboard configuration. We're enabling it, routing dashboard traffic to port 8080 of the container, setting a rule which tells Traefik to route any incoming requests on ports 80 and 443 with a Host of traefik.mattgrayisok.com to the dashboard, and finally setting up basic auth using the htpasswd output that we copied earlier.
If you'd like to check out the dashboard you'll need to get a domain or subdomain set up for it and pointed towards the server, and then drop that domain into this file in place of traefik.mattgrayisok.com.
We're also setting up some volumes to mount into the container when it's running. The first is the host's docker process socket. Mounting this into Traefik allows it to listen for events fired when other containers start and stop so that it can dynamically route to them.
Secondly we're mounting in our config file and finally the acme.json file that we created earlier for storing Let's Encrypt goodies.
Let's test out what we've done so far:
docker-compose up -d
If you have your dashboard domain name set up properly you should now be able to visit it to see the Traefik dashboard! In my case that's https://traefik.mattgrayisok.c...
Log in using admin and the password that you generated earlier and you'll hopefully see a sweet dashboard with a single Frontend and Backend, both for the dashboard we're currently looking at.
The next step is to link up an application that Traefik can start routing to:
cd ~
mkdir catbox #Create our application directory if it doesn't already exist
cd catbox
Add the following to catbox/docker-compose.yml:
version: '2'
services:
  catbox:
    image: mattgrayisok/catbox:latest
    restart: unless-stopped
    expose:
      - 80
    labels:
      - traefik.enable=true
      - traefik.port=80
      - traefik.frontend.rule=Host:catbox.mattgrayisok.com
      - traefik.docker.network=traefik_default

networks:
  default:
    external:
      name: traefik_default
Replace catbox.mattgrayisok.com with your application's (sub)domain that's pointing at this server.
Here we've just used labels on our container to tell Traefik that we want it to route any traffic with a Host set to our domain to port 80 on our application container. We've also made sure that our application is connected to the same network as Traefik itself.
docker-compose up -d #Start our application
You should immediately see our application's front and back ends appear in Traefik's dashboard. Wait 30 seconds for certificates to be generated and then try out your app's URL, in my case: https://catbox.mattgrayisok.co...
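If you'd like to double-check that the certificate really did come from Let's Encrypt, openssl can show you the issuer and expiry dates (swap in your own domain):
echo | openssl s_client -connect catbox.mattgrayisok.com:443 -servername catbox.mattgrayisok.com 2>/dev/null | openssl x509 -noout -issuer -dates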
You're all sorted. You can now add as many applications to this server as you like - just make sure to point the (sub)domains to your server first and add appropriate labels in your docker-compose files.
Wrapping Up
We've explored three ways to set up HTTPS comms which cover both local and production deployments. Each of them suits different scenarios. It's up to you to decide how much control you need over your certificates vs how much out-of-the-box functionality you'd like.
Personally I tend to use a mixture of nginx-proxy and Traefik deployments in production because they're simple to implement and once they're set up you don't need to worry about certificate renewals - they're handled for you. They do come with some configuration overhead though, and if anything goes wrong be prepared to dig through documentation in order to find a solution.
✌️