Keter behind an nginx Reverse Proxy over HTTPS


This blog is hosted on Digital Ocean. It is running on a small Debian 8 (Jessie) machine. The architecture is very simple. The actual blog application is a Haskell (Yesod) web application, supervised by Keter. Keter sits behind nginx. nginx receives the https requests, strips off the TLS, and passes plain http requests to Keter.

Setting this up was not complicated, but it does require multiple steps. I wanted to detail how I did it.

Setting up the Machine on Digital Ocean

If you don't have a Digital Ocean account, you can use this link to create one. When creating your machine, pick the smallest machine and a recent version of Debian.

(The steps for any other hosting provider should be very similar. There is very little Digital-Ocean-specific content in the following steps.)

Update Packages

It is a good idea to make sure that the machine has the latest packages and all relevant security updates. The following two commands update the packages on the machine.

$ sudo apt-get update
$ sudo apt-get upgrade

Update Kernel

If you are running on a Digital Ocean machine that was created a while ago, it may be a good idea to check whether the machine is using a recent Linux kernel.

This can be done through the Digital Ocean console. In the Digital Ocean console, click on your Droplet, and then click on the "Kernel" tab. This will let you select a more recent kernel. You will have to restart your Droplet for the new kernel to be used.

Install nginx

nginx is used as a reverse proxy server, mainly to deal with TLS. nginx listens for https requests on port 443. When it gets a request, it strips off the TLS and sends Keter a raw http request. Keter forwards the request to the actual Haskell web application. The web application sends a reply to Keter, which forwards it back to nginx. nginx wraps the reply up in TLS and sends it back to the client.

The following commands can be used to install nginx, and then enable it as a system service using systemd. This makes sure it is run every time the machine boots up.

$ sudo apt-get install nginx
$ sudo systemctl enable nginx
$ sudo systemctl start nginx

Let's Encrypt

I am using Let's Encrypt to obtain a free HTTPS certificate. Let's Encrypt provides a program called certbot that can be used to get the initial certificate. It can then be set up to renew the certificate every 90 days. The following steps show how to install certbot, how to create the initial certificate, and how to set up auto-renewal.

Debian 8 Backports

certbot needs to be installed from the Debian 8 backports repository. First, enable the backports repository (if it's not already enabled).

Add the following line to the /etc/apt/sources.list file:

deb jessie-backports main

Update Backports and Install certbot

The following two commands will update the backports repository and install certbot.

$ sudo apt-get update
$ sudo apt-get install certbot -t jessie-backports

Get the Initial Certificate

Now that certbot is installed, it can be used to get the initial certificate.

The following certbot command communicates with the Let's Encrypt servers. It makes use of the nginx root directory (which is /var/www/html/ by default), so make sure nginx is running.

First, create the .well-known/ directory for certbot to use:

$ sudo mkdir /var/www/html/.well-known

Then, use certbot to actually get the certificate (replace functor.tokyo with your own domain):

$ sudo certbot certonly --webroot -w /var/www/html -d functor.tokyo -d www.functor.tokyo

The certificate gets saved in the /etc/letsencrypt/live/functor.tokyo/ directory. The /var/www/html/.well-known directory will be used for future certificate renewals, so do not delete it.
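The mechanics of the webroot challenge can be sketched locally: certbot writes a token response under .well-known/acme-challenge/, and the Let's Encrypt servers fetch it over plain http to prove you control the domain. The scratch directory and file names below are illustrative, not the real challenge files:

```shell
# Simulate (roughly) what certbot does during a webroot challenge:
# drop a token response under .well-known/acme-challenge/ so that it
# is reachable over http.  /tmp/webroot-demo stands in for /var/www/html.
mkdir -p /tmp/webroot-demo/.well-known/acme-challenge
echo "token-response" > /tmp/webroot-demo/.well-known/acme-challenge/demo-token
cat /tmp/webroot-demo/.well-known/acme-challenge/demo-token
# prints: token-response
```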

Automatic Renewals

Let's Encrypt certificates are only valid for 90 days, so it is necessary to renew the certificate before it expires. The easiest way to do this is to set up a cron-like job that tries to renew the certificate once a day. It will run every day, but it will only actually renew the certificate when it gets close to the expiration time.
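To quantify "close to the expiration time": certbot's default (as far as I know) is to renew only once the certificate is within 30 days of expiry, so with a 90-day lifetime the daily job is a no-op for roughly the first 60 days:

```shell
# 90-day certificate lifetime, renewal attempted once the cert is
# within 30 days of expiry (assumed certbot default threshold).
lifetime_days=90
renew_threshold_days=30
noop_days=$(( lifetime_days - renew_threshold_days ))
echo "daily renew job is a no-op for the first $noop_days days"
# prints: daily renew job is a no-op for the first 60 days
```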

I chose to add a systemd service and timer for renewing certificates. This is similar to a cron job, but it decouples the renewal command from its daily scheduling.

First, add a "oneshot" systemd service to renew the certbot cert. Add the following to the file /etc/systemd/system/certbot.service:

[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos
ExecStartPost=/bin/systemctl reload nginx.service

Try running this with the following command:

$ sudo systemctl start certbot.service

If you check the status of running this service, it should be successful:

$ sudo systemctl status certbot.service
 certbot.service - Let's Encrypt renewal
 Main PID: 19373 (code=exited, status=0/SUCCESS)

Now, add a systemd timer that runs certbot.service once a day. Add the following to the file /etc/systemd/system/certbot.timer:

[Unit]
Description=Daily renewal of Let's Encrypt's certificates

[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target

Enable and start the timer:

$ sudo systemctl enable certbot.timer
$ sudo systemctl start certbot.timer

Check to make sure the timer is actually enabled:

$ sudo systemctl list-timers
Wed 2016-11-02 00:00:00 UTC 11h left Tue 2016-11-01 00:00:01 UTC 12h ago certbot.timer


Keter

Keter supervises Haskell web applications. With Keter, it is easy to deploy new versions of a web application. Keter does the hard work of bringing up a new version of the web application, making sure it is working correctly, redirecting traffic from the old version to the new version, then spinning down and deleting the old version.

Installing Keter

I recommend installing Keter through a .deb package generated with a build script. Follow the instructions for the script to build the .deb and install Keter.

Edit the /etc/keter.conf File

Edit the /etc/keter.conf configuration file to look like the following:

# Directory containing incoming folder, where to store logs,
# etc. Relative to the config file directory.
root: /var/www/keter
# Keter can listen on multiple ports for incoming connections.
# These ports can have HTTPS either enabled or disabled.
listeners:
    # HTTP
    - host: "*4" # Listen on all IPv4 hosts
      port: 8000 # Could be used to modify the port

# User to run applications as
setuid: www-data

# Get the user's IP address from x-forwarded-for. Useful
# when sitting behind a load balancer like Amazon ELB.
ip-from-header: true

It is important that ip-from-header is true (since Keter is running behind a reverse proxy). It is also important that listeners.port is NOT 80 or 443, since nginx will eventually be listening on those ports.

Start Keter

The following two commands can be used to enable and start Keter:

$ sudo systemctl enable keter
$ sudo systemctl start keter

nginx (Continued)

The nginx configuration has to be updated so that it knows about our certificates. It also has to be told about Keter.

Update Configuration

Add the following nginx site configuration as a file at /etc/nginx/sites-available/functor.tokyo. This configuration file makes nginx act as a reverse proxy for Keter. It accepts requests from users over https. It also listens on http and redirects all requests to https. There are inline comments that explain each directive.

# Server section for all requests coming in on port 443.  This is
# the important section.
server {
  listen 443 ssl;

  # Paths to our ssl certificate and private key.
  ssl_certificate /etc/letsencrypt/live/functor.tokyo/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/functor.tokyo/privkey.pem;

  # Turn on OCSP stapling.
  ssl_stapling on;
  ssl_stapling_verify on;

  # Set where our access logs will be stored.
  access_log /var/log/nginx/functor.tokyo_https.log combined;

  # The .well-known/ directory is used for certbot certificate
  # renewals.
  location /.well-known {
    alias /var/www/html/.well-known;
  }

  # This section details what will happen to all other requests
  # coming in to nginx.
  location / {
    # Add "Host", "X-Real-IP", and "X-Forwarded-For" headers.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # This is the url of the proxy server.  It should be the name
    # used in the upstream definition below.
    proxy_pass http://keter;
  }
}

# Server section for all requests coming in on port 80.  Just
# redirect to port 443.
server {
  listen 80;
  access_log /var/log/nginx/functor.tokyo_http.log combined;
  rewrite     ^   https://$host$request_uri? permanent;
}

# This is the "upstream" section.  It names the backend server
# (Keter) that nginx proxies requests to.
upstream keter {
  # Make sure this port is the same as the one used in the
  # /etc/keter.conf file above.
  server localhost:8000;
}
The only thing that might need changing (other than the references to "functor.tokyo") is the port in the upstream section. This should be the same port as specified in the /etc/keter.conf file.

Enable the New nginx Configuration

The new nginx configuration file from above needs to be enabled.

$ cd /etc/nginx/sites-enabled
$ sudo rm default # Delete "default" site installed by default.
$ sudo ln -sf ../sites-available/functor.tokyo

Restart nginx

Now check that the configuration file is syntactically correct. If it is, restart nginx.

$ sudo nginx -t # check that the config is syntactically correct
$ sudo systemctl restart nginx

Changes to Yesod Web Application

If you are using a recent Yesod scaffolded app, there are a couple of changes that must be made to the settings files so that the web application works well behind a reverse proxy.


Here is an example config/keter.yml:


stanzas:
  # Your Yesod application.
  - type: webapp

    # Name of your executable. You are unlikely to need to change this.
    exec: ../dist/bin/functortokyoblog

    # Environment variables that keter will set.
    env:
      APPROOT: "https://functor.tokyo"
      IP_FROM_HEADER: "true"

    # You can specify one or more hostnames for your application
    # to respond to. The primary hostname will be used for
    # generating your application root.
    hosts:
      - functor.tokyo

    ssl: true

    # Enable to force Keter to redirect to https.
    requires-secure: false

Make sure requires-secure is set to false. Keter should not redirect to https because nginx will handle that.

Make sure that ssl is set to true and env.APPROOT is set to the real application root (including the https scheme). This combination of settings appears to be necessary to get Yesod to generate links with https while still serving the actual content over http.

Make sure that env.IP_FROM_HEADER is set to true. This makes sure Keter's logs show the actual client's IP address (instead of just the upstream IP address of nginx).


Here is an example config/settings.yml. The important options have an explanation inline:

static-dir:     "_env:STATIC_DIR:static"
host:           "_env:HOST:*4" # any IPv4 host
port:           "_env:PORT:3000" # Keter sets this through an
                                 # environment variable.
ip-from-header: "_env:IP_FROM_HEADER:false" # Set in
                                            # "config/keter.yml".
                                            # This defaults to
                                            # false so that `yesod
                                            # devel` will not
                                            # take the IP address
                                            # from the http header.
                                            # This keeps `yesod
                                            # devel` working like
                                            # normal.
approot:        "_env:APPROOT" # Set in "config/keter.yml".  A
                               # default value is not used here
                               # (unlike the other settings) so
                               # that the application root is
                               # determined by the request headers
                               # if not specified. This keeps
                               # `yesod devel` working like normal.

Launch The Web Application

The .keter bundle for the Yesod web application can be built like normal:

$ stack exec -- yesod keter

This produces a *.keter file, which can be copied to /var/www/keter/incoming/. Keter will take this *.keter file and launch the web application.

Extra (clean up old Keter log files)

Keter will automatically rotate its log files. However, it will not delete old log files. The easiest way to delete old log files is to create another systemd job, similar to the certbot systemd job.

First, add a "oneshot" systemd service to clean up old Keter log files under /var/www/keter/log/. Add the following to the file /etc/systemd/system/keter-delete-log-files.service:

[Unit]
Description=Delete old Keter log files

[Service]
Type=oneshot
ExecStart=/usr/bin/find /var/www/keter/log/ -type f -name "*.log" -mtime +120 -delete

This deletes all files under /var/www/keter/log/ where the filename ends with .log and is older than 120 days. Try testing this out with the following command:

$ sudo systemctl start keter-delete-log-files.service

If you look in /var/www/keter/log/, you should not see any log files that are older than 120 days. If you check the status of running this service, it should be successful:

$ sudo systemctl status keter-delete-log-files.service
 keter-delete-log-files.service - Delete old Keter log files
 Main PID: 1014 (code=exited, status=0/SUCCESS)
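The find expression itself can also be sanity-checked on a scratch directory before pointing it at real logs (the paths below are illustrative):

```shell
# Create one fresh and one 200-day-old log file, then run the same
# find expression used in the service; only the old file should go.
mkdir -p /tmp/keter-log-demo
touch /tmp/keter-log-demo/new.log
touch -d "200 days ago" /tmp/keter-log-demo/old.log
find /tmp/keter-log-demo -type f -name "*.log" -mtime +120 -delete
ls /tmp/keter-log-demo
# prints: new.log
```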

In order to make this run once a day, a systemd "timer" is needed for this service. Add the following to the file /etc/systemd/system/keter-delete-log-files.timer:

[Unit]
Description=Daily deletion of old Keter log files

[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target
Enable and start the timer:

$ sudo systemctl enable keter-delete-log-files.timer
$ sudo systemctl start keter-delete-log-files.timer

Check to make sure the timer is actually enabled:

$ sudo systemctl list-timers
Thu 2016-11-10 00:00:00 UTC 18h left Wed 2016-11-09 00:00:01 UTC 8h 3min ago keter-delete-log-files.timer


Conclusion

This blog post lays out the steps for getting a Haskell web application running with Keter behind an nginx reverse proxy.

Ideally, it would be nice to turn these manual steps into an automatic process. This could be done through Docker, Puppet, Chef, Ansible, etc.

tags: haskell