A Craft CMS Development Workflow With Docker: Part 3 - Continuous Integration

As part of my Craft CMS Development Workflow With Docker series of articles, I'll be covering, from start to finish, how to set up a development workflow for working with Craft CMS in Docker.

Git repo here.



After completing parts 1 and 2 of this series we can build our project into a set of reusable images and also run our buildchain in a codified, predictable way. However, the actual commands that we use to perform the building, packaging and distribution of our project need to be executed manually, which means we risk human error creeping into this process.

Continuous Integration will help us with this. We'll codify the methodology that we want to use to get our project ready for deployment and automate its execution, so that whenever we push a commit a freshly built set of images is created as a result.

As this is a series explaining my Craft dev process, I won't worry about being opinionated: we'll be using GitLab to handle a lot of this for us. It's the simplest option (and it isn't owned by Microsoft).

Git Init

If you haven't already, we'll need to make sure our project is being tracked using Git.

git init
git add .
git status #Always check your list of files before you commit :)
git commit -m "Initial commit"

GitLab

And now we need somewhere to push it to. Head over to gitlab.com and create an account if you don't already have one. Then create a new project and call it whatever you like. I'd suggest something like craft-in-docker.

Make sure you have an SSH key linked to your GitLab account, which will allow you to push and pull repos.
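If you don't have one yet, generating and checking a key looks roughly like this (a sketch assuming ed25519 keys; add the public key to your GitLab profile's SSH Keys settings):

# Generate a new key pair
ssh-keygen -t ed25519 -C "you@example.com"

# Print the public key so you can paste it into GitLab
cat ~/.ssh/id_ed25519.pub

# Check that GitLab recognises you
ssh -T git@gitlab.com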

Run the following in the project folder in order to set the GitLab project as your local repo's remote:

git remote add origin git@gitlab.com:[your-username]/[project-name].git
git push -u origin --all

Container Registry

Before continuing I think I should spend a moment explaining what's about to happen.

Although we are currently able to build our images and run them locally with a couple of commands, we have no way to move them to another server once they have been built. The whole point of using Docker was to allow us to create and distribute an immutable representation of our project. Cloning our repo onto every server we want to deploy to and rebuilding the images on each of them defeats this purpose.

To fix this we need some central location in which we can store different versions of our project images. Not only does this ensure that all of our deployment targets are using exactly the same image, it also allows each target to choose which version of our project it would like to run.

Docker provides a neat solution for this called Container Registries. In simple terms these are services to which you can push built images and subsequently pull them to other places. They also usually provide authentication, tagging and versioning functionality.
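In day-to-day use that boils down to a handful of commands. As a rough sketch (the registry address and image names here are just placeholders):

# Tag a locally built image with its destination in the registry
docker tag my-php-image registry.example.com/my-team/my-project/php:latest

# Push it up to the registry
docker push registry.example.com/my-team/my-project/php:latest

# Pull it back down on any other machine
docker pull registry.example.com/my-team/my-project/php:latest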

There are many container registry services in existence, the most popular being Docker Hub which is public and is the default used by all the docker tooling. If you want your images to remain private (you probably do for anything except open source) you can use Google's Container Registry, Docker Trusted Registry, AWS's ECR, Azure's Container Registry or GitLab's Container Registry.

We'll be using GitLab's offering as it seamlessly integrates with all of GitLab's other goodness and is free.

.gitlab-ci.yml

Part of GitLab's magic is how it brings all of the common features of a modern dev workflow into a single package. One of these features is the use of 'runners' - servers dedicated to executing predefined scripts which we can add to our projects.

These resources can be leveraged by simply creating a script which tells them what to do and adding it to the root of our project repo. Once we've added this file and pushed our project to GitLab, a runner will immediately pick the project up and execute the script we've defined in the context of our repo.

We'll be creating a script which performs the following actions:

  1. Run our buildchain image in order to generate fresh compiled assets based on the latest repo commit
  2. Build our project's images
  3. Push these images to GitLab's container registry

This should all be pretty straightforward because we've already written Dockerfiles to cover the first two steps!

Add the following to .gitlab-ci.yml in the root of your project repo. We'll explain each bit afterwards.

image: docker:18.09

services:
  - docker:18.09-dind

stages:
- build

variables:
  DOCKER_DRIVER: overlay2
  PHP_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/php:latest
  NGINX_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/nginx:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - apk update
  - apk upgrade
  - apk add python python-dev py2-pip build-base curl gettext
  - pip install docker-compose==1.23.2

build:
  stage: build
  script:
    - docker-compose run --rm buildchain yarn run build
    - docker build -f docker-config/php/Dockerfile --pull -t $PHP_CONTAINER_RELEASE_IMAGE .
    - docker build -f docker-config/nginx/Dockerfile --pull -t $NGINX_CONTAINER_RELEASE_IMAGE .
    - docker push $PHP_CONTAINER_RELEASE_IMAGE
    - docker push $NGINX_CONTAINER_RELEASE_IMAGE

Let's go through this a step at a time.

image: docker:18.09

services:
  - docker:18.09-dind

When running our CI script on a runner, the execution is always performed inside a container. This is because runners are normally shared between projects and users, so each build execution needs to be sandboxed to prevent it from influencing other tasks which are currently running.

GitLab allows us to choose some of the features of this container, giving us the opportunity to set a sensible starting point for our CI task.

We're setting our container base image to docker and attaching a service called docker-in-docker. The combination of these will allow us to run docker commands inside our CI container which is exactly what we need to do.

stages:
- build

GitLab allows us to define multiple 'stages' as part of our CI process. These will run one after another and only execute if the previous stage completed successfully.

We're keeping things simple to begin with so we've just defined a single stage called 'build'.
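For reference, a pipeline with multiple stages might be shaped something like this (the test and deploy jobs are hypothetical placeholders, not part of our script):

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Build and push the images"

test:
  stage: test
  script:
    - echo "Only runs if build succeeded"

deploy:
  stage: deploy
  script:
    - echo "Only runs if test succeeded"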

variables:
  DOCKER_DRIVER: overlay2
  PHP_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/php:latest
  NGINX_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/nginx:latest

Here we are just setting some variables which we can use later on in our scripts. Specifically we're setting:

  • DOCKER_DRIVER: Tells Docker to use the overlay2 filesystem driver which is fast. I think this is the default these days but it does no harm being here.
  • PHP_CONTAINER_RELEASE_IMAGE: The location that we're going to push our built PHP image to. $CI_PROJECT_NAMESPACE and $CI_PROJECT_NAME are set by GitLab to your username (or group name) and your project's slug respectively.
  • NGINX_CONTAINER_RELEASE_IMAGE: The location that we're going to push our built nginx image to.

GitLab will allow you to push any images to your username's namespace or the namespace of any group that has provided you with permission to do so.

Notice that we've added a version tag onto the end of our image targets. Currently we're just setting it to latest, but you can use this in clever ways to build and store many different versions of your project within the registry.
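For example, you could swap latest for one of GitLab's predefined variables to get per-branch or per-commit image versions (a sketch we won't be using in this article):

variables:
  # One image version per branch
  PHP_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/php:$CI_COMMIT_REF_SLUG
  # ...or one per commit
  # PHP_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/php:$CI_COMMIT_SHORT_SHA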

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - apk update
  - apk upgrade
  - apk add python python-dev py2-pip build-base curl gettext
  - pip install docker-compose==1.23.2

These commands run before any of the individual tasks defined in .gitlab-ci.yml are executed, making this the ideal place for common set-up steps.

In this example we're first telling Docker to log in to the GitLab container registry. This will allow us to push and pull images later without needing to constantly supply login details. The $CI_BUILD_TOKEN is set by GitLab and contains an authentication token, so you don't need to supply any of your own auth details.

After that we're doing a standard update and upgrade to get everything up to date. The docker container image is based on Alpine Linux so we use its package manager, apk, to do this.

Next we install python and a few other things that are required by docker-compose to run.

Finally we use pip to install docker-compose, which will allow us to make use of our docker-compose.yml file if we choose to. The docker-compose version is pinned because, at the time of writing, later versions break during installation on this image. (A proper fix would be to use a different installation mechanism.)
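One such mechanism would be to download the pre-built binary from GitHub's releases instead of going through pip. Treat this as an untested sketch though: the official binary is linked against glibc, so it may need extra work to run on the Alpine-based docker image.

# Download the docker-compose binary directly (untested on Alpine)
curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose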

build:
  stage: build
  script:
    - docker-compose run --rm buildchain yarn run build
    - docker build -f docker-config/php/Dockerfile --pull -t $PHP_CONTAINER_RELEASE_IMAGE .
    - docker build -f docker-config/nginx/Dockerfile --pull -t $NGINX_CONTAINER_RELEASE_IMAGE .
    - docker push $PHP_CONTAINER_RELEASE_IMAGE
    - docker push $NGINX_CONTAINER_RELEASE_IMAGE

At last, we've reached the meat of our script (I'm veggie, maybe this should be jackfruit?).

Here we're defining the set of steps that tick off the requirements we set out earlier.

First we're executing our buildchain which will compile our source CSS and JS and output them into the src/web directory. We're using docker-compose to do this because it will handle our project volume mounting just like it did when we were using it locally.

Next we're building both of our images using the Dockerfiles that we made in Part 1 of this series. We're also tagging them with our target location which enables us to...

Lastly push the built images up to our container registry.

And that's it. It's pretty simple because we've done all the work of defining our image build steps when we were getting set up for local development.

Let's commit this file and push it up to GitLab and see what happens.

git add .
git status
git commit -m "Created CI script"
git push

Once that's done, have a look in GitLab: you should see your CI pipeline get picked up by one of GitLab's runners and start executing.

You can drill down into the output of the task in order to track its progress or you can make yourself a coffee while it gets on with the hard work.

It'll take a while, but once it's done you'll get an awesome green tick in your GitLab project and you'll be able to see your PHP and nginx images in GitLab's container registry by clicking on 'Registry' in the left-hand nav.

These images are now ready to be pulled to other servers where you can execute them using docker to spin up your project in an instant. We'll cover exactly how to go about doing that in Part 4.
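As a small taste of what's coming, pulling them onto a server looks roughly like this (a sketch; we'll wire everything together properly in Part 4):

# Authenticate against GitLab's registry (your GitLab credentials or a deploy token)
docker login registry.gitlab.com

# Pull the latest built images
docker pull registry.gitlab.com/[your-username]/[project-name]/php:latest
docker pull registry.gitlab.com/[your-username]/[project-name]/nginx:latest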

Speeding Things Up

With this CI script, a full pipeline run currently takes 13 minutes for me. Let's review all of the things that it's doing that take up this time:

  • Installing all the requirements in our before_script, including docker-compose
  • Building our buildchain image, which includes:
    • Downloading the node base image from Docker Hub
    • Adding our package.json to it
    • Running yarn install to create our node_modules directory
    • Executing our buildchain to compile assets
  • Building our PHP image, which includes:
    • Downloading the php-fpm base image from Docker Hub
    • Installing all of Craft's dependencies
    • Copying our files in
    • Running composer install
  • Building our nginx image, which includes:
    • Downloading the nginx base image from Docker Hub
    • Copying our files in
  • Uploading our built PHP and nginx images to the container registry

That's a lot of downloading, uploading, installing and compiling!

Because our CI scripts are executed in disposable containers, each of these steps will run every time our CI task executes, even though the result will most likely be the same every time.

There are some things we can do to make this process a little quicker though. They all add a bit of complexity to our CI script, which is why I left them out of the version above, but feel free to add them in so we can do some speedy releasing.

Using Images As A Layer Cache

Docker images are built up in stages, with each stage being a command in your Dockerfile. Each stage makes a change to the filesystem of the image, and the diff of this change is stored by Docker as a 'layer'. When Docker builds images it is able to look in its collection of layers and see if any match the operation that you're currently trying to perform - if so it'll just use its cached layer rather than re-do the operation.

In this way, two images that are based on the same set of operations can share the same set of cached layers on your machine while they are being built.

As a practical example think back to our PHP image's Dockerfile. At the top we installed all of Craft's dependencies. This step is likely to be identical in all of your Craft projects. Docker realises this and will check to see if it has that layer in its cache before re-running the installation process for all of these dependencies. As long as all of your Craft projects have identical commands at the top they'll all use the same base layers.

However, as soon as you include a command that hasn't been executed before, your stack of layers will diverge from those that are cached and the commands will have to be executed. This is why it's always sensible to COPY any custom files into an image as late as possible - as these custom files will certainly cause a new layer to be created.
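To make that concrete, here's the general shape of a cache-friendly Dockerfile (an illustrative sketch, not our actual PHP Dockerfile):

FROM php:7.2-fpm

# Stable, project-agnostic steps first - these layers can be
# shared by every project that starts the same way
RUN apt-get update \
    && apt-get install -y libpng-dev \
    && docker-php-ext-install pdo_mysql gd

# Project-specific files last - everything above this line can
# still be served from the cache when these files change
COPY . /var/www/html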

Anyway, a little-known feature of Docker is that it is able to use layers not only from its cache, but also from existing images. So we can take an image that we've built previously, give it to Docker and say "Please use any layers from this when building new images".

It's pretty neat.

Let's update our .gitlab-ci.yml build task to look like this:

build:
  stage: build
  script:
    - docker pull $PHP_CONTAINER_RELEASE_IMAGE || true
    - docker-compose run --rm buildchain yarn run build
    - docker build -f docker-config/php/Dockerfile --pull --cache-from $PHP_CONTAINER_RELEASE_IMAGE -t $PHP_CONTAINER_RELEASE_IMAGE .
    - docker build -f docker-config/nginx/Dockerfile --pull -t $NGINX_CONTAINER_RELEASE_IMAGE .
    - docker push $PHP_CONTAINER_RELEASE_IMAGE
    - docker push $NGINX_CONTAINER_RELEASE_IMAGE

We've added a line which pulls our PHP image from the container registry if it exists. The || true just stops the script from erroring out if the image doesn't exist in the registry yet.

We're then telling Docker to use this image as a layer cache by providing the --cache-from flag when building our new PHP image.

There's no point in doing this for our nginx image: its Dockerfile only contains file COPY commands, and the copied files change with every commit, so there are no expensive, stable layers to reuse.

Commit and push and see how much we've managed to speed up our script.

git add .
git status
git commit -m "Use previously built docker images as a layer cache to speed up CI"
git push

In my testing that's brought the total build time down to 6m 30s, almost a 50% time saving!

Preinstalled docker-compose

Every time our CI job boots we're spending time downloading and installing docker-compose. That's stupid.

Instead let's see if there's another image someone has made which has this already installed...

Indeed there is: https://hub.docker.com/r/tmaier/docker-compose

All we need to do is swap our base image to this and remove the docker-compose install steps from our before_script:

image: tmaier/docker-compose:18.09

...

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - apk update
  - apk upgrade

...

Commit and push.

For me, that's another 20s saved. Every little helps.

Storing The Buildchain Image

On every run we're also rebuilding our buildchain image unnecessarily. It would be much better if we just built this once and then reused it on each execution.

We can achieve this by pushing a copy of it to our project's container registry. On any subsequent executions we'll pull it and just use the cached version.

In .gitlab-ci.yml:

variables:
  DOCKER_DRIVER: overlay2
  PHP_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/php:latest
  NGINX_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/nginx:latest
  BUILDCHAIN_IMAGE: registry.gitlab.com/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/buildchain:latest

...

build:
  stage: build
  script:
    - docker pull $PHP_CONTAINER_RELEASE_IMAGE || true
    - docker pull $BUILDCHAIN_IMAGE || true
    - docker-compose run --rm buildchain yarn run build
    - docker-compose push buildchain
    - docker build -f docker-config/php/Dockerfile --pull --cache-from $PHP_CONTAINER_RELEASE_IMAGE -t $PHP_CONTAINER_RELEASE_IMAGE .
    - docker build -f docker-config/nginx/Dockerfile --pull -t $NGINX_CONTAINER_RELEASE_IMAGE .
    - docker push $PHP_CONTAINER_RELEASE_IMAGE
    - docker push $NGINX_CONTAINER_RELEASE_IMAGE

We'll also have to tell docker-compose where to push the built image to when we run docker-compose push buildchain. Upsettingly you can't provide this destination on the command line, but it's simple to add to our docker-compose.yml:

buildchain:
  image: registry.gitlab.com/[your-username]/[your-project-slug]/buildchain:latest
  build:
    context: .
    dockerfile: ./docker-config/buildchain/Dockerfile
  volumes:
    - node-modules:/project/node_modules
    - ./docker-config/buildchain/package.json:/project/package.json
    - ./src:/project/src
  command: yarn run watch

You know the drill by now. Commit, push, wait.

The first run of this updated script took 7m 20s, during which the buildchain image was built and pushed to the registry.

Hit the 'Retry' button to re-run the script. Now we're at 1m 50s 😍

Most of this remaining time is spent uploading and downloading images so there's not a lot more we can do. As you work on your project your CI tasks will begin to take a little longer because they'll be moving more data around and compiling more assets, but hopefully they'll stay within a threshold you can cope with.

Next Steps

With our images stored safely in the container registry we can distribute our project to anywhere we like. We'll cover how to go about doing that in Part 4.

Feedback

Noticed any mistakes, improvements or questions? Or have you used this info in one of your own projects? Please drop me a note in the comments below. 👌

