Developing With Craft CMS 3 in Docker

Recently I published an overview of my old Craft 2 in Docker project setup and I wanted to do the same for Craft 3. There are a lot of similarities but Craft's move to using Composer for package management requires us to change a few things around.

For a more in-depth run-through of using Craft in Docker, also check out my Craft CMS Development Workflow With Docker series.

TL;DR: Minimal Craft 3 in Docker repo here. Read the permissions section below.

Aims

  • Create a reusable but flexible base for starting Craft 3 projects
  • Be executable by any developer working on the project
  • Minimise the number of elements that need installing and version controlling (node/npm 🙄) on the host machine
  • Allow production ready build assets to be generated

Docker

Using Docker for local development helps us to meet all of the above objectives as it allows us to define version controlled containers in which our application will run alongside our normal codebase versioning. This ensures that our application environment is always kept in sync with code updates and that it can be executed as expected in any environment which is able to run docker without installing any application specific software on the host.

I wrote about some Docker fundamentals in my previous Craft 2 post. I don't want to go into the same level of detail again but here are the highlights:

  • A Dockerfile is used to create an image which serves as the blueprint for our application and its executing environment.
  • An image can be used to create a container which is an actual running copy of our application.
  • A container can be tweaked at runtime by mounting files into it, adjusting the commands that run when it starts up or ensuring it has network connectivity to other containers.
  • Docker-compose can be used to organise and version control these tweaks.

My general approach to using Docker as a development tool is to first define my Dockerfiles which install all of the project's dependencies at the OS and PHP levels (if it's a PHP project) and then copy the application's files into the appropriate place within the image's filesystem. I intend the image built from the Dockerfile to be used as the production-ready copy of the application.

In order to make things work for local development we want to be able to edit files on our host system and have these alter the behaviour of the application within our running container. We can achieve this by mounting our host system's files into a running container - replacing those that exist within our built image. Once we're happy with the changes we've made we can rebuild the image (copy our host system's files into a new version of the container) in order to create a new, updated production ready version of the application.
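
To give a rough idea of what that mounting looks like in practice, here's a sketch using a plain docker run - the image tag is a placeholder for whatever you've built, and only the templates folder is mounted for brevity:

# Run a container from our built image, replacing the baked-in templates
# with the copies on the host so edits show up inside the container immediately
docker run --rm \
    -v $(pwd)/src/templates:/var/www/html/templates \
    my-craft-project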

Craft 2 vs Craft 3

There is a significant difference between Craft 2 and Craft 3 which influences our development environment.

Craft 3 uses Composer to manage its own files as well as its dependencies and plugins. This means there's actually very little that we need to include in our project's structure - Composer will handle most of the files we need.

Because of this we only really need to supply the following assets in order to build our project:

  • Static assets (CSS, JS, images etc)
  • Twig template files
  • Config files
  • Any custom modules
  • composer.json and composer.lock
  • (Optional) The Craft command line executable

Everything else is managed by Composer, stored in the database or is transient data (such as the contents of 'storage').
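
Pulled together, that gives us a project layout along these lines - the names mirror the paths used in the Dockerfile and docker-compose file below:

src/
├── composer.json
├── composer.lock
├── config/
├── craft
├── modules/
├── templates/
└── web/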

So how do we go about building an image from these assets?

The PHP Dockerfile

FROM php:7.1-fpm

RUN apt-get update && apt-get install -y \
        libfreetype6-dev libjpeg62-turbo-dev \
        libmcrypt-dev libpng-dev libbz2-dev \
        libssl-dev autoconf \
        ca-certificates curl g++ libicu-dev

RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/

RUN docker-php-ext-install \
        bcmath bz2 exif \
        ftp gd gettext mbstring opcache \
        shmop sockets sysvmsg sysvsem sysvshm \
        zip iconv mcrypt pdo_mysql intl

RUN apt-get install -y --no-install-recommends libmagickwand-dev && \
        pecl install imagick-3.4.3 && \
        docker-php-ext-enable imagick

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer
RUN composer global require hirak/prestissimo

RUN echo "upload_max_filesize = 10M" > /usr/local/etc/php/php.ini && \
    echo "post_max_size = 10M" >> /usr/local/etc/php/php.ini && \
    echo "max_execution_time = 300" >> /usr/local/etc/php/php.ini && \
    echo "memory_limit = 256M" >> /usr/local/etc/php/php.ini

COPY --chown=www-data:www-data ./src/config /var/www/html/config
COPY --chown=www-data:www-data ./src/modules /var/www/html/modules
COPY --chown=www-data:www-data ./src/templates /var/www/html/templates
COPY --chown=www-data:www-data ./src/web /var/www/html/web
COPY --chown=www-data:www-data ./src/composer.json /var/www/html/composer.json
COPY --chown=www-data:www-data ./src/craft /var/www/html/craft

RUN mkdir -p /var/www/html/storage/rebrand && \
    mkdir -p /var/www/html/storage/runtime/mutex && \
    mkdir -p /var/www/html/storage/logs && \
    chown -R www-data:www-data /var/www/html/storage

RUN composer install -d /var/www/html/ && \
    chown -R www-data:www-data /var/www/html/vendor && \
    chown -R www-data:www-data /var/www/html/composer.lock

The first few RUN statements simply install dependencies at the OS and PHP levels. This is essentially the same as my previous Craft 2 example, just updated in places, and now includes ImageMagick for better image processing.

We then install Composer inside the image. A few bits of Craft functionality, such as plugin management and self-updates, assume Composer is available on the system, so we make it available in this way.

We're also installing prestissimo which, if you haven't used it before, is a Composer plugin that decreases composer install times by a significant margin - a 60% reduction isn't unusual. It doesn't break anything, so it's a no-brainer.

Next we set some PHP config options which just make things a little more forgiving if we're doing big upload jobs.

With all this set we're ready to actually get our application added to the image. We start by copying our application's files into the image in the location defined as PHP's base execution path. These are all the files we need to get our Craft project up and running.

The last bit of file management we need to do is to create some storage directories that Craft can write to. Craft likes these to be present and writable when it first boots but they don't need any content.

Finally we run 'composer install' to create our 'vendor' folder and fill it with Craft, its dependencies and any other plugins that we have defined in our composer.json. Craft also insists on having write access to the 'vendor' folder so that it can install and update plugins itself, so we just need to 'chown' it, as can be seen after the 'composer install'.

Once all these steps are complete we'll have an image built which can be used as the blueprint to spin up new containers which have our application and all of its requirements ready and prepared.
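
Building that image by hand is just a standard docker build. Something like the following would do it, assuming the Dockerfile lives at .docker-config/php/Dockerfile as in the compose file below - the tag name is just an example:

# Build the PHP image from the project root, using the Dockerfile above
docker build \
    -f .docker-config/php/Dockerfile \
    -t my-craft-project:latest \
    .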

Docker Compose

Once built, our image should be considered atomic - a new image should be built in order to change any of its contents. However, during development we don't want to have to rebuild for every single change we make; that would be tedious, and your boss/clients would become concerned that perhaps Docker wasn't the efficiency boon you claimed it to be.

We can make this work by mounting files and folders into a running container using a docker-compose file. For example:

version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: ./.docker-config/nginx/Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src/web:/var/www/html/web

  php:
    build:
      context: .
      dockerfile: ./.docker-config/php/Dockerfile
    expose:
      - 9000
    volumes:
      - ./src/config:/var/www/html/config
      - ./src/templates:/var/www/html/templates
      - ./src/modules:/var/www/html/modules
      - ./src/web:/var/www/html/web
      - ./src/composer.json:/var/www/html/composer.json
    environment:
      ENVIRONMENT: dev
      DB_DRIVER: mysql
      DB_SERVER: database
      DB_USER: project
      DB_PASSWORD: project
      DB_DATABASE: project
      DB_TABLE_PREFIX: craft_
      SITE_URL: http://localhost
      SECURITY_KEY: sdalkfjalksjdflkajsdflkjasd

With this docker-compose.yaml excerpt we're setting up an nginx reverse proxy container and a PHP-FPM container, with files and folders from our host system mounted as 'volumes'. If you compare the paths used for these volumes to our Dockerfile above you'll see that we're just replacing the files we copied into our image earlier with the same files.

Why bother replacing these files with copies of themselves? By mounting in this way, the files inside the running container are now actually the ones which exist on our host filesystem, so we can update them on our host and the changes will be visible inside the container immediately. This allows us to test our app using the container's environment while also arbitrarily changing our application's code without having to rebuild the image each time.
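
Bringing the stack up (and rebuilding the images whenever the Dockerfiles change) is then just a couple of standard docker-compose commands:

# Build the images and start the containers in the background
docker-compose up -d --build

# Tail the logs from the PHP container
docker-compose logs -f php

# Tear everything down again
docker-compose down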

Permissions

When mounting our host's files and folders into a running container it's important to note that the permissions of those files are also maintained inside the container.

If you create a file on your host system using a text editor it'll normally have its owner set to the user that you're currently logged in as. In my case that's 'matt'. It'll also probably have some default read/write permissions which allow any user to read, but only the owner to write.

Running containers have a completely separate set of users to your host - there is no user called 'matt' in there. So when a file owned by 'matt' is mounted, the users inside the container will not have write access to it.

The PHP process inside the default PHP-FPM images runs as a user called www-data (hence us setting the owner of our files to www-data in our Dockerfile). In order to allow this container-based user to write to files that we mount into a container from our host we need to set appropriate permissions on those files.

In this case the easiest solution is to 'chmod -R 777' the following files and folders in your host's project directory (see the example commands after this list):

  • src/config
  • src/web/cpresources
  • src/composer.json
  • Any folders which you want to set as Asset Sources in Craft
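
On the host that might look something like this - the last line is just an example, adjust it to point at your own asset source folders:

chmod -R 777 src/config
chmod -R 777 src/web/cpresources
chmod 777 src/composer.json
chmod -R 777 src/web/assets   # example asset source folder - yours will differ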

If you miss any of these Craft will usually tell you with an error message that it is having problems writing to a specific location.

Conclusion

We now have a basic Craft 3 project template based on docker which achieves our original aims:

  • We've included the minimum required to reliably get a Craft 3 project running in docker.
  • The git repo contains both the application and a definition of the environment in which it's going to run. Anyone with access to this repo can boot it locally with no prior knowledge.
  • Only docker needs to be installed on the host platform. Nginx, PHP, Database and Craft versions are all fixed and version controlled.
  • Production ready images can be generated easily as part of the regular development process.

There is a lot more that you can do when using docker locally. One of my favourite use cases is running asset build chains in a container defined in the project repo - allowing it to be executed by any developer without installing any tooling. I've discussed this in my Craft 2 in Docker post if you're interested.
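
As a taste of that, the pattern is usually just a throwaway node container with the project mounted in - a rough sketch (not part of the repo above) assuming your package.json defines a build script:

# Run the asset build inside a node container - no node/npm needed on the host
docker run --rm \
    -v $(pwd)/src:/app \
    -w /app \
    node:8 \
    sh -c "npm install && npm run build"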

Things can get even more interesting when you start using Docker as part of your continuous integration flow - but that's a topic for another day...

