An on-premises "Docker Cloud"-like workflow from repository to Jenkins to test server

 
As a company building custom software solutions we develop software in highly diverse settings in terms of programming languages, databases and environments: Node.js, PHP, C#, MySQL, MongoDB, MS SQL, Windows, Ubuntu, Debian, you name it. That makes it a challenging task to provide test servers or acceptance test servers for fellow developers, project managers and customers.
We used to solve this by spinning up multiple virtual machines or cloud servers. This became more and more complicated, time-consuming to maintain and resource-hungry. Furthermore, it had several unwanted side effects:

  • A deployment was usually done on top of an existing installation, sometimes causing side effects due to leftovers from previous installations
  • A virtual machine was usually prepared manually, making it difficult to repeat the exact same steps with the exact same versions on the final target machine

 
To address these shortcomings we completely redesigned our workflow for developing and distributing web applications, using a combination of our continuous integration server Jenkins, the containerization software Docker and several other open-source tools. The result is a workflow enabling us to start a web application, including all its dependencies, as part of the build job without any prerequisites on the server. Configuring a build job takes about 15 minutes; the deployment to the target machine is done in about 5 minutes.
 

Our new workflow in a nutshell

  • Pushing code changes to the version control server will trigger a build on our continuous integration system Jenkins
  • After the usual build steps a new Docker image will be built containing the latest version of the application, which will then be pushed to an internal Docker registry running on a CoreOS server
  • A new Docker container will be started, based on the newly created image
  • The web application is fully up and running. It is accessible with a meaningful name and web address
  • The server provides an overview of the currently available test systems via a web page

With this procedure we always have fully working test environments for every web application, without having to administer the server or even log in to the host system.
 
The little extra effort on our side is just defining the Dockerfile, preparing the docker-compose file and adding two shell script calls to a Jenkins job. Pretty neat, huh?
And the best part: the shell scripts are open source and available from our GitHub account.
 
Let’s dive into the details…
 

Creating the Docker image

As version control server, i.e. repository host, we use Phabricator. Phabricator notifies Jenkins about every push to the "default" or "develop" branch, which triggers the execution of a Jenkins job. During this job a Docker image is built by calling a shell script. This script is part of a collection of little helper tools we call the Dorie-Tools, because, you know, we speak whale… The Dorie-Tools can be freely downloaded from GitHub.
The script automatically tags the image with several tags (see the sketch after this list):

  • "default" or "develop", based on the branch the current build is based on
  • a combination of the current date, build number and the branch the build was started upon; the result is something like "20161020_Build30_develop".
  • "latest" if the build is based on the „develop“ branch

 
After building the Docker image the script also generates a docker-compose.override.yml file which contains nothing but a reference to the just-created image and tag:
 

version: '2'
services:
  web:
    image: dockerregistry:5000/onwerk/demopipeline:20161020_Build30_develop
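
A shell script can generate such a file with a simple heredoc; a minimal sketch, reusing the hypothetical VERSION_TAG variable from the sketch above:

# Sketch only: writes the override file pointing at the exact image tag.
cat > docker-compose.override.yml <<EOF
version: '2'
services:
  web:
    image: dockerregistry:5000/onwerk/demopipeline:${VERSION_TAG}
EOF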

 
This file will later be used to identify precisely which version of the Docker image should be used to start the container, avoiding the ambiguity of the "latest" tag.
The newly created image will be pushed to a private Docker registry running on an internal server.
 

Starting the web application in a Docker container

After pushing the image the Jenkins job calls a shell script that copies several files to the server: the docker-compose file, optional additional deployment items (both checked into version control) and the docker-compose.override.yml file which was created by the build job just a few steps before. After the copy operation the script starts the multi-container Docker application described by the docker-compose.yml file. The override file is automatically loaded and respected by docker-compose and thus pins the exact version of the Docker image to use.
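For illustration, such a deployment step could look roughly like this; the host name, user and paths are placeholders, not the actual script:

# Sketch only: host, user and paths are placeholders.
DEPLOY_HOST=testserver.ourdomain.local.de
DEPLOY_DIR=/srv/demopipeline
scp docker-compose.yml docker-compose.override.yml "jenkins@$DEPLOY_HOST:$DEPLOY_DIR/"
# docker-compose merges docker-compose.yml and docker-compose.override.yml by default:
ssh "jenkins@$DEPLOY_HOST" "cd $DEPLOY_DIR && docker-compose pull && docker-compose up -d"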
The web application is up and running.
 

Accessing the web application

After these steps, the Docker container is up and running. Pretty good already.
But: we are not working on just one project but on multiple projects for different customers, and we would like to have test systems for all of them. If you specify an exposed port in the docker-compose file, you have to make sure that this external port is still available: Docker can start multiple containers using the same internal port, but it cannot start multiple containers using the same exposed (external) port. Tracking the external ports in use would create a massive administrative overhead, which we would of course like to avoid.
 
This is solved by using a reverse proxy that will automatically be reconfigured as soon as a container is started or stopped.
We are using the great nginx-proxy by jwilder, which also runs in a Docker container (of course…) on the same server and is connected to the Docker daemon. The start or stop event of a container triggers a reconfiguration of nginx. Every Docker container that sets the environment variable "VIRTUAL_HOST" to its virtual host name becomes available under that name for regular HTTP/HTTPS access; nginx automatically forwards any web request, based on the requested server name, to the exposed port of the matching container. With the reverse proxy we don't need any external ports at all, so potential port collisions are avoided entirely. A developer creating a new web application and writing a new docker-compose file does not need to know which ports are already in use. That's great!
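The basic pattern follows the nginx-proxy documentation; the application image and host name below are just examples:

# Run the proxy once per host, connected to the Docker daemon:
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Each application container then only needs VIRTUAL_HOST set:
docker run -d \
  -e VIRTUAL_HOST=demopipeline.testserver.ourdomain.local.de \
  dockerregistry:5000/onwerk/demopipeline:latest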
 

DNS name resolution setup

nginx responds to any server name that is specified via the environment variable in a docker-compose file. But a client would still not be able to open a website using that server name, since the web browser could not resolve it to an IP address via DNS. This is solved by a wildcard entry on our DNS server: any request to resolve *.testserver.ourdomain.local.de returns the same IP address, directing all requests to the nginx reverse proxy.
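Conceptually the wildcard entry looks like this; the IP address is a placeholder:

# Wildcard A record on the internal DNS server (placeholder IP):
#   *.testserver.ourdomain.local.de.  IN  A  192.0.2.10
# Every name below that suffix then resolves to the reverse proxy:
dig +short demopipeline.testserver.ourdomain.local.de
# -> 192.0.2.10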
To avoid having to remember the web addresses of the web applications, we use another handy tool: texthtml/docker-vhosts, which generates a small web page listing all available Docker virtual hosts. docker-vhosts runs in a Docker container on the same server, too.
 

Summary

By combining these tools we have a nice test system delivery pipeline:

  • Developers write code and define the environment to run the application via Dockerfile and docker-compose.yml
  • Pushing the code to the version control server builds the application, statically analyses the code and runs the unit tests
  • After the build steps a Docker image is generated using the well-defined environment; the image is stored in the internal Docker registry
  • A Docker container is started from the newly created image, bringing up a testable system
  • The test system is instantly available under an easy-to-remember web address
  • A web-based overview of the test systems is available

All the necessary steps and their complexity are wrapped in just two shell scripts. It only takes minutes to add these calls to a Jenkins job.
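In the Jenkins job this boils down to two "Execute shell" build steps along these lines; the script names are made up for illustration, the actual names come from the Dorie-Tools:

# Hypothetical build steps; see the Dorie-Tools for the real script names.
./dorie/build-and-push-image.sh
./dorie/deploy-to-testserver.sh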
 

…and external deployments?

But it does not stop there… For security reasons we don't want to expose the Docker registry to the whole world, yet we also use externally accessible cloud servers for staging and for acceptance tests for our customers. The workflow described so far only covers our internal test server, which has access to the internal Docker registry. For external deployments we use Docker's built-in ability to save and load images as flat files, again driven by a couple of shell scripts: the Docker image is exported and zipped, transferred to the cloud server, extracted and imported, and the container is started. On the external server we have no automatic configuration of nginx, for security reasons, but it takes just 10 minutes to create a new virtual host.
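A minimal sketch of this export/import path using docker save and docker load; host and file names are placeholders:

# Sketch only: host and file names are placeholders.
docker save dockerregistry:5000/onwerk/demopipeline:20161020_Build30_develop \
  | gzip > demopipeline.tar.gz
scp demopipeline.tar.gz user@staging.example.com:/tmp/
ssh user@staging.example.com \
  'gunzip -c /tmp/demopipeline.tar.gz | docker load && cd /srv/demopipeline && docker-compose up -d'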
This deployment to the staging or acceptance server is usually done automatically on any commit to the "default" branch.
 
Continuous deployment. At practically no cost and with very little effort. A workflow to love.
 
There is also a presentation available for free download on Speakerdeck. The shell scripts are available as open source from our GitHub account.