Continuous Deployment with Docker – Container Camp presentation

I had the privilege of talking at the inaugural London Docker conference Container Camp along with a raft of industry-leading minds. Thanks to YLD! for the organisational skills – the conference was a tremendous success.

Video will follow; in the interim, here are the slides from my talk: Continuous Deployment with Docker.

Deployment Pipelines: Disproving the Big Bang

Here are my slides from LNUG 08/2014 covering the basic tenets of setting up a continuous deployment pipeline and outlining some of the challenges.

To paraphrase the steps:

  • Start with tests
  • Build a pipeline that’s reliable
  • Deploy to an environment
  • Monitor everything
  • Use KPIs for a cluster immune system
  • Roll back if KPIs exceed tolerances
  • Halt the pipeline; analyse the failure (5 whys)
  • Write a failing test case before fixing it
  • Re-enable the pipeline
  • Sleep well at night
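
The KPI steps above can be sketched as a simple "cluster immune system" check: compare post-deploy metrics against tolerances and roll back when any drifts out of bounds. A minimal sketch in JavaScript – the metric names and thresholds are hypothetical, not taken from any particular monitoring tool:

```javascript
// Sketch of a cluster immune system: compare post-deploy KPIs against
// tolerances and decide whether to roll back the deployment.
// Metric names and thresholds are illustrative only.
var tolerances = {
  errorRate: 0.01,   // max acceptable fraction of failed requests
  p95LatencyMs: 500  // max acceptable 95th-percentile latency
};

function shouldRollBack(kpis) {
  // Roll back if any KPI exceeds its tolerance
  return Object.keys(tolerances).some(function (key) {
    return kpis[key] > tolerances[key];
  });
}

// Error rate out of tolerance -> roll back
console.log(shouldRollBack({ errorRate: 0.05, p95LatencyMs: 300 }));  // true
// Everything within tolerance -> keep the deploy
console.log(shouldRollBack({ errorRate: 0.001, p95LatencyMs: 300 })); // false
```

In a real pipeline the KPIs would come from your monitoring system, and a `true` result would halt the pipeline and trigger the rollback step.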

Slides here.

Setting a time zone for Sauce Labs tests via Protractor

Sauce Labs servers run UTC. While this is good general practice, for tests with time-sensitive database fixtures or requirements, the developers’ time zone may be preferable.

Add a time-zone key to the capabilities object in your Protractor config file. Bear in mind that this is overridden by any multiCapabilities object you may have – if one exists, add the key to that object instead.

Possible time-zone values are listed on Wikipedia – bear in mind that only the city portion of the identifier is used (e.g. London, Samoa, Havana).

Using Protractor’s example config file as a base:

exports.config = {
  // get auth tokens from parent environment
  // i.e. if BASH: export SAUCE_USERNAME=Andy
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,

  // Capabilities to be passed to the webdriver instance
  // - webdriver config can be extended with Sauce Labs's
  //   custom configuration options
  capabilities: {
    'browserName': 'chrome',
    'time-zone': 'London'
  },

  // Spec patterns are relative to the current working directory when
  // protractor is called.
  specs: ['example_spec.js']
};

Fuzzing Docker containers with Trinity

Hypervisor breakouts are not as uncommon an occurrence as one would hope. In an effort to identify kernel problems that could lead to privilege escalation, we can “intelligently” fuzz for Docker container breakouts with Trinity:

Trinity is a system call fuzzer which employs some techniques to pass semi-intelligent arguments to the syscalls being called.

It passes illegal or unexpected parameters to various system calls in an attempt to crash the kernel. These attack vectors could then be used as the basis for an exploit – in this case, a container breakout.

It comes with a health warning:

Warning: This program may seriously corrupt your files, including any of those that may be writable on mounted network file shares. It may create network packets that may cause disruption on your local network. Run at your own risk.

Eric Windisch has wrapped Trinity in a simple Dockerfile to test container isolation. Run it with:

docker run -u nobody ewindisch/trinity

This will generate a lot of output – you can leave Trinity running until it triggers some “interesting” behaviour.

This article on Trinity provides some background:

Trinity can be used in a number of ways. One possibility is simply to leave it running until it triggers a kernel panic and then look at the child logs and the system log in order to discover the cause of the panic. Dave has sometimes left systems running for hours or days in order to discover such failures. New system calls can be exercised using the -c command-line option described above. Another possible use is to discover unexpected (or undocumented) failure modes of existing system calls: suitable scripting on the log files can be used to obtain summaries of the various failures of a particular system call.

Go forth and fuzz.

Dockercon14 Tear Down

The inaugural DockerCon was a rousing success, with Docker 1.0 being announced and released. Here is a selection of choice Docker links:

Ways of working in a Dockerfile

Two interesting and opposing takes on the single-process-per-container model of Docker deployment. The first contains some good general practices (especially concerning package updates); the second takes a more in-depth look at Docker’s process management and execution, and how to mitigate its shortcomings:

Much has changed since my first Dockerfile best practices post. I’ll leave the original post up for posterity and this post will include what has changed and what you should do now.

via Dockerfile Best Practices – take 2.

You just built a container which contains a minimal operating system, and which only runs your app. But the operating system inside the container is not configured correctly. A proper Unix system should run all kinds of important system services. You’re not running them, you’re only running your app.

via Baseimage-docker: A minimal Ubuntu base image modified for Docker-friendliness.