So you have built a docker image or two on your development machine and want to get them into production. This article will step you through the process of hosting containers in production with chef.

Huh? Chef? I thought I didn’t need it anymore? Well, if you are using a dedicated docker host, you won’t: your hosting provider does all the work of maintaining the underlying operating system, and even the bigger hosts like DigitalOcean provide images with docker pre-installed. Still, you can find yourself doing other configuration on the underlying operating system. For those cases, and for provisioning a raw OS, I like to use Chef automation.

Without a doubt, writing custom chef cookbooks for fiddly application-specific configuration was frustrating. I felt like existing cookbooks were letting me down. Dammit, I just want to define metadata about the server at the node level and be done with it. With docker, I can now do the fiddly application-specific stuff there, leaving just the base, generic server configuration to Chef. Perfect.

I have two chef ‘tiers’ defined as roles. A base role, common to all machines in the cluster (running ubuntu 14.04), covers some basic housekeeping and security settings. On top of that, an application tier defines which containers to run on which machine. All the fiddly application-specific stuff ships with the container. It is a much cleaner separation, and much easier to maintain, than a chef-only setup.

My base.rb role looks something like:

    name 'base'
    description 'base configuration for a server'

    default_attributes 'openssh' => { 'server' => { 'permit_root_login' => 'no',
                                                    'password_authentication' => 'no' }},
                       'users' => ['<your uname here>'],
                       'firewall' => { 'rules' => ['sshd' =>    {'port' => '22',
                                                                 'protocol' => 'tcp'},
                                                   'http' =>    {'port' => '80',
                                                                 'protocol' => 'tcp'},
                                                   'https' =>   {'port' => '443',
                                                                 'protocol' => 'tcp'}]},
                       'ntp' => { 'servers' => [''] }

    run_list 'recipe[credentials]',
             # Passwords encrypted in a data bag.

             'recipe[apt]',
             # Upgrade the apt-gets.

             'recipe[unattended-upgrades]',
             # Yeah, just install security updates even if I am sleeping.

             'recipe[ufw]',
             # For port blocking and source address restrictions.

             'recipe[openssh]',
             # Can I haz access.

             'recipe[ntp]',
             # Correct time and date please.

             'recipe[user::data_bag]',
             # Create a user account (used for deploying container updates).

             'recipe[fail2ban]',
             # Go away nasty people.

             'recipe[docker]',
             # Install docker.

             'recipe[vim]'
             # Text editing on the server. Rarely used. But you never know?

While an application tier role would look something like:

    name 'nginx'
    description 'configures server with simple app'

    default_attributes 'docker-images' => [{'name' => 'nginx', 'port' => '80:80'}]
    # Pull the nginx image, create a container named 'nginx' and publish container port 80 on host port 80.

    run_list 'recipe[docker-images]'
    # Create containers from images defined in attributes.
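To make the mapping concrete, here is a minimal sketch in plain Ruby (not the actual docker-images cookbook code; the attribute-to-command translation is my assumption) of how each 'name'/'port' pair becomes a docker run invocation:

```ruby
# Sketch only: translate role attributes into docker run commands.
# The real docker-images cookbook does this inside Chef resources.
images = [{ 'name' => 'nginx', 'port' => '80:80' }]

commands = images.map do |image|
  "docker run --name=#{image['name']} -d " \
  "-p #{image['port']} #{image['name']}"
end

puts commands
# docker run --name=nginx -d -p 80:80 nginx
```

Adding another hash to the 'docker-images' array gives you another container on that machine, which is the whole appeal of keeping this at the attribute level.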

If I’m running private images from dockerhub, I create an encrypted data bag where I store my dockerhub username and password as attributes:

    knife solo data bag create credentials production --secret-file 'data_bag_key'
    knife solo data bag edit credentials production --secret-file 'data_bag_key'

    {
        "id": "production",
        "docker_email": "",
        "docker_password": "password123",
        "docker_username": "your_username"
    }
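The decrypted item is just a hash of those attributes. As a rough sketch (the real credentials cookbook may do this differently; inside a recipe you would load the item with Chef::EncryptedDataBagItem.load), a recipe can feed it straight into a docker login command, which on Docker versions of the ubuntu 14.04 era accepted -u and -p flags:

```ruby
# Assumed shape of the decrypted data bag item (matches the JSON above).
item = {
  'id'              => 'production',
  'docker_password' => 'password123',
  'docker_username' => 'your_username'
}

# Build the login command a recipe could run before pulling
# private images from dockerhub.
login = "docker login -u #{item['docker_username']} " \
        "-p #{item['docker_password']}"

puts login
# docker login -u your_username -p password123
```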

Then just create a node including the base role and desired application roles:

        {
            "run_list": ["role[base]", "role[nginx]"]
        }

Oh, and my Cheffile has the following dependencies:

    #!/usr/bin/env ruby
    #^syntax detection
    site ''

    cookbook 'apt'
    cookbook 'user', git: 'git://'
    cookbook 'credentials', git: 'git://'
    cookbook 'openssh'
    cookbook 'unattended-upgrades'
    cookbook 'ntp'
    cookbook 'fail2ban'
    cookbook 'ufw'
    cookbook 'docker'
    cookbook 'docker-images', git: 'git://'
    cookbook 'vim'

And cook:

    knife solo cook <ip of your server>

This will go off and do the base configuration of your machine and install docker, while docker-images pulls your images from dockerhub, creates matching containers and performs host integration (creating init scripts so that your containers start when the operating system boots). For the above example, you end up with a default install of nginx running on the machine.
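The init scripts are worth a look. On ubuntu 14.04, host integration means upstart jobs; here is a rough sketch, templated in plain Ruby, of the kind of job generated per container (the exact file contents are an assumption, not the cookbook's actual output):

```ruby
# Render a minimal upstart job so the container starts on boot and
# restarts if it dies. A recipe would write this to /etc/init/nginx.conf.
name = 'nginx'

upstart_job = <<-CONF
description "#{name} container"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
exec /usr/bin/docker start -a #{name}
CONF

puts upstart_job
```

docker start -a reattaches to the existing container, so upstart tracks the right process and sudo service nginx stop/start behaves like any other service.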

Deploying Container Updates

Deploying updates to your containers running on a server is pretty easy as well. When I create a new image, I push it to dockerhub with:

    docker push nginx

Then I have a simple bash script for performing deploys on the server:


    #!/bin/bash
    set -e  # Abort the deploy if any step fails.

    sudo docker pull nginx
    # Download the latest version of your application image.

    sudo service nginx stop
    # Stop the current container.

    sudo docker rm nginx
    # Remove the old container.

    sudo docker run --name=nginx -d -p 80:80 nginx
    # Create a new container from the latest version of the pulled image.

    sudo service nginx start
    # Make sure upstart/systemd has the latest PID.

It is a basic deploy process at the moment, with plenty of room for improvement, like zero-downtime deploys and an easier way to upgrade multiple machines at once.
