How to automate multi-server deployment using Docker

Here’s my situation:

  • I have a project written in Go stored on GitHub
  • I have 3 app servers behind a load balancer (app1, app2, app3)
  • I have a Dockerfile as part of the project in git which, when used to build an image, installs all my app dependencies (including Go) and produces a working environment for my app
  • I have containers running on all 3 app servers and everything is working marvellously

Now I want to change some code and redeploy my changes to those 3 servers. I can think of three possible ways to automate this:

  1. As part of my Dockerfile I can add a step that pulls my code from GitHub and builds it. To redeploy, I need a script that logs into the 3 servers, rebuilds, and runs the containers, pulling in all the new code in the process. At most, all I ever need to push to a server is the Dockerfile.
  2. As part of my Dockerfile I can use an ADD command to bundle my code into the container. I would then need to deploy my entire project to each server using something like Capistrano or Fabric, then kill the old container, rebuild, and run.
  3. I can use a nominated machine (or my dev environment) to build a new image based on the current source code, push that image to the registry, and then have a script which logs into my servers, pulls the new image down, kills the old container, and runs the new one.
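Option 3 can be sketched as a small deployment script. The image name, registry, container name, and host list below are placeholders, and to keep the sketch safe to read it only prints the commands it would run rather than executing them:

```shell
#!/bin/sh
# Hypothetical placeholders -- substitute your own registry, image, and hosts.
IMAGE="registry.example.com/myapp:latest"
NAME="myapp"
HOSTS="app1 app2 app3"

# Step 1: build the image once on a nominated machine and push it to the registry.
echo "docker build -t $IMAGE ."
echo "docker push $IMAGE"

# Step 2: on each app server, pull the new image and swap the running container.
for host in $HOSTS; do
    echo "ssh $host docker pull $IMAGE"
    echo "ssh $host docker stop $NAME"
    echo "ssh $host docker rm $NAME"
    echo "ssh $host docker run -d --name $NAME -p 8000:8000 $IMAGE"
done
```

Because the build happens exactly once, every server ends up running a byte-identical image, which is the main argument for this approach.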

Number 1 seems like the easiest, but most other discussion I’ve read about Docker leans towards a setup like number 3, which seems rather long-winded to me.

What is the best option here (or elsewhere)? I’m new to Docker, so have I missed something? I asked someone who knows Docker and their response was ‘you’re not thinking the Docker way’, so what is the Docker way?

2 Answers

    I think the idea behind option 3 is that you build the image only once, which means all servers run the same image. The other two options may produce different images on different servers.

    For example, in a slightly more involved scenario, the three builds could even pick up different commits if you go with option 1.

    A combination of options 2 and 3 can be used with Fabricio. It’s an extension of Fabric, so the configuration for your project might look something like this:

    from fabricio import docker, tasks

    app = tasks.ImageBuildDockerTasks(
        service=docker.Container(
            name='app',
            # placeholder image name -- substitute your own registry/repository
            image='registry.example.com/myapp:latest',
            options={'publish': '8000:8000'},
        ),
        hosts=['user@host1', 'user@host2', 'user@host3'],
    )

    Using the configuration above, you can then run fab --list from the project root directory and see a list of available Fabricio commands:

    Available commands:
        app           prepare -> push -> backup -> pull -> migrate -> update
        app.deploy    prepare -> push -> backup -> pull -> migrate -> update
        app.prepare   prepare Docker image
        app.pull      pull Docker image from registry
        app.push      push Docker image to registry
        app.rollback  rollback Docker service to a previous version
        app.update    update service to a new version
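Assuming the configuration above, a full redeploy to all three servers is then a single command, and a bad release can be undone with the rollback task:

    fab app.deploy
    fab app.rollback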

    There is also a set of examples showing how to use Fabricio, including Docker swarm mode, which may be very useful for your configuration.
