Docker deployment workflow with git

What is the best way to deploy a Docker container to a production environment?

  • Add a Dockerfile to the git repository and run docker build on the production system.
  • Commit changes to a container with docker commit, push it to a private Docker registry, and then pull it on the production system with docker pull.

Should I run docker commit even if I don’t change the infrastructure but only the app code?

I hope my questions are clear.

2 Solutions for “Docker deployment workflow with git”

    Ideally you would have some sort of registry server, and your Docker images would live up there; your production environment would pull them in and use them. You shouldn’t have to update the container when your app code changes: ADD /project in your Dockerfile and share it with other containers via --volumes-from. Your app should be independent of the containers altogether.
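    As a sketch of that workflow (the registry host and container names here are hypothetical, not from the original answer):

```shell
# Build the app image once; the Dockerfile ADDs the project code into it.
docker build -t myregistry.example.com/product_api .

# Push it to the private registry so production hosts can pull it.
docker push myregistry.example.com/product_api

# On the production host: pull and run the app container, then let
# other containers (e.g. a web server) mount its volumes.
docker pull myregistry.example.com/product_api
docker run -d --name api myregistry.example.com/product_api
docker run -d --volumes-from api nginx
```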

    There are tools out there now, like fig, that let you start up a development environment with Docker containers. You can then take this further and deploy your app containers to a CoreOS cluster.
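    A minimal fig.yml for such a development environment might look like this (the service names, port, and database image are just an illustration, not from the original answer):

```yaml
web:
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/opt/product_api
  links:
    - db
db:
  image: postgres:9.3
```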

    Here is what I have for my product API; it just ADDs the project code into the container.

    You shouldn’t have to change the Dockerfile all that much unless the project’s system dependencies change.


    FROM phusion/baseimage
    MAINTAINER Alex Goretoy <>
    ENV DEBIAN_FRONTEND noninteractive
    ENV PRODUCT_API_PATH /opt/product_api
    RUN mkdir -p $PRODUCT_API_PATH/
    ADD /project $PRODUCT_API_PATH

    # Install system dependencies.
    RUN apt-get update && apt-get install -y \
        git \
        wget \
        openssl \
        libssl-dev \
        libffi-dev \
        python-pip \
        python2.7-dev \
        postgresql-9.3 \
        libcurl4-openssl-dev

    # Install Python dependencies.
    RUN wget -O - | python
    RUN pip install -r $PRODUCT_API_PATH/requirements.txt

    # Clean up APT when done.
    RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

    EXPOSE 8080
    CMD python $PRODUCT_API_PATH/ runserver
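    Once the Dockerfile builds, the image is used like any other (the tag name is a placeholder):

```shell
# Build the image from the directory containing the Dockerfile.
docker build -t product_api .

# Run it, mapping the EXPOSEd port to the host.
docker run -d -p 8080:8080 product_api
```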

    I would say that in a production environment you just want to pull and run the latest image(s) of your container(s).

    So the idea of having a private registry is good, and your delivery pipeline would be:

    1. Commit changes to your app or your Dockerfile(s).
    2. CI picks up the changes, compiles your app, builds your new image(s), and pushes them to the registry.
    3. Pull the new image(s) on production.
    4. Run them!
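    The steps above could be sketched as CI-side and production-side commands (the registry host, image name, and tag are placeholders, not from the original answer):

```shell
# CI side, after the build and tests pass:
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3

# Production side:
docker pull registry.example.com/myapp:1.2.3
docker run -d --name myapp registry.example.com/myapp:1.2.3
```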

    On my side, I don’t use docker commit; I prefer tags, with docker build ... -t .... I only use commit when I’m debugging a container with an interactive shell.
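    The difference in practice (image and container names here are hypothetical): a tagged build is reproducible from the Dockerfile, while commit snapshots a container’s current state.

```shell
# Reproducible: rebuild from the Dockerfile and tag the result.
docker build -t myapp:1.2.3 .

# One-off: snapshot a container you've been exploring interactively.
docker run -it --name scratch myapp:1.2.3 /bin/bash
# ...make changes inside the shell, exit, then:
docker commit scratch myapp:scratch-snapshot
```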
