Docker deployment workflow with git

What is the best way to deploy a docker container to a production environment?

  • Add a Dockerfile to the git repository and run docker build on the production system
  • Commit changes to a container with docker commit, push it to a private Docker registry, and then pull it to the production system with docker pull

Should I run docker commit even if I don’t change the infrastructure but just the app code?

I hope my questions are clear.

2 Solutions for “Docker deployment workflow with git”

    Ideally you would have some sort of registry server where your docker images live, and your production environment would pull these in and use them. You shouldn’t have to update the docker container when your app code changes: ADD /project in your Dockerfile and share it with --volumes-from to other containers. Your app should be independent of the containers altogether.
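
    As a rough sketch of that data-volume pattern (the container and image names here are made up for illustration):

    # Data-volume container that owns the code at /opt/product_api
    docker run --name product_code -v /opt/product_api busybox true

    # The app container mounts the code volume instead of baking it in
    docker run -d --volumes-from product_code myorg/product_api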

    There are tools out there now like fig, which let you start up a development environment with docker containers; a minimal config is sketched below. You could then take this further and deploy your app containers in a CoreOS cluster.
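
    For example, a minimal fig.yml might look like this (the service names and port mapping are assumptions, not taken from the project below):

    web:
      build: .
      ports:
        - "8080:8080"
    db:
      image: postgres

    Running fig up then builds and starts both containers together.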

    Here is what I have for my productAPI; it just ADDs the project code into the container.

    You shouldn’t have to change the Dockerfile all that much unless your system dependencies change for the project.

    Dockerfile

    FROM phusion/baseimage
    MAINTAINER Alex Goretoy <alex@goretoy.com>

    # Suppress interactive prompts from apt during the build
    ENV DEBIAN_FRONTEND noninteractive

    ENV PRODUCT_API_PATH /opt/product_api

    # Copy the project code into the image and run its setup script
    RUN mkdir -p $PRODUCT_API_PATH/
    ADD . $PRODUCT_API_PATH/
    RUN $PRODUCT_API_PATH/setup.sh

    EXPOSE 8080

    # Bind to 0.0.0.0:8080 so the EXPOSEd port is actually reachable;
    # runserver defaults to 127.0.0.1:8000 otherwise
    CMD python $PRODUCT_API_PATH/manage.py runserver 0.0.0.0:8080
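
    To build and run this image, something like the following would work (the image name myorg/product_api is just a placeholder):

    docker build -t myorg/product_api .
    docker run -d -p 8080:8080 myorg/product_api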
    

    setup.sh

    #!/bin/bash
    # Abort the image build if any step here fails
    set -e

    apt-get update

    # System dependencies for the project
    apt-get install -y git \
        wget \
        openssl \
        libssl-dev \
        libffi-dev \
        python-pip \
        python2.7-dev \
        postgresql-9.3 \
        libpq-dev \
        libcurl4-openssl-dev

    # Bootstrap setuptools, then install the Python dependencies
    wget https://bootstrap.pypa.io/ez_setup.py -O - | python
    pip install -r $PRODUCT_API_PATH/requirements.txt

    # Clean up APT when done.
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*


    I would say that in a production environment you just want to pull and run the latest image(s) of your container(s).

    So the idea of having a private registry is good, and your delivery pipeline would be (sketched in commands after the list):

    1. Commit changes to your app or your Dockerfile(s)
    2. CI picks up the changes, compiles your app, builds your new image(s), and pushes them to the registry
    3. Pull the new image(s) on the production host
    4. Run them!
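
    In commands, that pipeline could look roughly like this (the registry host and tag are placeholders):

    # On the CI server, after the build succeeds:
    docker build -t registry.example.com/product_api:v42 .
    docker push registry.example.com/product_api:v42

    # On the production host:
    docker pull registry.example.com/product_api:v42
    docker run -d -p 8080:8080 registry.example.com/product_api:v42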

    On my side, I don’t use docker commit; I prefer tags with docker build ... -t .... I only use commit when I debug a container with an interactive shell.
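
    For that debugging case, a rough sketch (the container and image names are made up):

    # Snapshot a running container's filesystem into a throwaway image
    docker commit my_container debug/snapshot

    # Poke around inside the snapshot with an interactive shell
    docker run -it debug/snapshot /bin/bash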
