Docker vs old approach (supervisor, git, your project)

I’ve been on Docker for the past few weeks, and I can say I love it and I get the idea. But what I can’t figure out is how to “transfer” my current setup to a Docker-based solution. I guess I’m not the only one, so here is what I mean.

I’m a Python guy, more specifically Django. So I usually have this:

  • Debian installation
  • My app on the server (from a git repo)
  • A virtualenv with all the app dependencies
  • Supervisor that handles Gunicorn, which runs my Django app
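For reference, that kind of stack maps fairly directly onto a Dockerfile. A rough sketch (the project name `myproject`, a `requirements.txt` at the repo root, and port 8000 are my assumptions, not part of the question):

```dockerfile
# A Debian-based image replaces the manual Debian installation
FROM python:3.11-slim-bookworm

WORKDIR /app

# The image itself plays the role of the virtualenv: dependencies are baked in
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The app code, previously pulled from git on the server
COPY . .

# Docker supervises the process instead of Supervisor, so Gunicorn
# runs in the foreground as the container's main process
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]
```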

The thing is, when I want to upgrade and/or restart the app (I use Fabric for these tasks), I connect to the server, navigate to the app folder, run git pull, and restart the Supervisor task that handles Gunicorn, which reloads my app. Boom, done.

But what is the right (better, more Docker-ish) way to adapt this setup when I use Docker? Should I somehow connect to the container’s bash every time I want to upgrade the app and run the upgrade there, or (from what I’ve seen) should I expose the app in a folder outside the Docker image and run the standard upgrade process on the host?

Hope you get the confusion of an old-school dude. I bet the Docker guys have thought about this.


Answer:

For development, Docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it to. This allows for rapid development, where at most you need to bounce the container rather than rebuild the image.
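As a sketch of that development workflow (the image name `myapp:dev`, the `/app` path inside the image, and the port are assumptions):

```shell
# Bind-mount the host working copy over the code baked into the image;
# edits on the host are visible inside the container immediately.
docker run -d --name myapp-dev \
  -v "$(pwd)":/app \
  -p 8000:8000 \
  myapp:dev

# When a change needs a restart, bounce the container instead of rebuilding
docker restart myapp-dev
```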

For production, you want to include everything in the image and never change it in place: only persistent data goes in volumes; your code lives in the image. When you make a change to the code, you build a new image and replace the running container in production.
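In other words, the git-pull-and-restart routine becomes a build-and-replace routine, roughly like this (the image tag and container name are illustrative):

```shell
# Build a new image from the updated code
docker build -t myapp:1.0.1 .

# Replace the running container with one from the new image
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8000:8000 myapp:1.0.1
```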

Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, never to manage a deployed application.
