Docker vs old approach (supervisor, git, your project)

I've been using Docker for the past few weeks and I can say I love it and get the idea. But what I can't figure out is how to "transfer" my current setup to a Docker-based solution. I guess I'm not the only one, so here is what I mean.

I'm a Python guy, more specifically Django, so I usually have this:

  • A Debian installation
  • My app on the server (cloned from a git repo)
  • A virtualenv with all the app dependencies
  • Supervisor, which manages the Gunicorn process that runs my Django app
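
Roughly, as far as I understand it, the Docker equivalent of that stack would be a Dockerfile along these lines; the base image, paths, and module names below are just placeholders for illustration:

    # Illustrative sketch: bake the app and its dependencies into one image,
    # replacing the Debian box + virtualenv + supervisor setup.
    # python:3-slim is a Debian-based image.
    FROM python:3-slim

    WORKDIR /app

    # Install dependencies first so this layer stays cached between code changes.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy the application code (what "git pull" used to update on the server).
    COPY . .

    # Gunicorn runs in the foreground as PID 1, so supervisor isn't needed here.
    CMD ["gunicorn", "myproject.wsgi:application", "--bind", "0.0.0.0:8000"]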

The thing is, when I want to upgrade and/or restart the app (I use fabric for these tasks), I connect to the server, navigate to the app folder, run git pull, and restart the supervisor task that handles Gunicorn, which reloads my app. Boom, done.
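
For illustration, a stripped-down version of that fabric task might look like this (fabric 2.x style; the host, paths, and supervisor program name are placeholders):

    # Sketch of the current, pre-Docker deploy task.
    from fabric import Connection

    def deploy():
        conn = Connection("myserver.example.com")
        with conn.cd("/srv/myapp"):
            conn.run("git pull")                                   # update the code
            conn.run("venv/bin/pip install -r requirements.txt")   # sync dependencies
        conn.run("supervisorctl restart myapp")                    # reload Gunicorn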

But what is the right (better, more Docker-ish) approach to this workflow when I use Docker? Should I somehow connect to the container's shell every time I want to upgrade the app and run the upgrade there, or (from what I have seen) should I expose the app code in a folder outside the Docker image, e.g. a mounted volume, and run the standard upgrade process against that?

Hope you can see the confusion of an old-school dude. I bet the Docker folks have thought about this.

Cheers!

One solution:

For development, Docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it to. This allows for rapid development, where at most you need to bounce the container rather than rebuild the image.
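
For example (the image name, paths, and port here are assumptions), the bind mount shadows the code the Dockerfile would otherwise COPY in:

    # Development: mount the working tree over the image's /app so code edits
    # are visible in the container immediately, without rebuilding the image.
    docker run -d --name myapp-dev \
        -v "$(pwd)":/app \
        -p 8000:8000 \
        myapp:dev

    # After changing code, bounce the container instead of rebuilding.
    docker restart myapp-dev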

For production, you want to include everything in the image and not change it: only persistent data goes in volumes; your code is in the image. When you make a change to the code, you build a new image and replace the running container in production.
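
A sketch of that cycle, with the registry, image name, tag, and volume purely illustrative:

    # Build and publish a new image that contains the updated code.
    docker build -t registry.example.com/myapp:1.0.1 .
    docker push registry.example.com/myapp:1.0.1

    # On the server: pull the new image and replace the running container.
    # The code lives in the image; only persistent data (here, a named volume
    # for uploaded media) is mounted from outside.
    docker pull registry.example.com/myapp:1.0.1
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8000:8000 \
        -v myapp_media:/app/media \
        registry.example.com/myapp:1.0.1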

Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a deployed application.
