Docker vs old approach (supervisor, git, your project)

I’ve been using Docker for the past few weeks and I can say I love it and I get the idea. But what I can’t figure out is how to “transfer” my current setup to a Docker-based solution. I guess I’m not the only one, so here is what I mean.

I’m a Python guy, more specifically Django. So I usually have this:

  • Debian installation
  • My app on the server (from a git repo)
  • A virtualenv with all the app dependencies
  • Supervisor handling Gunicorn, which runs my Django app
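
For reference, the Supervisor piece of a stack like this usually looks something like the following sketch — the program name, paths, and user are placeholders, not the poster’s actual values:

```ini
; /etc/supervisor/conf.d/myapp.conf -- hypothetical names and paths
[program:myapp]
command=/srv/myapp/venv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
directory=/srv/myapp
user=www-data
autostart=true
autorestart=true
```

Restarting the app then amounts to `supervisorctl restart myapp`.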

The thing is, when I want to upgrade and/or restart the app (I use Fabric for these tasks), I connect to the server, navigate to the app folder, run git pull, and restart the Supervisor task that handles Gunicorn, which reloads my app. Boom, done.

But what is the right (better, more Docker-ish) way to adapt this setup when I use Docker? Should I somehow connect to the container’s bash every time I want to upgrade the app and run the upgrade there, or (from what I’ve seen) should I expose the app in a folder outside the image, as a volume, and run the standard upgrade process?

Hope you get the confusion of an old-school dude. I bet the Docker folks have thought about this.

Cheers!

One solution:

For development, Docker users typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it to. This allows rapid development where, at most, you need to bounce the container rather than rebuild the image.
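
As a sketch of that development setup (assuming a hypothetical project layout where the Dockerfile copies the code to /app), a bind mount in a Compose file overlays the working tree onto that path:

```yaml
# docker-compose.override.yml -- development only; service and path names are placeholders
services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --reload
    volumes:
      - ./:/app        # live code from the host shadows the image's /app
    ports:
      - "8000:8000"
```

After a code change, `docker compose restart web` bounces the container without rebuilding the image.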

For production, you want to include everything in the image and not change it at runtime: only persistent data goes in volumes; your code is in the image. When you make a change to the code, you build a new image and replace the running container in production.
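
A minimal production Dockerfile for the Django/Gunicorn stack described in the question might look like this — base image, paths, and module names are assumptions for illustration:

```dockerfile
# Dockerfile -- bake the code into the image; hypothetical names
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "myproject.wsgi:application", "--bind", "0.0.0.0:8000"]
```

Each release is then `docker build -t myapp:v2 .` followed by stopping the old container and starting a new one from the new tag; git pull never happens inside the container.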

Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not as a way to manage a running application.
