How to handle database storage/backup and application logs with many linked containers?

I’ve just created my first dockerized app and I am using docker-compose to start it on my client's server:

web:
  image: user/repo:latest
  ports:
    - "8080:8080"
  links:
    - db

db:
  image: postgres:9.4.4

It exposes a REST API (Node.js) on port 8080. The REST API uses a Postgres database. It works fine. My idea is that I will give this file (docker-compose.yml) to my client and he will just run docker-compose pull && docker-compose up -d each time he wants to pull fresh app code from the repo (assuming he has rights to access the user/repo repository).

However, I have to handle two tasks: database backups and log backups.

    1. How can I expose the database to the host (Docker host) system so that I can, for example, define a cron job that creates a database dump and stores it on S3?

    2. I’ve read some articles about Docker container storage and Docker volumes. As I understand it, in my setup all database files will be stored inside the container's writable layer, which is lost if the container is removed from the host. So I should use a Docker volume to keep the database data on the host side, right? How can I do this with the postgres image?

    3. In my app I log all info to stdout, and to stderr in case of errors. It would be cool (I think) if those logs were “streamed” directly to some file(s) on the host system so they could be backed up to S3, for example (again by a cron job?). How can I do this? Or maybe there is a better approach?

    Sorry for so many questions, but I am new to the Docker world and it’s really hard for me to understand how it actually works or how it’s supposed to work.

One solution:

    1. You could execute a command in the running container to create a backup, like docker exec db <command> > sqlDump. I do not know much about Postgres, but its dump tool writes the dump to stdout, and > sqlDump would redirect that to the file sqlDump in the host's current working directory. Then you can include that dump file in your backup. That could be done perfectly well with a cron job defined on the host, as sketched below. But an even better solution is described in the next paragraph.
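
       A hedged sketch of such a cron-driven backup, assuming the compose project is called myapp (the v1 compose format names the container something like myapp_db_1), the stock postgres superuser, and a placeholder /backups directory and S3 bucket with the AWS CLI configured on the host:

      #!/bin/sh
      # backup-db.sh -- dump Postgres from the compose-managed container
      # and upload the dump to S3. Container, path and bucket names are
      # illustrative; adjust them to your project.
      set -e
      DB_CONTAINER=myapp_db_1   # compose v1 naming: <project>_<service>_<n>
      STAMP=$(date +%Y%m%d-%H%M%S)
      # pg_dump writes the dump to stdout; redirect it into a file on the host.
      # Do not pass -t to docker exec: a TTY would corrupt the redirected output.
      docker exec "$DB_CONTAINER" pg_dump -U postgres postgres > /backups/dump-$STAMP.sql
      aws s3 cp /backups/dump-$STAMP.sql s3://my-backups/db/

      # Example host crontab entry (daily at 03:00):
      # 0 3 * * * /usr/local/bin/backup-db.sh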

    2. If you run your containers as described above, your data is lost as soon as you remove the db container. A better approach is to use data volume containers as described here. That way you can remove and re-create e.g. the db container without losing your data. A backup can then be created very easily via a temporary container instance, following these instructions. Assuming /dbdata is the place where your volume is mounted and contains the database data to be backed up (a sketch of creating the volume container and restoring follows the command below):

      docker run --volumes-from dbdata -v $(pwd):/backup <imageId> tar cvf /backup/backup.tar /dbdata
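
       Note that with the official postgres image the database files actually live under /var/lib/postgresql/data, so for this particular setup that is the path the volume container should export. A hedged sketch of creating such a volume container and restoring the archive, staying with the /dbdata path from the example above (all names are illustrative):

      # Create a data volume container exporting /dbdata (name is illustrative).
      docker run -v /dbdata --name dbdata postgres:9.4.4 /bin/true

      # Run the database with its data stored in the volume container.
      docker run -d --volumes-from dbdata --name db postgres:9.4.4

      # Restore: unpack the archive back into the volume from a throwaway
      # container (tar stored the paths as dbdata/..., so extract at /).
      docker run --rm --volumes-from dbdata -v $(pwd):/backup postgres:9.4.4 \
        tar xvf /backup/backup.tar -C /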
      
    3. Since version 1.6 you can define a log driver for your container instance. With that you could, for example, use the syslog driver so that the container's stdout/stderr ends up in the host's /var/log/syslog file. I do not know S3, but maybe the sketch below the command gives you some ideas:

      docker run --log-driver=syslog ...
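
       Building on that, a minimal sketch of shipping the host's syslog to S3, assuming rsyslog writes to /var/log/syslog and the AWS CLI is configured on the host; the bucket name is a placeholder:

      # Run the app container with the syslog log driver so stdout/stderr
      # end up in the host's /var/log/syslog.
      docker run -d --log-driver=syslog user/repo:latest

      # Example host crontab entry: upload the syslog once a day at 00:30
      # (% is special in crontab and must be escaped as \%).
      # 30 0 * * * aws s3 cp /var/log/syslog s3://my-backups/logs/syslog-$(date +\%F)

       If your docker-compose version supports it, the same thing can be set per service in the v1 compose file via a log_driver: syslog entry, so your client does not have to pass the flag by hand.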
      