How to handle database storage/backup and application logs with many linked containers?
I’ve just created my first dockerized app and I am using docker-compose to start it on my client's server:
web:
  image: user/repo:latest
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: postgres:9.4.4
It exposes a REST API (node.js) on port 8080. The REST API uses a Postgres database. It works fine. My idea is that I will give this file (docker-compose.yml) to my client and he will just run
docker-compose pull && docker-compose up -d
each time he wants to pull fresh app code from the repo (assuming he has rights to access it).
However, I have to handle two tasks: database backups and log backups.
How can I expose the database to the host (Docker host) system so that I can, for example, define a cron job that makes a database dump and stores it on S3?
I’ve read some articles about Docker container storage and Docker volumes. As I understand it, in my setup all database files will be stored inside the container (in its writable layer) and will be lost if the container is removed from the host. So I should use a Docker volume to hold the database data on the host side, right? How can I do this with the postgres image?
In my app I log all info to stdout, and to stderr in case of errors. It would be cool (I think) if those logs were "streamed" directly to some file(s) on the host system so they could be backed up to S3, for example (again by a cron job?). How can I do this? Or maybe there is a better approach?
Sorry for so many questions, but I am new to the Docker world and it's really hard for me to understand how it actually works or how it's supposed to work.
One solution:
You could execute a command in the running container to create a backup, like
docker exec db <command> > sqlDump
(note that docker exec has no --rm flag, and the -t option should be left off when redirecting output). I do not know much about Postgres, but <command> would in that case write the dump to stdout, and > sqlDump would redirect it into the file sqlDump, created in the host's current working directory. You can then include that dump file in your backup. This could be done perfectly well with a cron job defined on the host. A much better solution, however, is linked in the next paragraph.
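For example, a minimal sketch of such a cron-driven dump, assuming the running database container is reachable as db (with docker-compose the real name will be something like yourproject_db_1), the database user is postgres, the database is called mydb, and the AWS CLI is installed on the host (all of these names are assumptions about your setup):

#!/bin/sh
# backup-db.sh (hypothetical helper script, run from a host cron job such as:
#   0 2 * * * /usr/local/bin/backup-db.sh)
set -e
DUMP=/var/backups/sqlDump-$(date +%F).sql
# run pg_dump inside the db container; its stdout is redirected into a file on the host
docker exec db pg_dump -U postgres mydb > "$DUMP"
# upload the dump to S3 (bucket name is made up)
aws s3 cp "$DUMP" s3://my-backup-bucket/db/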
If you run your containers as you described above, your volumes are deleted when you delete your container. You could go for the second approach and use data volume containers, as described here. In that case you could remove and re-create e.g. the db container without losing your data. A backup can then be created very easily via a temporary container instance, following these instructions. Assuming /dbdata is the place where your volume is mounted, i.e. the directory that contains the database data to be backed up:
docker run --volumes-from dbdata -v $(pwd):/backup <imageId> tar cvf /backup/backup.tar /dbdata
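A minimal sketch of that approach, assuming you call the data container dbdata (the name is an assumption); note that the official postgres image keeps its data under /var/lib/postgresql/data, so that path plays the role of /dbdata from the example above:

# one-off: create a container whose only purpose is to own the data volume
docker create -v /var/lib/postgresql/data --name dbdata postgres:9.4.4 /bin/true

# docker-compose.yml: let the db service mount the volumes of the dbdata container
db:
  image: postgres:9.4.4
  volumes_from:
    - dbdata

The backup command above would then archive /var/lib/postgresql/data instead of /dbdata, and you could remove and re-create the db container freely without touching the data.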
Since version 1.6 you can define a log driver for your container instance. With that you could, for example, hook into the host's syslog, so the log entries end up in the host's /var/log/syslog file. I do not know S3, but maybe this gives you some ideas:
docker run --log-driver=syslog ...
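If I remember correctly, the docker-compose v1 file format exposes the same setting as a log_driver key, so the web service from your compose file could be extended roughly like this (the cron line and bucket name are assumptions):

web:
  image: user/repo:latest
  ports:
    - "8080:8080"
  links:
    - db
  log_driver: syslog

# host cron job that ships the syslog to S3 once a day (% must be escaped in crontab):
# 0 3 * * * aws s3 cp /var/log/syslog s3://my-backup-bucket/logs/syslog-$(date +\%F)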