Container with app + DB

I’ve been running some tests with Docker, and so far I’m wondering why it’s considered good practice to separate the DB and the app into two containers.

Having two containers seems cumbersome to manage, and I don’t really see the value in it. I like the idea of a self-sustaining container per app instead.

2 Solutions

    One reason is the separation of data storage and application. If you put each in its own container, you can update them independently. In my experience this is a common need, because the application usually evolves faster than the underlying database.

    It also frees you to run the containers in different places, which might matter for your operations, or to run multiple containers from the same database image with different applications.

    Often it is also useful to be able to scale the UI from one instance to multiple instances, all connected to the same database (or cache instance or HTTP backend). This is mentioned briefly in the Docker best practices.
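
    For example, a minimal sketch of that layout, using a hypothetical application image (my-app): one database container serving several app containers over a user-defined network:

        docker network create demo-net
        docker run -d --net demo-net --name db postgres:9.4
        # "my-app" is an illustrative image name; DB_HOST is an assumed app setting
        docker run -d --net demo-net --name app1 -e DB_HOST=db my-app
        docker run -d --net demo-net --name app2 -e DB_HOST=db my-app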

    I also understand the urge to run multiple processes in one container. That’s why so many minimalist init systems/supervisors like s6 have appeared lately. I prefer this for demos of applications that require a couple of things, like an nginx frontend, a database, and maybe a Redis instance. But you could also write a basic docker-compose file and run the demo with multiple containers, as sketched below.
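
    A rough sketch of such a docker-compose.yml (v1 compose syntax; the service and image choices are illustrative):

        web:
          image: nginx
          ports:
            - "80:80"
        db:
          image: postgres:9.4
        cache:
          image: redis

    Running docker-compose up -d then starts the whole demo, and docker-compose stop tears it down again.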

    It depends on what you consider your “DB”: the database application, or the content?

    The latter is easy: the content needs to be persisted beyond the lifetime of the application. The convention used to be a “data” container, which simplified linking it with the application (e.g. using the --volumes-from parameter of the Docker Engine create command). With Docker 1.9 there is a new volume API which has superseded the concept of “data” containers. Either way, you should never store your data in the container’s overlay filesystem, for performance as much as for persistence.
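
    For illustration, here is the new volume API next to the older data-container convention (the container and volume names are made up):

        # Docker 1.9+: a named volume managed by the volume API
        docker volume create --name pg-data
        docker run -d --name db -v pg-data:/var/lib/postgresql/data postgres:9.4

        # The older convention: a dedicated "data" container
        docker create --name db-data -v /var/lib/postgresql/data postgres:9.4 /bin/true
        docker run -d --name db2 --volumes-from db-data postgres:9.4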

    If you are referring to the database application, you enter a semi-religious debate with the microservices crowd. Docker is built to run a single process. It is built for 12-factor apps. It is built for microservices. It is definitely possible to run more than one process in a container, but then you have to take on the additional complexity of managing/monitoring those processes (e.g. using an init process like supervisord), dealing with logging, and so on.
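
    As a sketch of what that extra machinery looks like (the app binary is a placeholder), a supervisord-based image might be built like this:

        # supervisord.conf
        [supervisord]
        nodaemon=true

        [program:app]
        command=/usr/local/bin/my-app   ; placeholder application binary

        [program:db]
        command=/usr/bin/mongod         ; example database process

        # Dockerfile
        FROM ubuntu:14.04
        RUN apt-get update && apt-get install -y supervisor mongodb-server
        COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
        CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]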

    I’ve delivered both. If you are managing the container deployment (e.g. you are hosting the app), it is actually less work to use multiple containers. This lets you use Docker’s abstraction layers for networking and persistent storage, and it provides maximum portability as you scale the application (for example, you might use the convoy or flocker volume drivers, or an overlay network for hosting containers across multiple servers). If you are developing a product for distribution, it is more convenient to deliver a single Docker repository (with one image). This minimizes support costs as you guide customers through deployment.
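
    For example (the names are illustrative, and multi-host networking assumes the engines are configured with a key-value store such as Consul):

        # an overlay network spanning multiple Docker hosts
        docker network create -d overlay multi-host-net

        # a named volume backed by a pluggable driver such as convoy
        docker volume create -d convoy --name shared-data

        docker run -d --net multi-host-net -v shared-data:/data my-app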
