How should I organize containers in Docker?

I am developing new content, so I am building a server for it.

On my server, the base system is CentOS 7. I installed Docker, pulled the CentOS image, and set up a “web server” container running Django with uWSGI and nginx.

Now I want to bring up another service, a database using PostgreSQL. What is the best way to do it?

    1. Install PostgreSQL in my existing container (with the web server).

    2. Build a new container just for the database.

    I would like to know the advantages and weak points of each approach.


    If you want the data in the database to survive a restart, it shouldn’t be stored only inside a container but on the host. I will assume you want the database in a container as well.

    Setting up a second container is a lot more work. You need a way for the containers to learn each other’s address. The address changes each time you start a container, so you need some scripts on the host: the host must find out the IP addresses and inform the containers.
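    As a sketch of the host-side scripting described above, the container’s IP can be read with `docker inspect` and written into a hosts file. The `add_host_entry` helper, the container name `db`, and the file paths are illustrative, not from the original answer:

```shell
#!/bin/sh
# Sketch: append "IP hostname" entries to a hosts file, skipping duplicates.
# In a real setup the IP would come from the host, e.g.:
#   docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db
add_host_entry() {
    ip="$1"; name="$2"; hosts_file="$3"
    touch "$hosts_file"
    # Only add the entry if the hostname is not already present.
    grep -q " ${name}\$" "$hosts_file" || printf '%s %s\n' "$ip" "$name" >> "$hosts_file"
}

rm -f /tmp/hosts.demo
add_host_entry 172.17.0.2 db /tmp/hosts.demo
add_host_entry 172.17.0.2 db /tmp/hosts.demo   # duplicate call is a no-op
```

    The host would run something like this after each container start and copy the result into each container’s /etc/hosts.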

    The containers might then update their /etc/hosts files with the address of the other container. When you want to emulate different servers and perform resilience tests, this is a nice solution. You will need a fair amount of bash knowledge before you get this running well.

    In almost all other situations, choose one container. Installing everything in one container is easier both to set up and to develop with afterwards. Docker is just the environment in which you do your real work; the tooling should help you with that work, not take all your time and effort.
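    A single-container setup like the one recommended here could be sketched with a Dockerfile; the package names, paths, and the supervisord entrypoint below are assumptions for a CentOS 7 base, not a tested recipe:

```dockerfile
FROM centos:7

# Web stack and database in one image (assumed package names).
RUN yum install -y epel-release && \
    yum install -y nginx postgresql-server python-pip && \
    pip install django uwsgi supervisor

# One process manager starts nginx, uWSGI, and postgres together.
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]
```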

    It’s idiomatic to use two separate containers. It is also simpler: if you have two or more processes in a container, you need a parent process to monitor them (typically people use a process manager such as supervisord). With only one process, you don’t need to do this.
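    As a concrete example of the process-manager approach mentioned above, a minimal supervisord configuration for two programs might look like this (the commands and paths are illustrative):

```ini
[supervisord]
nodaemon=true

[program:uwsgi]
command=/usr/bin/uwsgi --ini /etc/uwsgi.ini
autorestart=true

[program:postgres]
command=/usr/bin/postgres -D /var/lib/pgsql/data
user=postgres
autorestart=true
```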

    By monitoring, I mainly mean making sure that all processes are shut down correctly when the container receives a SIGTERM (as sent by `docker stop`). If you don’t do this properly, you can end up with zombie processes. You won’t need to worry about this if you only have a single process or use a process manager.

    Further, as Greg points out, having separate containers allows you to orchestrate and schedule them separately, so you can update, change, scale, or restart each container without affecting the other.
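    The separate-container approach is commonly written down as a Docker Compose file. In this sketch the service names, image tag, and password variable are assumptions; the named volume keeps the database data across container restarts, and Compose’s networking lets `web` reach the database at the hostname `db`:

```yaml
version: "2"
services:
  web:
    build: .                 # the Django + uWSGI + nginx image
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```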
