How should I organize containers in Docker?

I am developing a new service and setting up the server for it.

On my server, the base system is CentOS 7. I installed Docker, pulled the CentOS image, and set up a web-server container running Django with uWSGI and nginx.

Now I want to bring up a database service (PostgreSQL) as well. What is the best way to do it?

    1. Install PostgreSQL in my existing container (alongside the web server), or

    2. Build a new container only for the database?

    I would like to know the advantages and weak points of each option.

2 Answers:

If you want to keep the data in the database after a restart, the data shouldn’t live inside the container but on the host. I will assume you want the database in a container as well.
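For reference, one common middle ground is to keep PostgreSQL in a container but bind-mount its data directory from the host so the data survives restarts. A minimal sketch (the container name `db` and host path `/srv/pgdata` are assumptions, not from the original question):

```shell
# Run the official postgres image with its data directory bind-mounted
# from the host, so dropping or recreating the container keeps the data.
docker run -d --name db \
  -v /srv/pgdata:/var/lib/postgresql/data \
  postgres
```

This requires a Docker daemon and is shown only to illustrate the host-side persistence the answer describes.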

Setting up a second container is a lot more work. You need a way for the containers to learn each other’s address. The address changes each time you start a container, so you need some scripts on the host: the host must find out the IP addresses and inform the containers.

The containers can then update their /etc/hosts files with the address of the other container. This is a nice solution when you want to emulate different servers and perform resilience tests, but you will need quite some bash knowledge before you get it running well.
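A minimal sketch of the host-side script this describes, assuming running containers named `db` and `web` (both names are assumptions):

```shell
# Ask Docker for the database container's IP on the default bridge,
# then append a hosts entry inside the web container.
DB_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' db)
docker exec web sh -c "echo '$DB_IP  db' >> /etc/hosts"
```

This must be re-run whenever the `db` container restarts, since its IP can change, which is exactly the maintenance burden the answer warns about.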

In almost all other situations, choose a single container. Installing everything in one container is easier to set up and easier to develop against afterwards. Docker is just the environment in which you do your real work; tooling should help with that work, not take all your time and effort.

It’s idiomatic to use two separate containers. It is also simpler: if you have two or more processes in a container, you need a parent process to monitor them (typically people use a process manager such as supervisord). With only one process per container, you won’t need to do this.
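If you did go with the single-container option, the supervisord setup the answer mentions might look roughly like this (program names, binary paths, and the data directory are assumptions for a CentOS-style layout, not from the original question):

```ini
; supervisord.conf sketch: run nginx, uWSGI, and PostgreSQL under one
; parent process so the container has a single monitored entrypoint.
[supervisord]
nodaemon=true

[program:uwsgi]
command=/usr/bin/uwsgi --ini /etc/uwsgi.ini

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"

[program:postgres]
command=/usr/bin/postgres -D /var/lib/pgsql/data
user=postgres
```

supervisord then becomes the parent responsible for restarting and reaping these processes.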

By monitoring, I mainly mean making sure that all processes shut down correctly when the container receives a SIGTERM signal. If you don’t handle this properly, you will end up with zombie processes. You won’t need to worry about this if you have only a single process or use a process manager.
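To make the signal-forwarding concern concrete, here is a sketch of a shutdown-aware entrypoint script in POSIX sh (the `sleep 30` stands in for a real server process; the self-sent SIGTERM simulates `docker stop`):

```shell
# Run the service as a child, forward SIGTERM to it, then reap it
# so no zombie process is left behind.
sleep 30 &                       # stand-in for the real server process
child=$!
trap 'kill -TERM "$child"' TERM

# Simulate `docker stop` delivering SIGTERM to this shell:
( sleep 1; kill -TERM $$ ) &

wait "$child" || true            # interrupted by SIGTERM; trap forwards it
wait "$child" || true            # reap the child after it exits
status="clean shutdown"
echo "$status"
```

Without the `trap`, the shell would ignore the signal's intent, the child would keep running, and a later forced kill could leave it unreaped.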

Further, as Greg points out, separate containers let you orchestrate and schedule them independently, so you can update, change, scale, or restart each container without affecting the other.
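The two-container option can be sketched with a user-defined network, which gives the containers name-based resolution instead of the hosts-file scripting described in the first answer (the names `appnet`, `db`, `web`, and the image `mywebimage` are assumptions):

```shell
# Two containers on a shared user-defined network; Docker's embedded
# DNS lets "web" reach the database simply at the hostname "db".
docker network create appnet
docker run -d --name db  --network appnet postgres
docker run -d --name web --network appnet -p 80:80 mywebimage

# Each container can be restarted or replaced without touching the other:
docker restart db
```

This requires a Docker daemon and a version that supports user-defined networks, and is meant as an illustration of the independent-lifecycle point, not a full deployment.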
