Single versus multiple containers

I have two processes, P1 and P2, in my system that communicate very frequently with each other over TCP. For this reason they are both hosted on the same VM. I am thinking of eliminating the VM and instead hosting my system in containers on the physical machine. If I dockerize my system, I have two options:

  1. Container 1 contains P1 and Container 2 contains P2. The two containers are linked, and the communication between P1 and P2 crosses the container boundary.
  2. A single container contains both P1 and P2, and the communication stays within the container.

Kindly guide me on the merits and demerits of the above 2 approaches.
What is the overhead involved in terms of communication latency in approach 1?
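For concreteness, here is roughly how I picture approach 1 in docker-compose (the service names, image names, and port are just placeholders):

```yaml
# Approach 1 as a compose file: one container per process.
# Image names, service names, and the port are placeholders.
version: "2"
services:
  p1:
    image: myorg/p1          # hypothetical image running P1
    depends_on:
      - p2                   # P1 reaches P2 at tcp://p2:9000 via compose DNS
  p2:
    image: myorg/p2          # hypothetical image running P2
    expose:
      - "9000"               # visible only on the internal network, not the host
```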

2 Answers

    The main issue with several processes in one container is signal management: how do you (cleanly) stop all your processes?

    That is the “PID 1 zombie reaping issue”, which is why, whenever you have to manage multiple processes in one container, a base image like phusion/baseimage-docker can help.
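    As a sketch of that pattern (the image tag and the p1.sh/p2.sh start scripts are assumptions for illustration, not something from the question): phusion/baseimage-docker runs my_init as PID 1, which reaps zombies and forwards signals to each process registered as a runit service.

```dockerfile
# Sketch: two supervised processes in one container via phusion/baseimage-docker.
# The base image tag and the p1.sh/p2.sh start scripts are placeholders.
FROM phusion/baseimage:0.9.18
CMD ["/sbin/my_init"]            # my_init is PID 1: reaps zombies, forwards signals

# Register P1 and P2 as runit services; my_init starts and supervises both.
RUN mkdir -p /etc/service/p1 /etc/service/p2
COPY p1.sh /etc/service/p1/run
COPY p2.sh /etc/service/p2/run
RUN chmod +x /etc/service/p1/run /etc/service/p2/run
```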

    The more general issue is one of microservice decoupling: if P1 and P2 are stateful and depend on one another, keeping them in the same container can make sense.

    “What is the overhead involved in terms of communication latency?”

    It depends on the type of processes involved, but the overhead is minimal when both processes run on the same Docker host (even if they are in separate containers): the traffic stays on the host, over the Docker bridge and veth pair, and never crosses a physical network.
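    If you want a concrete number for your own workload, one rough way to measure it (the iperf3 image and the names here are my own suggestion, not part of the question) is to benchmark TCP between two containers on the same host, then repeat the benchmark inside a single container and compare:

```bash
# Benchmark cross-container TCP on one host (sketch; names are placeholders).
docker network create p1p2-net
docker run -d --name perf-server --network p1p2-net networkstatic/iperf3 -s
docker run --rm --network p1p2-net networkstatic/iperf3 -c perf-server
docker rm -f perf-server && docker network rm p1p2-net   # clean up
```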

    It’s an issue of scaling, too. If you want to auto-scale P1 when, say, its usage crosses a certain threshold (heap, throughput), then with the single-container approach you would be duplicating P2 as well, even though that may not be required.

    Thus, one process per container scales better and provides fine-grained management (orchestration) control.
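    For example, with the two-service compose layout sketched in the question, only P1 gets extra replicas (the service name is the hypothetical one from that sketch):

```bash
# Scale only the p1 service; p2 keeps its single instance.
docker-compose up -d --scale p1=3
```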

    As far as latency is concerned, it really depends on your deployment architecture for the containers. If both containers are hosted on the same machine, the latency will be insignificant, while if they are in, say, two different AWS availability zones, it starts to have an impact.
