Single versus multiple containers

I have two processes, P1 and P2, in my system that communicate very frequently with each other over TCP. For this reason they are both hosted on the same VM. I am thinking of eliminating the VM and instead hosting my system in containers on the physical machine. If I dockerize my system, I have two options:

  1. Container 1 contains P1, Container 2 contains P2. The two containers will be linked, and the communication between P1 and P2 will cross the container boundary.
  2. A single container will contain both P1 and P2. The communication will remain within the container.

Kindly guide me on the merits and demerits of the above 2 approaches.
What is the overhead involved in terms of communication latency in approach 1?
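For context, approach 1 would look roughly like this in a Compose file (service names, image names, and the port here are placeholders, not my real ones):

```yaml
# docker-compose.yml sketch for approach 1 (hypothetical names).
# Compose puts both services on a shared network, so P1 can reach P2
# by the DNS name "p2" instead of an explicit --link.
services:
  p1:
    image: myorg/p1       # placeholder image for process P1
    environment:
      - P2_HOST=p2        # P1 connects to tcp://p2:<port>
  p2:
    image: myorg/p2       # placeholder image for process P2
    expose:
      - "9000"            # placeholder TCP port P2 listens on
```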

2 Answers

    The main issue with several processes in one container is signal management: how do you (cleanly) stop all your processes?

    That is the “PID 1 zombie reaping issue”: whatever runs as PID 1 in the container must reap orphaned children and forward signals to them. This is why, whenever you have to manage multiple processes, a base image like phusion/baseimage-docker can help.
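To illustrate the problem: without an init like phusion's, a single-container setup typically needs an entrypoint that forwards signals itself. A minimal, untested sketch, with `./p1` and `./p2` standing in for the real binaries:

```shell
#!/bin/sh
# Runs as PID 1 inside the container. Without the trap, `docker stop`
# sends SIGTERM to this shell only; the children never see it and are
# SIGKILLed when the grace period expires.
./p1 & P1_PID=$!
./p2 & P2_PID=$!

trap 'kill -TERM "$P1_PID" "$P2_PID"; wait "$P1_PID" "$P2_PID"' TERM INT

# wait is interrupted when a trapped signal arrives, then the handler runs
wait "$P1_PID" "$P2_PID"
```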

    The more general issue is one of microservice decoupling: if both P1 and P2 are stateful and depend on one another, keeping them in the same container makes sense.

    What is the overhead involved in terms of communication latency

    It depends on the type of process involved, but the overhead is minimal as long as both processes are running on the same Docker host (even if they are in separate containers).
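To get a feel for the order of magnitude, you can benchmark small TCP echoes over loopback. This is only illustrative (actual numbers depend on the host), but same-host container-to-container traffic takes a similar software-only path through veth/bridge devices:

```python
import socket
import threading
import time

def measure_rtt(n=1000):
    """Return the median round-trip time (seconds) for small TCP echoes."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def echo():
        conn, _ = server.accept()
        with conn:
            # Payloads are tiny, so a single recv per echo is fine here
            while data := conn.recv(64):
                conn.sendall(data)

    threading.Thread(target=echo, daemon=True).start()

    client = socket.create_connection(("127.0.0.1", port))
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    rtts = []
    for _ in range(n):
        t0 = time.perf_counter()
        client.sendall(b"ping")
        client.recv(64)
        rtts.append(time.perf_counter() - t0)
    client.close()
    server.close()
    return sorted(rtts)[n // 2]

if __name__ == "__main__":
    print(f"median TCP round trip on loopback: {measure_rtt() * 1e6:.1f} µs")
```

On typical hardware this lands in the tens of microseconds, i.e. negligible next to most application-level work.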

    It’s an issue of scaling too. If you want to auto-scale P1 when, say, its usage crosses a certain threshold (heap, throughput), then with the single-container approach you would be duplicating P2 as well, even though that may not be required.

    Thus, one process per container scales better and provides fine-grained management (orchestration) control.
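For example, with Compose and a service named `p1` for process P1 (a hypothetical name), only P1 gets extra replicas while P2 stays at one:

```shell
# Scale only the hot service; p2 keeps a single container.
docker compose up -d --scale p1=3
```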

    As far as latency is concerned, it really depends on your deployment architecture for the containers. If both containers are hosted on the same machine, latency is going to be insignificant, while if they are in, say, two different AWS availability zones, it starts having an impact.
