Single versus multiple containers

I have 2 processes P1 and P2 in my System that communicate very frequently with each other over TCP. For this reason they are both hosted on the same VM. I am thinking of eliminating the VM and instead hosting my System in containers on the physical machine. If I dockerize my System, I have 2 options:

  1. Container 1 contains P1, Container 2 contains P2. The 2 containers will be linked. The communication between P1 and P2 will be across the container boundary.
  2. One single Container will contain P1 and P2. The communication will remain within the container.

Kindly guide me on the merits and demerits of the above 2 approaches.
What is the overhead involved in terms of communication latency in approach 1?
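For reference, approach 1 could be sketched with a Compose file like the following. The image names, the environment variable names, and port 9000 are all placeholders for illustration; on a shared Compose network, P1 can reach P2 by its service name via Docker's built-in DNS.

```yaml
# Hypothetical docker-compose.yml for approach 1:
# P1 and P2 in separate containers on the default shared network.
services:
  p1:
    image: myorg/p1        # placeholder image name
    environment:
      - P2_HOST=p2         # P1 resolves P2 by service name
      - P2_PORT=9000       # assumed listening port for P2
  p2:
    image: myorg/p2        # placeholder image name
    expose:
      - "9000"             # reachable from p1, not published to the host
```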


    The main issue with several processes in one container is signal management: how do you (cleanly) stop all your processes?

    That is the “PID 1 zombie reaping issue”, which is why, whenever you have to manage multiple processes, a base image like phusion/baseimage-docker can help.
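If you do go with a single container, one common mitigation is to run a minimal init such as tini as PID 1 so that signals are forwarded and zombie children are reaped. A sketch of a Dockerfile doing this (the base image and the `start.sh` launcher script are assumptions for illustration):

```dockerfile
# Sketch: run tini as PID 1 to forward signals and reap zombies.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y tini && rm -rf /var/lib/apt/lists/*
COPY start.sh /start.sh                 # hypothetical script that launches P1 and P2
ENTRYPOINT ["/usr/bin/tini", "--"]      # tini becomes PID 1
CMD ["/start.sh"]
```

Alternatively, `docker run --init` injects the same behavior without changing the image.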

    The more general issue is one of microservice decoupling: if both P1 and P2 are stateful and depend on one another, keeping them in the same container makes sense.

    What is the overhead involved in terms of communication latency

    It depends on the type of process involved, but the overhead is minimal as long as both processes are running on the same Docker host (even if they are in separate containers).
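You can get a feel for that order of magnitude yourself. The sketch below times TCP round trips over the loopback interface, which is comparable to two containers talking over a Docker bridge on one host (typically tens of microseconds per round trip, versus milliseconds across machines):

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one connection and echo everything back."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

# Bind an echo server on loopback, letting the OS pick a free port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"warmup")
cli.recv(64)  # warm up the connection before timing

# Measure the average round-trip time over many small messages.
N = 1000
start = time.perf_counter()
for _ in range(N):
    cli.sendall(b"ping")
    cli.recv(64)
elapsed = time.perf_counter() - start
rtt_us = elapsed / N * 1e6
print(f"average round trip: {rtt_us:.1f} us")
cli.close()
```

The absolute numbers depend on your kernel and hardware, but loopback (and same-host bridge) latency stays far below anything you would see crossing a physical network.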

    It’s an issue of scaling too. If you want to auto-scale P1 when, say, its usage crosses a certain threshold (heap, throughput), then with the single-container approach you would be duplicating P2 as well, although that may not be required.

    Thus, one container per process scales better and provides fine-grained management (orchestration) control.
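With the one-process-per-container layout, scaling P1 independently is a one-line concern. A hedged Compose fragment (image names are placeholders) showing the idea:

```yaml
# Sketch: with one process per container, P1 scales alone.
services:
  p1:
    image: myorg/p1          # placeholder image
    deploy:
      replicas: 3            # run three copies of P1, one of P2
  p2:
    image: myorg/p2          # placeholder image
```

The same effect is available ad hoc with `docker compose up --scale p1=3`.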

    As far as latency is concerned, it really depends on your deployment architecture for the containers. If both containers are hosted on the same machine, latency is going to be insignificant, while if they are in, say, two different AWS availability zones, it starts to have an impact.
