Single versus multiple containers

I have two processes, P1 and P2, in my system that communicate very frequently with each other over TCP. For this reason they are both hosted on the same VM. I am thinking of eliminating the VM and instead hosting my system in containers on the physical machine. If I dockerize my system, I have two options:

  1. Container 1 contains P1 and Container 2 contains P2. The two containers are linked, and the communication between P1 and P2 crosses the container boundary.
  2. A single container contains both P1 and P2, and the communication stays within the container.

Kindly guide me on the merits and demerits of the two approaches above.
What is the overhead involved, in terms of communication latency, in approach 1?
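For concreteness, option 1 could be sketched with a Compose file like the one below. The service names `p1`/`p2` and the image names are placeholders, not anything from a real setup:

```yaml
# docker-compose.yml — a sketch of option 1 (two linked containers)
version: "3"
services:
  p1:
    image: myorg/p1        # hypothetical image running process P1
    networks: [appnet]
  p2:
    image: myorg/p2        # hypothetical image running process P2
    networks: [appnet]
networks:
  appnet: {}
```

On a user-defined network like this, P1 can reach P2 over TCP at the hostname `p2`, so the legacy `--link` mechanism is not needed.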

2 Solutions

    The main issue with running several processes in one container is signal management: how do you (cleanly) stop all your processes?

    That is the “PID 1 zombie reaping issue”, which is why, whenever you have to manage multiple processes, a base image like phusion/baseimage-docker can help.
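One common way to mitigate the PID 1 issue (an alternative to phusion/baseimage-docker, not a replacement for it) is to run a minimal init such as tini as PID 1, so that signals are forwarded and zombie children are reaped. A sketch, assuming an Alpine base image and a hypothetical `start-all.sh` script that launches P1 and P2:

```dockerfile
FROM alpine:3.19
# tini is a tiny init that reaps zombies and forwards signals
RUN apk add --no-cache tini
# start-all.sh is a hypothetical script that starts P1 and P2
COPY start-all.sh /usr/local/bin/start-all.sh
# tini runs as PID 1 and forwards SIGTERM etc. to the child process
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/usr/local/bin/start-all.sh"]
```

With this layout, `docker stop` sends SIGTERM to tini, which passes it on to the script instead of the signal being swallowed by an unaware PID 1.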

    The more general issue is one of microservice decoupling: if both P1 and P2 are stateful and depend on one another, keeping them in the same container makes sense.

    What is the overhead involved in terms of communication latency

    It depends on the type of process involved, but the overhead is minimal as long as both processes run on the same Docker host (even if they are in separate containers).
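To get a feel for the order of magnitude involved, here is a rough localhost TCP round-trip micro-benchmark. It is a sketch, not a rigorous measurement: run it on the bare host and then between two containers on the same host to compare. All names in it are local to the script:

```python
# Rough localhost TCP round-trip benchmark: starts an echo server
# in a background thread, then times N small ping-pong exchanges.
import socket
import threading
import time

def echo_server(server_sock):
    # Accept one connection and echo everything back until EOF.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(rounds=1000):
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.create_connection(("127.0.0.1", port))
    # Disable Nagle so small messages are sent immediately.
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(rounds):
        client.sendall(b"ping")
        client.recv(64)
    elapsed = time.perf_counter() - start
    client.close()
    return elapsed / rounds    # mean round-trip time in seconds

if __name__ == "__main__":
    print(f"mean round trip: {measure_rtt() * 1e6:.1f} us")
```

On a typical host, loopback round trips are in the tens of microseconds; container-to-container traffic on the same host usually stays in the same ballpark.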

    It’s an issue of scaling too. If you want to auto-scale P1 when, say, usage of P1 crosses a certain threshold (heap, throughput), then with the single-container approach you would be duplicating P2 as well, although that may not be required.

    Thus, one container per process scales better and provides fine-grained management (orchestration) control.

    As far as latency is concerned, it really depends on your deployment architecture for the containers. If both containers are hosted on the same machine, latency is going to be insignificant; if they are in, say, two different AWS availability zones, it starts having an impact.
