In Docker Swarm mode is there any point in replicating a service more than the number of hosts available?

I have been looking into the new Docker Swarm mode that will be available in Docker 1.12. In this Docker Swarm Mode Walkthrough video, they create a simple Nginx service that is composed of a single Nginx container. In the video, they have 4 nodes in the Swarm cluster. During the scaling demonstration, they increase the replication factor to 10, thus creating 10 copies of the Nginx container across all 4 machines in the cluster.
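For reference, the commands in that kind of walkthrough look roughly like this (a sketch only; the service name web and the published port are my own placeholders, not taken from the video):

    docker swarm init                              # on the first manager node
    docker service create --name web --replicas 1 -p 80:80 nginx
    docker service scale web=10                    # 10 tasks spread across the 4 nodes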

I get that the video is just a demonstration, but in the real world, what is the point of creating more replicas of a container (or service) than there are nodes in the Swarm cluster? It seems pointless, since two containers on the same machine would be sharing that machine's finite computing resources anyway. I don't get what the benefit is.

So my question is: is there any real-world benefit to replicating a Docker service or container beyond the number of nodes in the Swarm cluster?

    Thanks

4 Solutions collected from the web for "In Docker Swarm mode is there any point in replicating a service more than the number of hosts available?"

    It depends on how the application handles threading and multiple requests. A single-threaded application, or a job that only handles one request at a time, may use only a fraction of the host's resources and can benefit from running multiple instances on a single host. An application that has been tuned to process requests concurrently and already fully utilizes the host will see no benefit, and will in fact pay a penalty, because the extra instances take resources away from each other.
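    As a sketch of the first case, one way to run several single-threaded replicas per host without them starving each other is to reserve and cap CPU per task (the image name worker:latest and the numbers are placeholders of mine, not taken from this answer):

        # e.g. 8 tasks on a 4-node swarm, each task held to half a CPU
        docker service create --name worker \
          --replicas 8 \
          --reserve-cpu 0.5 \
          --limit-cpu 0.5 \
          worker:latest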

    One advantage is performing live, zero-downtime software updates. See the Docker 1.12-rc2 Swarm tutorial on rolling updates.
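    A minimal sketch of such a rolling update, reusing the placeholder service name web from the sketch above (the image tags are also placeholders):

        # replace 2 tasks at a time, pausing 10s between batches
        docker service update \
          --image nginx:1.11 \
          --update-parallelism 2 \
          --update-delay 10s \
          web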

    Suppose you have RabbitMQ or another queue system under a heavy data load. You can start more worker containers than there are nodes to work through the backlog on your RabbitMQ.
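    A sketch of that setup, assuming a RabbitMQ service already running in the swarm under the name rabbitmq and a hypothetical consumer image (the image name and the environment variable are placeholders):

        # more worker tasks than nodes, all draining the same queue
        docker service create --name queue-worker \
          --replicas 12 \
          -e RABBITMQ_HOST=rabbitmq \
          my-queue-worker:latest

        # scale further if the queue keeps growing
        docker service scale queue-worker=20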

    Hardware resource constraints are not the only thing to consider when replicating your services.

    A simple example would be a service that serves security details. Its resource consumption is low (read a record from a DB or cache and send it out), but if 20 or 30 requests arrive at the same instance at once, they will queue up.

    Yes, there are better ways to implement my example, but I believe it is good enough to illustrate why one might replicate a service on the same host/node.
