Is there a way to make docker containers truly fault-tolerant?

I mean fault tolerance as in running a backup container and constantly synchronizing the active container's state (memory) to the passive one, so that in case of failure I can fail over to the other container without losing active network connections or the state of the application running inside it.

I know Docker Swarm exists and can make my containers highly available by restarting them on other nodes in case of node failure, or even make the service fault-tolerant through replication, but that only works if my service is either stateless or saves its state in shared network storage or a database.

I'm looking for a solution like vSphere FT, just for Docker.
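The closest primitive I've found so far is Docker's experimental checkpoint/restore support, which uses CRIU to snapshot a running container's process state. It's a one-shot cold snapshot rather than the continuous memory synchronization vSphere FT does, so it's only a partial answer, but a minimal sketch of it (assuming a daemon started with `"experimental": true` and `criu` installed on the host) looks like this:

```shell
# Run a stateful container: a counter held only in memory.
docker run -d --name counter busybox \
    sh -c 'i=0; while true; do i=$((i+1)); echo $i; sleep 1; done'

# Snapshot the running process state (memory, file descriptors).
# By default this also stops the container; --leave-running keeps it up.
docker checkpoint create counter cp1

# Restore from the checkpoint: the counter resumes where it left off
# instead of restarting from 0.
docker start --checkpoint cp1 counter
```

In principle one could ship such checkpoints to a standby host and restore there, but that still loses any state changed after the last snapshot and drops live TCP connections, which is exactly the gap I'm asking about.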
