Controlling access to multiple Docker containers on the same host

I’ve been tasked with setting up multiple isolated, client-specific instances of a web application on a single Amazon EC2 instance using Docker. The base application is fundamentally the same for each instance, but has been customized for each client.

The goal is as follows:

1) Each container would be secured and “sandboxed” such that no container could affect the others or the host. It’s my understanding that Docker does this anyway, but I just want to be sure.

2) Each container would be “complete”, with its own users, database, access keys, etc.

3) A user associated with one container should have no way to access any aspect of any other container or the host.

I’ve searched similar questions, and some touch on the idea but don’t completely answer my question.

I know this likely goes against the Docker philosophy, but that aside: is this feasible, and what would be the best approach? In the past, when there was only one client per host, we used SSH tunnels to access the relevant ports. Is there a safe way to do this with multiple clients on the same host? Or would this setup be better served by a reverse proxy such as Nginx or Apache? I should specify that we are currently looking at having only one domain to access this host.

The question boils down to: how do I restrict remote access on a per-container level when running multiple client containers on a single host?

Any help is much appreciated.
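
For reference, here is roughly what I had in mind for the SSH-tunnel approach (a rough, untested sketch; the image names, service names, and ports are placeholders I made up). Each client’s container binds its port to the loopback interface only, so nothing is reachable from outside the host without a tunnel:

```yaml
# docker-compose.yml — hypothetical sketch, not a working config.
# Image names and ports are placeholders for illustration only.
version: "2"

services:
  client-a:
    image: mycompany/webapp:client-a   # per-client customized build (assumed)
    # Publish only on the loopback interface: the port is not reachable
    # from outside the host, so remote access requires an SSH tunnel.
    ports:
      - "127.0.0.1:8081:80"

  client-b:
    image: mycompany/webapp:client-b
    ports:
      - "127.0.0.1:8082:80"
```

A client would then connect with something like `ssh -L 8081:127.0.0.1:8081 clienta@host`, and each client’s SSH account could be locked down separately. But I am unsure whether this is safe with multiple clients sharing the host.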

  • One solution

    It is feasible but the solution is too big to contain in a typical Stack Overflow answer. This is part of the value that PaaS providers like dotCloud, Heroku, and others provide.

    You can try to roll your own multi-tenant solution, perhaps starting from something like Deis, but even the Deis team warns against it.

    Security is hard, especially when you are trying to make things easy for your customers.

    You may find this series in the dotCloud blog helpful:

    • Episode 1: Kernel Namespaces (docker helps)
    • Episode 2: cgroups (docker helps)
    • Episode 3: AUFS (docker helps)
    • Episode 4: GRSEC (your kernel, up to you)
    • Episode 5: Distributed routing (your network, up to you, though Docker Swarm may help eventually)
    • Episode 6: Memory Optimization (up to your users)
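
    For the parts where Docker does help (kernel namespaces and cgroups), per-tenant resource and capability limits can be sketched in a Compose file. The numbers below are illustrative, not recommendations, and the image name is a placeholder:

    ```yaml
    # Hypothetical per-client service definition; all limits are illustrative.
    version: "2.2"

    services:
      client-a:
        image: mycompany/webapp:client-a   # placeholder image name
        mem_limit: 512m          # cgroup memory cap (Episode 2 territory)
        cpus: 0.5                # cap CPU share for this tenant
        pids_limit: 200          # mitigate fork bombs inside the container
        cap_drop:
          - ALL                  # drop every Linux capability the app doesn't need
        read_only: true          # read-only root filesystem
        tmpfs:
          - /tmp                 # writable scratch space only where required
    ```

    Note this covers only the container-side hardening; the GRSEC and network-routing episodes above remain your responsibility regardless of how containers are configured.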

    Notable bias: I used to work for dotCloud, and now work for Docker.