launching adhoc docker instances: Is it recommended to launch a docker instance per request?

I have either lighttpd or Nginx running on my web server as a reverse proxy. I support a number of subdomains with very low usage. When a request for a subdomain arrives, I want to start the corresponding Docker instance. Preferably I’d like to launch them dynamically, so that if more than one user arrives I would launch one instance per user and/or a shared instance (determined by configuration).

One Solution

    Originally I said this should work well for low-traffic sites, but upon further thought, no, this is a bad idea.

    Each time you launch a Docker container, it adds a read-write layer on top of the image. Even if very little data is written, the layer exists, and each request would generate one. When a single user visits a website, rendering the page can generate tens to thousands of requests: for CSS, for JavaScript, for each image, for fonts, for AJAX, and each of these would create one of those read-write layers.
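    A minimal illustration of that point (the image and container names here are just placeholders): every `docker run` leaves behind an exited container that still owns its own read-write layer.

    ```sh
    # Two "requests", each served by a fresh container that exits immediately.
    docker run --name req-1 nginx:alpine true
    docker run --name req-2 nginx:alpine true

    # Both containers are still present after exiting, each keeping its
    # read-write layer on disk.
    docker ps -a --filter "status=exited"
    ```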

    Right now there is no automatic cleanup of these read-write layers: they persist even after the Docker container has exited, so by default nothing is ever deleted.

    So even for a single low-traffic site, you would find your disk use growing steadily over time. You could add your own automated cleanup, along the lines of the sketch below.
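    For example, the cleanup could simply remove containers that have already exited; this is only a sketch, and newer Docker releases offer `docker container prune` (or starting containers with `--rm`) for the same purpose.

    ```sh
    # Remove every container that has already exited, reclaiming the
    # read-write layers they left behind.
    docker ps -aq --filter "status=exited" | xargs -r docker rm
    ```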

    Then there is the second problem: anything uploaded to the website would not be available to any other requests unless it was written to some out-of-container shared storage. That’s pretty easy to do with S3 or a separate and persistent database service, but it does start showing the weakness in the “one new Docker container per request” approach. If you’re going to have some persistent services, why not make the Docker containers more persistent and run them longer?
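    As a sketch of that kind of shared storage (the paths and image name are hypothetical), each container serving the subdomain could mount the same host directory, so uploads outlive any single container and are visible to later requests:

    ```sh
    # Every container for this subdomain shares one host directory for
    # uploads, instead of writing into its own read-write layer.
    docker run -d --name example-subdomain \
      -v /srv/example.com/uploads:/var/www/uploads \
      my-app-image
    ```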
