Launching ad hoc Docker instances: is it recommended to launch a Docker instance per request?

I have either lighttpd or Nginx running on my web server as a reverse proxy, and I support a number of subdomains with very low usage. When a request for a subdomain arrives, I want to start the corresponding Docker instance. Preferably I'd like to launch them dynamically, so that if more than one user arrives I would launch one instance per user, and/or a shared instance (determined by configuration).
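
Something along these lines is what I have in mind (a rough sketch; the image name, port, and subdomain are placeholders):

```sh
# Start a container for a subdomain only if one is not already running;
# the reverse proxy would then forward traffic for that subdomain to it.
SUB="blog"
if [ -z "$(docker ps -q --filter "name=site-$SUB" --filter "status=running")" ]; then
  docker run -d --name "site-$SUB" -p 8081:80 my-site-image
fi
```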

One answer:

    Originally I said this should work well for low-traffic sites, but on further thought, no, this is a bad idea.

    Each time you launch a Docker container, Docker creates a new read-write layer on top of the image. Even if very little data is written, the layer exists, and each request would generate one. When a single user visits a website, rendering the page can generate tens to thousands of requests: for CSS, for JavaScript, for each image, for fonts, for AJAX. Each of these would create one of those read-write layers.
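
    To make that concrete, here is a minimal sketch of the accumulation (using the stock nginx:alpine image as a stand-in for your site image):

    ```sh
    # Simulate a handful of "requests", one container each. Every
    # `docker run` creates a container with its own read-write layer.
    for i in 1 2 3 4 5; do
      docker run --name "req-$i" nginx:alpine true
    done

    # All five containers have exited, but each one still holds its
    # writable layer on disk:
    docker ps -a --filter "name=req-"
    ```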

    By default there is no automatic cleanup of these read-write layers: they persist even after the container has exited, because Docker keeps stopped containers and their data unless you explicitly remove them. Nothing is lost, but nothing is reclaimed either.

    So even for a single low-traffic site, you would find your disk usage growing steadily over time. You could add your own automated cleanup, as sketched below.
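
    For example (the image name my-site-image is a placeholder):

    ```sh
    # Option 1: tell Docker to delete each container, and with it the
    # read-write layer, as soon as it exits.
    docker run --rm my-site-image

    # Option 2: periodically sweep away all stopped containers, e.g.
    # from cron:
    #   0 * * * *  docker container prune -f
    docker container prune -f
    ```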

    Then there is the second problem: anything uploaded to the website would not be available to other requests unless it was written to some out-of-container shared storage. That's pretty easy to do with S3 or a separate, persistent database service, but it does start to show the weakness of the "one new Docker container per request" approach. If you're going to have some persistent services anyway, why not make the Docker containers more persistent and run them longer?
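
    If you do keep the short-lived containers, a named volume is one simple form of such shared storage (the volume name, mount path, and image here are placeholders):

    ```sh
    # Create a named volume that outlives any individual container.
    docker volume create site-uploads

    # Mount the same volume into every container, so an upload handled
    # by one short-lived container is visible to the next one.
    docker run --rm -v site-uploads:/var/www/uploads my-site-image
    docker run --rm -v site-uploads:/var/www/uploads my-site-image
    ```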
