Launching ad hoc Docker instances: is it recommended to launch a Docker instance per request?

Is it recommended to launch a docker instance per request?

I have either lighttpd or Nginx running on my web server as a reverse proxy. I support a number of subdomains with very low usage. When a request for one of these subdomains arrives, I want to start the Docker instance that serves it. Preferably I’d like to launch them dynamically, so that if more than one user arrives I would launch one instance per user and/or a shared instance (determined by configuration).
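Roughly what I have in mind, as a sketch using the Docker SDK for Python (the image names, port mapping, and label are placeholders, not real configuration):

```python
import docker

client = docker.from_env()

# Map each low-traffic subdomain to the image that serves it
# (hypothetical names, for illustration only).
SUBDOMAIN_IMAGES = {
    "blog.example.com": "example/blog:latest",
    "wiki.example.com": "example/wiki:latest",
}

running = {}  # subdomain -> container object

def ensure_instance(subdomain):
    """Start a container for a subdomain on its first request, reuse it afterwards."""
    if subdomain in running:
        return running[subdomain]
    container = client.containers.run(
        SUBDOMAIN_IMAGES[subdomain],
        detach=True,                 # return immediately, leave the container running
        ports={"80/tcp": None},      # let Docker pick a free host port
        labels={"managed-by": "ondemand-proxy"},
    )
    running[subdomain] = container
    return container
```

The reverse proxy would then forward requests for that subdomain to whichever host port Docker assigned.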

One answer:

Originally I said this should work well for low-traffic sites, but on further thought, no, this is a bad idea.

Each time you launch a Docker container, it adds a read-write layer on top of the image. Even if very little data is written, the layer exists, and each request would generate one. When a single user visits a website, rendering the page generates tens to thousands of requests: for CSS, for JavaScript, for each image, for fonts, for AJAX calls, and each of these would create one of those read-write layers.
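To make the accumulation concrete, here is a small sketch with the Docker SDK for Python (the alpine image and the request count are just examples): every short-lived run that is not removed leaves an exited container, and its read-write layer, behind on disk.

```python
import docker

client = docker.from_env()

# Simulate "one container per request" for a handful of requests.
for _ in range(5):
    client.containers.run("alpine", "true", detach=False, remove=False)

# Each run above left behind an exited container, and with it a
# read-write layer, even though it wrote almost nothing.
exited = client.containers.list(all=True, filters={"status": "exited"})
print(f"{len(exited)} exited containers still on disk")
```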

    Right now there is no automatic cleanup of the read-write layers — they persist even after the Docker container has exited. By default, nothing is lost.

So, even for a single low-traffic site, you would find your disk use growing steadily over time. You could add your own automated cleanup, as sketched below.
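For example, a cron job or small script could prune the stopped containers, roughly like this (a sketch using the Docker SDK for Python; `docker container prune` on the CLI does the same thing):

```python
import docker

client = docker.from_env()

# Remove all stopped containers and report how much disk space was reclaimed.
result = client.containers.prune()
print("Removed:", result.get("ContainersDeleted") or [])
print("Space reclaimed (bytes):", result.get("SpaceReclaimed", 0))
```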

Then there is the second problem: anything uploaded to the website would not be available to any other request unless it was written to some out-of-container shared storage. That’s pretty easy to do with S3 or a separate, persistent database service, but it does start to show the weakness of the “one new Docker container per request” approach. If you’re going to have some persistent services anyway, why not make the Docker containers themselves more persistent and run them longer?
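For instance, uploads could go to a named volume (or a bind mount) so that every container, and therefore every request, sees the same files. A rough sketch with the Docker SDK for Python; the volume name, image name, and mount path are placeholders:

```python
import docker

client = docker.from_env()

# A named volume shared by every container that serves the site,
# so uploads survive individual containers coming and going.
client.volumes.create(name="site-uploads")

container = client.containers.run(
    "example/site:latest",   # hypothetical image name
    detach=True,
    volumes={"site-uploads": {"bind": "/var/www/uploads", "mode": "rw"}},
)
```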
