launching adhoc docker instances: Is it recommended to launch a docker instance per request?

I have either lighttpd or Nginx running on my web server as a reverse proxy. I support a number of subdomains with very low usage. When a request for a subdomain arrives, I want to start a Docker instance. Preferably I'd like to launch them dynamically, so that if more than one user arrives I would launch one instance per user and/or a shared instance (determined by configuration).

One solution:

Originally I said this should work well for low-traffic sites, but upon further thought, no, this is a bad idea.

Each time you launch a Docker container, it adds a read-write layer on top of the image. Even if very little data is written, the layer exists, and each request will generate one. When a single user visits a website, rendering the page can generate tens to thousands of requests — for CSS, for JavaScript, for each image, for fonts, for AJAX — and each of these would create one of those read-write layers.

    Right now there is no automatic cleanup of the read-write layers — they persist even after the Docker container has exited. By default, nothing is lost.
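A quick way to see this for yourself (a sketch; the image and filter values are just examples):

```shell
# Each `docker run` creates a new container with its own writable layer.
# By default the container -- and its layer -- stays on disk after it exits.
docker run nginx:alpine true        # container runs `true` and exits immediately
docker run nginx:alpine true        # a second, independent container

# Both exited containers are still present, each holding a writable layer:
docker ps -a --filter ancestor=nginx:alpine

# Aggregate disk usage from containers and images:
docker system df
```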

    So, even for a single low traffic site, you would find your disk use growing steadily over time. You could add your own automated cleanup.
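A minimal cleanup sketch, assuming containers are launched without any cleanup flag (the image name `mysite:latest` is hypothetical):

```shell
#!/bin/sh
# Periodic cleanup job (e.g. run from cron): remove all containers
# that have exited, reclaiming their read-write layers.
docker ps -aq --filter status=exited | xargs -r docker rm

# Alternatively, if your Docker version supports it, pass --rm at launch
# so the container's read-write layer is deleted automatically on exit:
docker run --rm mysite:latest
```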

Then there is a second problem: anything uploaded to the website would not be available to other requests unless it was written to some out-of-container shared storage. That's pretty easy to do with S3 or a separate, persistent database service, but it does start to show the weakness of the "one new Docker container per request" approach. If you're going to have some persistent services anyway, why not make the Docker containers more persistent and run them longer?
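For completeness, the shared-storage workaround can be sketched with a bind mount (paths and image name are hypothetical):

```shell
# Keep uploads outside the per-request container by mounting a shared
# host directory; every container launched this way sees the same files,
# and the data survives after the container is removed.
docker run --rm \
  -v /srv/mysite/uploads:/var/www/uploads \
  mysite:latest
```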
