Launch Docker containers to handle HTTP requests

I’m building a web application where users manage files in projects. Each user can have multiple projects, and each project can have multiple files. I’ve implemented this using Docker, where each project is a Docker volume. When the user clicks a button in the webapp interface to modify files in their project, the web server configures and launches a worker (which is another Docker container) to modify the files in the Docker volume. This all works pretty well so far.
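For context, the worker launch is essentially a one-off `docker run` with the project volume mounted. A minimal sketch of what I mean (the image name, script path, and volume name here are illustrative, not my real ones):

```sh
# Launch a throwaway worker container with the project's volume mounted.
# --rm removes the container once the file modification finishes.
docker run --rm \
  -v sparkle-pony:/project \
  worker-image \
  /modify-files.sh /project
```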

However, now I want to serve out these project files over HTTP. The strategy I have in mind is:

1. A web server (like nginx) accepts an incoming HTTP request from the user.
2. The web server inspects the incoming request to determine which project is being requested. For example, if the request is for sparkle-pony.myapp.com, then we know that the sparkle-pony project is being requested. If this project doesn’t exist, nginx responds with a 404 Not Found response.
3. The web server also checks whether the user is logged in, and whether that logged-in user has permission to view the project. If not, the web server responds with a 403 Forbidden HTTP response.
4. The web server configures and launches a new Docker container, probably running another nginx instance. Part of this configuration includes mounting the correct Docker volume onto the new container. We’ll call this newly launched container the “inner” container, and the existing container the “outer” container.
5. The outer container either hands off the HTTP request to the inner container, or acts as a proxy for the inner container’s response (see the sketch just after this list).
6. The inner container, with access to the correct Docker volume for the project and secure in the knowledge that the requesting user has the right permissions, checks the URL path and serves the correct project file from the Docker volume. After the request has been suitably handled, the inner container shuts down.
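To make steps 4 and 5 concrete, here is roughly what I picture the outer container doing. This is an untested sketch; the host port, container name, and config path are all illustrative:

```sh
# Step 4: launch a throwaway "inner" nginx that serves the project volume
# read-only. The outer server would have to pick a free host port per request.
docker run -d --rm \
  --name inner-sparkle-pony \
  -v sparkle-pony:/usr/share/nginx/html:ro \
  -p 8081:80 \
  nginx:alpine

# Step 5: the outer nginx then proxies to the inner container via a generated
# per-project config, and reloads.
cat > /etc/nginx/conf.d/sparkle-pony.conf <<'EOF'
server {
    listen 80;
    server_name sparkle-pony.myapp.com;
    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}
EOF
nginx -s reload
```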

So, with all that being said, I have three questions:

1. Is this a reasonable strategy? It does involve launching a new Docker container for every incoming HTTP request, but I think that’s OK…
2. What is the best way to hand off the HTTP request from one container to another? Or does the outer container have to proxy the response from the inner container?
3. Can someone provide some pointers or examples of how to set up a project like this? There are probably some tools or techniques that I don’t yet know about.

Thank you!
