Multiple docker containers as web server on a single IP

I have multiple Docker containers on a single machine. Each container runs a process and a web server that exposes an API for that process.

My question is: how can I access each API from my browser when the default HTTP port is 80? To reach the web server inside a single container, I do the following:

    sudo docker run -p 80:80 -t -i <yourname>/<imagename>

    This way I can run, from my computer's terminal:

    curl http://hostIP:80/foobar

    But how to handle this with multiple containers and multiple web servers?

2 Solutions for “Multiple docker containers as web server on a single IP”

    You can either expose multiple ports, e.g.

    docker run -p 8080:80 -t -i <yourname>/<imagename>
    docker run -p 8081:80 -t -i <yourname1>/<imagename1>

    or put a reverse proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
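    As an illustration, a minimal nginx reverse-proxy configuration might look like the sketch below. The hostnames api1 and api2 assume the proxy can resolve the API containers by those names (e.g. via links or a shared network), and the path prefixes are placeholders — adjust both to your setup:

    ```nginx
    # /etc/nginx/conf.d/api-proxy.conf -- minimal sketch
    server {
        listen 80;

        # forward /api1/... to the first API container
        location /api1/ {
            proxy_pass http://api1:80/;
        }

        # forward /api2/... to the second API container
        location /api2/ {
            proxy_pass http://api2:80/;
        }
    }
    ```

    With this in place, curl http://hostIP/api1/foobar reaches the first container and curl http://hostIP/api2/foobar the second, all on port 80.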


    The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like

    RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
    RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]

    you can then run your containers like this:

    docker run --name api1 <yourname>/<imagename>
    docker run --name api2 <yourname1>/<imagename1>
    docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>

    This can become cumbersome, though, if you need to restart the API containers, because the proxy container has to be restarted as well (links are fairly static in Docker). If this becomes a problem, you might look at approaches like Fig or an auto-updated proxy configuration; the latter also shows proxying with nginx.

    Update II:

    In more modern versions of Docker you can use a user-defined network instead of the links shown above, which avoids the inconveniences of the deprecated link mechanism.
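    A sketch of the user-defined network approach (the network and container names are placeholders):

    ```shell
    # create a user-defined bridge network
    docker network create api-net

    # containers on the same user-defined network can reach each other
    # by container name, no --link needed, and they can be restarted
    # independently of the proxy
    docker run -d --name api1 --network api-net <yourname>/<imagename>
    docker run -d --name api2 --network api-net <yourname1>/<imagename1>
    docker run -d --network api-net -p 80:80 <my_proxy_container>
    ```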

    Only a single process can be bound to a given host port at a time, so running multiple containers means each must be exposed on a different host port number. Docker can assign these ports automatically for you via the -P flag.

    sudo docker run -P -t -i <yourname>/<imagename>

    You can use the “docker port” and “docker inspect” commands to see the actual port number allocated to each container.
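    For example (the container name is illustrative, and the allocated port will vary):

    ```shell
    # let Docker pick a free host port for each exposed container port
    docker run -d -P --name api1 <yourname>/<imagename>

    # show the host port mapped to the container's port 80
    docker port api1 80

    # or query the full port mapping from the container metadata
    docker inspect --format '{{json .NetworkSettings.Ports}}' api1
    ```

    You can then curl http://hostIP:<allocated_port>/foobar for each container.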
