Multiple docker containers as web server on a single IP

I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for that process.

My question is: how can I reach each API from my browser when every web server listens on the default port 80? To access the web server inside a single Docker container, I do the following:

    sudo docker run -p 80:80 -t -i <yourname>/<imagename>
    

    This way I can do the following from my computer's terminal:

    curl http://hostIP:80/foobar
    

    But how do I handle this with multiple containers and multiple web servers?

2 Solutions for “Multiple docker containers as web server on a single IP”

    You can either publish each container on a different host port, e.g.

    docker run -p 8080:80 -t -i <yourname>/<imagename>
    docker run -p 8081:80 -t -i <yourname1>/<imagename1>
    
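    Each API is then reachable on its own host port (8080 and 8081 here are just the example ports from above):

    curl http://hostIP:8080/foobar
    curl http://hostIP:8081/foobar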

    Alternatively, you can put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.

    Update:

    The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config such as the following (note that the [proxy] rewrite flag requires mod_rewrite and mod_proxy to be enabled):

    RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
    RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
    

    You may then run your containers like this:

    docker run --name api1 <yourname>/<imagename>
    docker run --name api2 <yourname1>/<imagename1>
    docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
    

    This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at the moment). If this becomes a problem, you might look at approaches like fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
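    For reference, a rough nginx equivalent of the Apache rewrite rules above might look like this (a sketch only, assuming the api1 and api2 hostnames resolve inside the proxy container via the links shown above):

    server {
        listen 80;

        # Requests to /api1/... are forwarded to the api1 container;
        # the trailing slash in proxy_pass strips the /api1/ prefix.
        location /api1/ {
            proxy_pass http://api1/;
        }

        location /api2/ {
            proxy_pass http://api2/;
        }
    }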

    Update II:

    In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, which avoids some of the inconveniences of the now-deprecated link mechanism.
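    For example (a sketch; the network name api-net is a placeholder), containers on the same user-defined network can reach each other by container name, so no --link flags are needed and the API containers can be restarted independently of the proxy:

    docker network create api-net
    docker run --network api-net --name api1 <yourname>/<imagename>
    docker run --network api-net --name api2 <yourname1>/<imagename1>
    docker run --network api-net -p 80:80 <my_proxy_container>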

    Only a single process can be bound to a given host port at a time, so running multiple containers means each one has to be exposed on a different host port number. Docker can pick free ports for you automatically if you use the “-P” flag.

    sudo docker run -P -t -i <yourname>/<imagename>
    

    You can use the “docker port” and “docker inspect” commands to see the actual port number allocated to each container.
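    For example (the container name and the reported port are illustrative; the actual values on your machine will differ):

    $ sudo docker port api1 80
    0.0.0.0:32768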
