Multiple docker containers as web server on a single IP

I have multiple docker containers on a single machine. Each container runs a process and a web server that provides an API for that process.

My question is: how can I access these APIs from my browser when the default port is 80? To be able to access the web server inside a docker container, I do the following:

    sudo docker run -p 80:80 -t -i <yourname>/<imagename>
    

    This way I can do the following from my computer's terminal:

    curl http://hostIP:80/foobar
    

    But how do I handle this with multiple containers and multiple web servers?

2 Solutions for “Multiple docker containers as web server on a single IP”

    You can expose each container on a different host port, e.g.

    docker run -p 8080:80 -t -i <yourname>/<imagename>
    docker run -p 8081:80 -t -i <yourname1>/<imagename1>
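
    Each API is then reachable on its own host port, e.g.:

    curl http://hostIP:8080/foobar
    curl http://hostIP:8081/foobar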
    

    Alternatively, you can put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.

    Update:

    The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like:

    # mod_rewrite and mod_proxy/mod_proxy_http must be enabled; api1/api2 resolve via the links below
    RewriteEngine On
    RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
    RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
    

    You may then run your containers like this:

    docker run --name api1 <yourname>/<imagename>
    docker run --name api2 <yourname1>/<imagename1>
    docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
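
    With the links in place, every request goes to the proxy on port 80 and is routed by path prefix according to the rewrite rules above, e.g.:

    curl http://hostIP/api1/foobar
    curl http://hostIP/api2/foobar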
    

    This might be somewhat cumbersome though if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in docker for now). If this becomes a problem, you might look at approaches like fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
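
    For comparison, a minimal nginx counterpart to the Apache rules above could look like this (just a sketch; it assumes the api1/api2 hostnames resolve inside the proxy container, e.g. via the links shown above):

    server {
        listen 80;

        # strip the /api1/ prefix and forward the rest to the first API container
        location /api1/ {
            proxy_pass http://api1/;
        }

        # likewise for the second API container
        location /api2/ {
            proxy_pass http://api2/;
        }
    }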

    Update II:

    In more modern versions of docker it is possible to use a user-defined network instead of the links shown above, which overcomes some of the inconveniences of the deprecated link mechanism.
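
    A minimal sketch of that approach (the network name apinet is just an example):

    docker network create apinet
    docker run -d --name api1 --network apinet <yourname>/<imagename>
    docker run -d --name api2 --network apinet <yourname1>/<imagename1>
    docker run -d --network apinet -p 80:80 <my_proxy_container>

    Containers on the same user-defined network can reach each other by container name via Docker's embedded DNS, so each API container can be restarted independently without touching the proxy.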

    Only a single process can be bound to a given port at a time, so running multiple containers means each will be exposed on a different port number. Docker can assign these host ports automatically for you via the “-P” flag.

    sudo docker run -P -t -i <yourname>/<imagename>
    

    You can use the “docker port” and “docker inspect” commands to see the actual port number allocated to each container.
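
    For example, to find the host port mapped to container port 80 (the --format template shown is one common way to extract it from the inspect output):

    docker port <container> 80
    docker inspect --format '{{(index (index .NetworkSettings.Ports "80/tcp") 0).HostPort}}' <container>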
