Multiple docker containers as web server on a single IP

I have multiple docker containers on a single machine. Each container runs a process plus a web server that provides an API for that process.

My question is: how can I access each API from my browser when every container's web server listens on the default port 80? To be able to access the web server inside a docker container I do the following:

    sudo docker run -p 80:80 -t -i <yourname>/<imagename>

    This way I can run the following from my computer's terminal:

    curl http://hostIP:80/foobar

    But how to handle this with multiple containers and multiple web servers?

2 Answers for “Multiple docker containers as web server on a single IP”

    You can either expose multiple ports, e.g.

    docker run -p 8080:80 -t -i <yourname>/<imagename>
    docker run -p 8081:80 -t -i <yourname1>/<imagename1>

    or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
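    As a sketch, an equivalent nginx configuration for such a proxy might look like the following (the upstream hostnames api1 and api2 are assumptions: they must be container names resolvable from inside the proxy container, e.g. via links or a shared network):

    ```nginx
    # Hypothetical nginx proxy config: route /api1/... and /api2/...
    # to two backend containers. The hostnames api1/api2 must resolve
    # inside the proxy container (via --link or a user-defined network).
    server {
        listen 80;

        location /api1/ {
            proxy_pass http://api1/;  # trailing slash strips the /api1/ prefix
        }

        location /api2/ {
            proxy_pass http://api2/;
        }
    }
    ```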


    The easiest way to set up such a proxy would be to link it to the API containers, e.g. with an Apache config containing

    RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
    RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]

    you may run your containers like this:

    docker run --name api1 <yourname>/<imagename>
    docker run --name api2 <yourname1>/<imagename1>
    docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>

    This can get somewhat cumbersome, though, if you need to restart the API containers, because the proxy container would have to be restarted as well (links are fairly static in Docker). If that becomes a problem, you might look at approaches like fig (now Docker Compose) or an automatically updated proxy configuration; the latter approach also covers proxying with nginx.

    Update II:

    In more modern versions of Docker you can use a user-defined network instead of the links shown above, which avoids some of the inconveniences of the now-deprecated link mechanism.
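    A sketch of the same setup with a user-defined bridge network (the network name api-net is a placeholder, and these commands require a running Docker daemon):

    ```shell
    # Create a user-defined bridge network; containers attached to it
    # can reach each other by container name via Docker's embedded DNS.
    docker network create api-net

    docker run -d --network api-net --name api1 <yourname>/<imagename>
    docker run -d --network api-net --name api2 <yourname1>/<imagename1>

    # Only the proxy publishes a host port; it reaches the APIs
    # as http://api1/ and http://api2/ over api-net.
    docker run -d --network api-net -p 80:80 <my_proxy_container>
    ```

    Unlike with --link, the API containers can now be restarted independently without restarting the proxy.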

    Only a single process can bind to a given host port at a time, so running multiple containers means each must be exposed on a different host port number. Docker can assign these ports automatically for you with the “-P” flag.

    sudo docker run -P -t -i <yourname>/<imagename>

    You can use the “docker port” and “docker inspect” commands to see the actual port number allocated to each container.
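    For example (the container name here is a placeholder, and the reported mapping will vary per run):

    ```shell
    # Show the host port that was mapped to container port 80
    docker port api1 80

    # Or dump the whole port-mapping table via inspect
    docker inspect --format '{{json .NetworkSettings.Ports}}' api1
    ```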
