How to make a container visible to the outside network, and handle IP addresses in production

I have:

  • a Windows server on bare metal with Hyper-V
  • Ubuntu server running in Hyper-V
  • a Docker container with an NGINX web application running in Ubuntu server

Every time I run a Docker image, the resulting container gets a new IP address on the docker0 bridge interface. For production, I don’t know how to make the Docker container visible to the external network, and I don’t know how to handle the fact that the IP address changes every time the image is run.
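To illustrate, the changing address can be seen with docker inspect (the container name web is just an example):

    docker run -d --name web nginx
    # Print the container's docker0 address; it is assigned
    # dynamically, so it can differ on every run.
    docker inspect -f '{{.NetworkSettings.IPAddress}}' web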

  • What’s the correct way to:

    • make a Docker container visible to the external network?
    • handle Docker container IP addresses in a repeatable way in production?

One solution:

    When you run your Docker container with docker run, use the -p switch to publish ports, for example:

    docker run -p 80:80 nginx

    This maps port 80 on the Ubuntu host to port 80 inside the NGINX container, so the container is reachable through the host’s IP address regardless of which address it gets on docker0.
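    The same -p HOST:CONTAINER syntax works with any host port. As a sketch (host port 8080, the container name web, and the <host-ip> placeholder are arbitrary choices here, not from the original answer):

    # Publish host port 8080 to container port 80, detached:
    docker run -d --name web -p 8080:80 nginx

    # From the host, or any machine on the network via the host's address:
    curl http://<host-ip>:8080

    # Show the mapping Docker actually created:
    docker port web 80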

    See the Docker documentation on publishing ports for details.

    When you have multiple containers that link to each other, you should also use EXPOSE in the Dockerfile, as documented in the Dockerfile reference.
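    As a minimal sketch, a Dockerfile for the NGINX image above might declare its port like this (note that EXPOSE only documents the port for linking and for docker run -P; it does not publish the port by itself):

    FROM nginx
    # Document that the container listens on port 80.
    # You still need docker run -p (or -P) to publish it to the host.
    EXPOSE 80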

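    For the second part of the question, a common approach (an assumption here, not part of the original answer) is to avoid depending on the dynamic docker0 address entirely: publish ports as above, or attach containers to a user-defined network where a fixed address can be assigned. The network name prod-net and the subnet below are arbitrary examples:

    # Create a user-defined bridge network with a known subnet:
    docker network create --subnet 172.25.0.0/16 prod-net

    # Give the container a fixed, repeatable address on that network:
    docker run -d --name web --net prod-net --ip 172.25.0.10 -p 80:80 nginx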