Docker containers in real life

I have been following the tutorials and experimenting with Docker for a couple of days, but I can’t find any “real-world” usage example.

How can I communicate with my container from the outside?

All the examples I can find end up with one or more containers that can share ports with each other, but no one outside the host gets access to their exposed ports.

    Isn’t the whole point of having containers like this that at least one of them needs to be accessible from the outside?

    I have found a tool called pipework, which will probably help me with this. But is this the tool everyone testing out Docker for production is using?

    Is a “hack” necessary to get the outside to talk to my container?

  • 2 Solutions collected from the web for “Docker containers in real life”

    You can use the -p flag to publish a port of your container on the host machine.

    For example:

      sudo docker run -p 80:8080 ubuntu bash

    This binds port 8080 of your container to port 80 of the host machine.

    Therefore, you can then access your container from the outside using the URL of the host:

      http://your.domain -> host:80 -> container:8080

    Is that what you wanted to do? Or maybe I missed something.

    (The --expose flag only exposes the port to other containers, not to the host.)
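To make the distinction concrete, here is a minimal sketch contrasting the two flags. It assumes a running Docker daemon and uses hypothetical container names (web-internal, web-public); sleep infinity just keeps the containers alive for inspection.

```shell
# --expose makes the port reachable only from other containers
# on the same Docker network; nothing is opened on the host:
docker run -d --name web-internal --expose 8080 ubuntu sleep infinity

# -p publishes the container port on the host, so external
# clients can reach it via host_port 80:
docker run -d --name web-public -p 80:8080 ubuntu sleep infinity

# Only the published container shows a host-side mapping:
docker port web-public      # shows 8080/tcp mapped to a host address
docker port web-internal    # prints nothing

# Clean up the demo containers:
docker rm -f web-internal web-public
```

In short: --expose is documentation-level metadata between containers, while -p is what actually lets the outside world in.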

    This blog post explains the problem and the solution.

    Basically, it looks like pipework is the way to expose container ports to the outside as of now… Hope this gets integrated soon.

    Update: In this case, iptables was to blame: a rule was blocking forwarded traffic. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT solved it.
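The fix above can be sketched as follows. This assumes the host's external interface is em1 (substitute your own, e.g. eth0) and that Docker's default bridge is named docker0; both commands need root.

```shell
# Inspect the FORWARD chain to spot a rule that drops or rejects
# traffic before it can reach the Docker bridge:
sudo iptables -L FORWARD -v --line-numbers

# Append a rule that accepts packets arriving on the external
# interface (em1) and forwarded onto the Docker bridge (docker0):
sudo iptables -A FORWARD -i em1 -o docker0 -j ACCEPT
```

Note that rules added this way do not survive a reboot; persist them with your distribution's usual mechanism (e.g. an iptables-save/restore service).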
