Docker containers in real life

I have been following the tutorials and experimenting with Docker for a couple of days, but I can’t find any “real-world” usage examples.

How can I communicate with my container from the outside?

All the examples I can find end up with one or more containers that can share ports with each other, but nobody outside the host gets access to their exposed ports.

Isn’t the whole point of having containers like this that at least one of them needs to be accessible from the outside?

I have found a tool called pipework (https://github.com/jpetazzo/pipework) which will probably help me with this. But is this the tool everyone testing out Docker for production is actually using?

Is a “hack” necessary to get the outside world to talk to my container?

2 Answers to “Docker containers in real life”

You can use the -p argument to publish a port of your container on the host machine.

    For example:

  sudo docker run -p 80:8080 ubuntu bash
    

This will bind port 8080 of your container to port 80 of the host machine.

You can then access your container from the outside using the host’s URL:

  http://your.domain -> host:80 -> container:8080
    

Is that what you wanted to do? Or maybe I missed something.

(The --expose flag only exposes a port to other containers, not to the host.)
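For a longer-lived setup, the same host-to-container mapping can be declared in a Compose file instead of on the command line; a minimal sketch, assuming an app listening on port 8080 inside the container (the service name `web` and the image name are placeholders, not from the question):

```yaml
# docker-compose.yml (hypothetical service and image names)
services:
  web:
    image: my-app:latest   # placeholder image
    ports:
      - "80:8080"          # host port 80 -> container port 8080
```

Running `docker compose up` with this file gives the same effect as the `-p 80:8080` flag above.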

This blog post (https://blog.codecentric.de/en/2014/01/docker-networking-made-simple-3-ways-connect-lxc-containers/) explains the problem and a solution.

Basically, it looks like pipework (https://github.com/jpetazzo/pipework) is the way to expose container ports to the outside as of now… I hope this gets integrated into Docker soon.

Update: In my case, iptables was to blame; a rule was blocking forwarded traffic. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT solved it.
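The fix above can be sketched as shell commands (run as root; `em1` is the external interface name from the answer and will differ on other hosts):

```
# Inspect the FORWARD chain to find the rule blocking forwarded traffic
iptables -L FORWARD -v --line-numbers

# Allow traffic arriving on the external interface to reach the docker0 bridge
# (em1 is host-specific; substitute your own interface name)
iptables -A FORWARD -i em1 -o docker0 -j ACCEPT
```

Note this appends the rule, so it only helps if the blocking rule matches after it; otherwise insert it earlier in the chain with `-I FORWARD 1 …`.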
