Why can't I curl one docker container from another via the host

I really don’t understand what’s going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host’s public IP, on a published port.

Here is my setup. I have my dev machine, and I have a Docker host machine with two containers. CONT_A runs a web service on port 3000 and publishes that port.

    DEV-MACHINE

    HOST (Public IP = 111.222.333.444)
      CONT_A (Publish 3000)
      CONT_B

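For reference, a minimal way to reproduce this setup might look like the following (the image names are made up for illustration; only the port mapping matters):

    # CONT_A publishes port 3000 on all host interfaces
    docker run -d --name CONT_A -p 3000:3000 some-web-service:latest
    # CONT_B publishes nothing
    docker run -d --name CONT_B some-client:latest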

On my dev machine (a completely different machine)

I can curl without any problems:

    curl http://111.222.333.444:3000 --> OK

When I SSH into the HOST

I can curl without any problems:

    curl http://111.222.333.444:3000 --> OK

When I execute inside CONT_B

Not possible, just a timeout. Ping is fine, though:

    docker exec -it CONT_B bash
    $ curl http://111.222.333.444:3000 --> TIMEOUT
    $ ping 111.222.333.444 --> OK
    

Why?

Ubuntu 16.04, Docker 1.12.3 (default network setup)

One Solution

I know this isn’t strictly an answer to the question (the timeout itself is most likely down to Docker’s iptables NAT rules not hairpinning traffic that originates from a container back to the host’s own public IP), but there’s a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker swarm. The full guide is in the Docker networking documentation, but in essence you do the following:

    # create the network
    docker network create --driver overlay --subnet=10.0.9.0/24 my-net
    # start container A
    docker run -d --name=A --network=my-net producer:latest
    # start container B
    docker run -d --name=B --network=my-net consumer:latest

    # magic has occurred
    docker exec -it B /bin/bash
    > curl A:3000  # MIND BLOWN!
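
Worth noting: on a single host you don’t actually need swarm or the overlay driver for this. A plain user-defined bridge network gives you the same name-based resolution between containers (a sketch reusing the assumed image names from above):

    # default bridge driver; containers attached to it resolve each other by name
    docker network create my-net
    docker run -d --name=A --network=my-net producer:latest
    docker run -d --name=B --network=my-net consumer:latest
    docker exec -it B curl A:3000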
    

Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling etc.).

If you’re not keen on using Docker swarm, you can still use legacy Docker links:

    docker run -d --name B --link A:A consumer:latest

which will link any exposed (not published) ports in your A container.
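
Under the hood, --link simply injects A’s address into B’s /etc/hosts, which you can verify from inside B (assuming the containers above):

    # the link adds a hosts entry mapping "A" to container A's IP
    docker exec -it B cat /etc/hosts
    docker exec -it B curl A:3000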

And finally, if you start moving to production…forget about links & overlay networks altogether and use Kubernetes 🙂 The initial setup is a bit more difficult, but it introduces a bunch of concepts & tools that make linking & scaling clusters of containers a lot easier. But that’s just my personal opinion.
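
For a flavour of what that looks like, here is a hypothetical sketch (the names a, producer and consumer are made up to mirror the example above, not taken from any real setup):

    # create a deployment for the producer and expose it cluster-internally as "a"
    kubectl create deployment a --image=producer:latest
    kubectl expose deployment a --port=3000 --name=a
    # any other pod in the cluster can now reach it via cluster DNS:
    #   curl a:3000

Kubernetes Services give every workload a stable DNS name out of the box, which is what the swarm overlay network was approximating above.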
