Why can't I curl one docker container from another via the host

I really don’t understand what’s going on here. I simply want to perform an HTTP request from inside one docker container to another docker container, via the host, using the host’s public IP, on a published port.

Here is my setup: I have my dev machine, and I have a docker host machine with two containers. CONT_A runs a web service and publishes it on port 3000.

    DEV-MACHINE

    HOST (Public IP = 111.222.333.444)
      CONT_A (Publish 3000)
      CONT_B


On my dev machine (a completely different machine)

I can curl without any problems:

    curl http://111.222.333.444:3000 --> OK
    

When I SSH into the HOST

I can curl without any problems:

    curl http://111.222.333.444:3000 --> OK
    

When I execute inside CONT_B

Not possible, it just times out. Ping is fine, though:

    docker exec -it CONT_B bash
    $ curl http://111.222.333.444:3000 --> TIMEOUT
    $ ping 111.222.333.444 --> OK
    

Why?

Ubuntu 16.04, Docker 1.12.3 (default network setup)
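For reference, a minimal reproduction of this setup might look like the following (the image names and the in-container command are placeholders, since the question doesn't give them):

    # On the HOST: start CONT_A and publish its web service on port 3000
    docker run -d --name CONT_A -p 3000:3000 my-web-service:latest

    # Start CONT_B on the same default bridge network, with no ports published
    docker run -d --name CONT_B my-client:latest sleep infinity

    # From inside CONT_B, curl the host's public IP on the published port
    docker exec -it CONT_B curl http://111.222.333.444:3000   # times out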

One solution

I know this isn’t strictly an answer to the question, but there’s a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker swarm mode. You can find the full guide here, but in essence you do the following:

    # Create an overlay network (requires swarm mode)
    docker network create --driver overlay --subnet=10.0.9.0/24 my-net

    # Start container A
    docker run -d --name=A --network=my-net producer:latest

    # Start container B
    docker run -d --name=B --network=my-net consumer:latest

    # Magic has occurred: container names now resolve via Docker's embedded DNS
    docker exec -it B /bin/bash
    > curl A:3000   # MIND BLOWN!
    

Then, inside container B, you can just curl hostname A and it will resolve for you (even when you start doing scaling etc.).
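As a hedged sketch of the scaling point (assuming swarm mode on Docker 1.12+, and reusing the names from the example above): running A as a replicated service keeps the same name-based addressing, with the name load-balanced across replicas.

    # Run A as a replicated swarm service on the same overlay network;
    # "A" then resolves to a virtual IP that balances across the replicas
    docker service create --name A --network my-net --replicas 3 producer:latest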

If you’re not keen on using Docker swarm, you can still use legacy Docker links:

    docker run -d --name B --link A:A consumer:latest
    

which makes any exposed (not published) ports of container A reachable from B by the hostname A.
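Concretely (a sketch, assuming A's image EXPOSEs its service port, e.g. 3000): a legacy link adds an entry for A to B's /etc/hosts, so the name resolves without publishing anything:

    # Inside B, the link shows up as a hosts entry for A
    docker exec -it B /bin/bash
    > cat /etc/hosts    # contains a line mapping "A" to A's bridge IP
    > curl A:3000       # reaches A's exposed port directly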

And finally, if you start moving to production…forget about links and overlay networks altogether and use Kubernetes 🙂 The initial setup is a bit more difficult, but it introduces a bunch of concepts and tools that make linking and scaling clusters of containers a lot easier! But that’s just my personal opinion.
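For a taste of that, a hypothetical sketch (pod, Service, and image names are all placeholders): a Kubernetes Service gives consumers a stable DNS name, much like the overlay-network example above.

    # Run the producer as a pod and expose it behind a Service
    kubectl run producer --image=producer:latest --port=3000
    kubectl expose pod producer --port=3000 --name=producer-svc

    # Any pod in the same namespace can then reach it by the Service's DNS name:
    # curl producer-svc:3000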
