Communication between two Docker containers on macOS 10.12

I’m working with Docker 1.12.5 on macOS 10.12, and am setting up a development environment in which I have an application image and a shared redis image with some pre-populated configuration variables.

Even after following a few tutorials (and reading about how docker0 isn’t available on Mac) I’m struggling to connect the two containers.

I start my redis image using:

    docker run -d -p 6379:6379 <IMAGE ID>

    With the redis container running, docker ps shows:

    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
    dffb89854618        d59                 ""                       20 seconds ago      Up 19 seconds       0.0.0.0:6379->6379/tcp   drunk_williams

    And from my Mac I can successfully connect via the redis-cli command without issue.

    However, when I start a simple ubuntu container, I can’t seem to connect to the separate redis container:

    root@2d4eda315f4f:/# ifconfig
    eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:03
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:20707 errors:0 dropped:0 overruns:0 frame:0
              TX packets:11515 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:28252929 (28.2 MB)  TX bytes:635848 (635.8 KB)
    lo        Link encap:Local Loopback
              inet addr:  Mask:
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:12 errors:0 dropped:0 overruns:0 frame:0
              TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1
              RX bytes:680 (680.0 B)  TX bytes:680 (680.0 B)
    root@2d4eda315f4f:/# telnet localhost 6379
    Trying ::1...
    telnet: Unable to connect to remote host: Connection refused
    root@2d4eda315f4f:/# telnet 6379
    telnet: Unable to connect to remote host: Connection refused

    Is this a result of not having the docker0 interface available in the host? Is there some straightforward workaround for allowing these containers to communicate (when being run on the same host) in a development environment?

    Update: Attempting to use named containers, I still can’t connect.

    docker run -d --name redis_server redis

    Results in:

    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
    5d05820aa985        redis               ""                       43 hours ago        Up 1 second         6379/tcp                 redis_server

    But then if I start a new Ubuntu container:

    root@e92b47419bc4:/# redis-cli -h redis_server
    Could not connect to Redis at redis_server:6379: Name or service not known

    I’m not sure how to find/connect to the first redis_server container.

One solution:

    Each container has its own localhost

    Each service runs in its own container. From the perspective of the Ubuntu container, redis is not listening on localhost.

    Use Docker networks

    To get your containers to communicate, they should be on the same Docker network. This consists of three steps:

    1. Create a Docker network
    2. Give your containers names
    3. Attach your containers to the network you created

    With this done, the containers can talk to each other using their names as if they were hostnames.

    There’s more than one way to skin this cat… I will look at two in this answer, but there are probably a few other ways to do it that I am not familiar with (like using Kubernetes or Swarm, for instance).

    Doing it by hand

    You can create a network for this application using docker network commands.

    # Show the current list of networks
    docker network ls
    # Create a network for your app
    docker network create my_redis_app

    When you run the redis container, make sure it has a name, and is on this network. You can expose the ports externally (to macOS) if you want to (using -p), but that is not necessary just for other containers to talk to redis.

    docker run -d -p 6379:6379 --name redis_server --network my_redis_app <IMAGE ID>

    Now run your Ubuntu container. You can name it as well if you like, but I won’t bother in this example because this one isn’t running any services.

    docker run -it --network my_redis_app ubuntu bash

    Now from inside the Ubuntu container, you should be able to reach redis using the name redis_server, as if it were a DNS name.
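If you script your environment setup, the same three steps can also be sketched with the Docker SDK for Python. This is an assumption on my part (the `docker` pip package, not something from the question); the CLI commands above are the canonical way:

```python
# Sketch of the three steps (network, name, attach) via the Docker SDK
# for Python. Assumes `pip install docker` and a running Docker daemon.
RUN_KWARGS = dict(
    detach=True,
    name="redis_server",      # other containers reach redis by this name
    network="my_redis_app",   # the user-defined bridge network
    ports={"6379/tcp": 6379}, # optional: publish to the macOS host
)

def main():
    import docker  # imported here so the sketch is readable without the SDK
    client = docker.from_env()
    client.networks.create("my_redis_app", driver="bridge")
    client.containers.run("redis", **RUN_KWARGS)

if __name__ == "__main__":
    main()
```

Note that publishing the port (`ports=...`) is only needed for access from macOS itself, exactly as with `-p` on the command line.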

    Doing it using Compose

    I tend to build setups like this using Compose, because it’s easier to write it into a YAML file (IMO). Here’s an example of the above, re-written in docker-compose.yml form:

    version: '2'
    services:
      redis:
        image: <IMAGE ID>
        networks:
          - my_redis_app
        ports:
          - "6379:6379"
      ubuntu:
        image: ubuntu:latest
        networks:
          - my_redis_app
    networks:
      my_redis_app:
        driver: bridge

    With this in place, you can run docker-compose up -d redis and have your redis service online using a specific Docker network. Compose will create the network for you, if it doesn’t already exist.

    It makes less sense to run the Ubuntu container that way… it is interactive, of course. But I assume once you have redis going, you will add some kind of application container, and perhaps a web proxy like nginx… just put the others under services as well, and you can manage them all together.

    Since the ubuntu service is meant to be used interactively, run it in the foreground:

    # without -d, container is run interactively
    docker-compose run ubuntu bash

    And now in Ubuntu, you should be able to connect to redis using its name, which in this example is simply redis.
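From the application side, the service name behaves like any other hostname. As a minimal, stdlib-only Python sketch (the RESP wire encoding is Redis’s standard protocol; the host name `redis` and port 6379 match the compose setup, and the connection only succeeds when run on that network):

```python
import socket

def encode_resp(*parts: str) -> bytes:
    """Encode a Redis command in the RESP wire protocol."""
    msg = [b"*%d\r\n" % len(parts)]
    for p in parts:
        b = p.encode()
        msg += [b"$%d\r\n" % len(b), b, b"\r\n"]
    return b"".join(msg)

def ping(host: str = "redis", port: int = 6379) -> bytes:
    """PING the redis service; inside the compose network the service
    name 'redis' resolves to the container's address."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(encode_resp("PING"))
        return s.recv(64)  # b"+PONG\r\n" when redis answers
```

The point is simply that nothing special is needed: DNS inside the user-defined network makes `redis` resolve, so any client library works unchanged.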
