Access a docker container in docker multi-host network

I have created a Docker multi-host network using the Docker overlay network driver with 4 nodes: node0, node1, node2, and node3. Node0 acts as the key-value store that shares node information, while node1, node2, and node3 are bound to that key-value store.


Here are node1's networks:

    user@node1$ docker network ls
    NETWORK ID          NAME                DRIVER
    04adb1ab4833        RED                 overlay             
     [ . . ]

As for node2's networks:

    user@node2$ docker network ls
    NETWORK ID          NAME                DRIVER
    04adb1ab4833        RED                 overlay             
     [ . . ]

container1 is running on node1, which hosts the RED network.

    user@node1$ docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                    PORTS               NAMES
    f9bacac3c01d        ubuntu              "/bin/bash"         3 hours ago         Up 2 hours                                    container1

Docker added an entry to /etc/hosts for each container that belongs to the RED overlay network.

    user@node1$ docker exec container1 cat /etc/hosts
    d82c36bc2659   localhost
     [ . . ]
    container2  container2.RED
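As a sanity check (a sketch, assuming the RED network from the listings above), `docker network inspect` shows which containers have actually joined the overlay:

```shell
# Inspect the overlay network; the "Containers" section of the output
# lists the members. "RED" is the network name from the listings above.
docker network inspect RED
```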

From node2, I'm trying to access container1, which is running on node1. I tried to exec into container1 using the command below, but it returns an error.

    user@node2$ docker exec -i -t container1 bash
    Error response from daemon: no such id: container1

Any suggestions?


One solution:

The network is shared only by the containers.

While the network is shared among the containers across the multi-host overlay, the Docker daemons cannot communicate with each other as is.

Running `docker exec -i -t container1 bash` from node2 does not work because, indeed, no container with the id container1 is running on node2.

Accessing a remote Docker daemon

Docker daemons communicate through a socket: a UNIX socket by default, but the --host option lets the daemon bind to other sockets as well.

See the docker daemon man page:

       -H, --host=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or unix://[/path/to/socket] to use.
         The socket(s) to bind to in daemon mode specified using one or more
         tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.
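For example, the daemon on node1 could be restarted to listen on both the default UNIX socket and a TCP socket. This is only a sketch: port 2375 is the conventional unencrypted Docker port (an assumption here), and binding it without TLS is insecure outside a trusted network.

```shell
# On node1: bind the daemon to the local UNIX socket and a TCP socket.
# 2375 is the conventional plain-TCP Docker port (an assumption here);
# use TLS (--tlsverify) before exposing this on an untrusted network.
docker daemon -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
```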

Thus, it is possible to reach, from any node, a Docker daemon bound to a TCP socket.

The command `user@node2$ docker -H tcp://node1:port exec -i -t container1 bash` would then work.
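Alternatively, the DOCKER_HOST environment variable points the local client at the remote daemon, so plain docker commands target node1. A sketch, assuming node1's daemon listens on TCP port 2375:

```shell
# On node2: make every following docker command talk to node1's daemon.
export DOCKER_HOST=tcp://node1:2375
docker ps                          # now lists node1's containers
docker exec -i -t container1 bash  # and exec reaches container1
```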

Docker and Docker clusters (Swarm)

I do not know what you are trying to deploy; maybe you are just playing around with the tutorials, and that's great! You may be interested in Swarm, which deploys a cluster of Docker daemons. In short: you can use several nodes as if they were one powerful Docker daemon, accessed through a single node exposing the whole Docker API.
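As a hedged sketch of classic (standalone) Swarm, reusing the key-value store already running on node0 (the consul://node0:8500 URL and the ports are assumptions about your setup):

```shell
# On each of node1, node2, node3: join the node to the Swarm cluster,
# advertising its own daemon address (<node-ip> is a placeholder).
docker run -d swarm join --advertise=<node-ip>:2375 consul://node0:8500

# On node0: start the Swarm manager, exposed on port 4000.
docker run -d -p 4000:4000 swarm manage -H :4000 consul://node0:8500

# Any client can now address the whole cluster through the manager:
docker -H tcp://node0:4000 ps
```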
