Access a docker container in docker multi-host network

I have created a Docker multi-host network using the Docker overlay network driver with 4 nodes: node0, node1, node2, and node3. Node0 acts as the key-value store which shares node information, while node1, node2, and node3 are bound to that key-value store.
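For reference, a setup like this is typically built by pointing each daemon at the key-value store and then creating the overlay network once. This is only a sketch; the Consul address, interface name, and network name below are assumptions to adapt to your environment:

```shell
# On node1/node2/node3: start the daemon bound to the key-value store
# (assumes Consul is running on node0:8500 -- adjust to your store)
dockerd --cluster-store=consul://node0:8500 \
        --cluster-advertise=eth0:2376

# On any one node: create the overlay network; it becomes visible
# to every daemon registered in the same key-value store
docker network create -d overlay RED
```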


Here are node1's networks:

    user@node1$ docker network ls
    NETWORK ID          NAME                DRIVER
    04adb1ab4833        RED                 overlay             
     [ . . ]

    As for node2 networks:

    user@node2$ docker network ls
    NETWORK ID          NAME                DRIVER
    04adb1ab4833        RED                 overlay             
     [ . . ]

    container1 is running on node1, which hosts the network named RED.

    user@node1$ docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                    PORTS               NAMES
    f9bacac3c01d        ubuntu              "/bin/bash"         3 hours ago         Up 2 hours                                    container1

    Docker added an entry to /etc/hosts for each container that belongs to the RED overlay network.

    user@node1$ docker exec container1 cat /etc/hosts
    d82c36bc2659   localhost
     [ . . ]
    container2  container2.RED
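Because of those /etc/hosts entries, containers on the overlay can reach each other by name. A quick check from container1 might look like this (hypothetical, assuming `ping` is installed in the image):

```shell
# From node1: reach container2 (running on node2) by name
# over the RED overlay network
docker exec container1 ping -c 1 container2
```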

    From node2, I’m trying to access container1, which is running on node1. I tried the command below, but it returns an error.

    user@node2$ docker exec -i -t container1 bash
    Error response from daemon: no such id: container1

    Any suggestion?


  • One solution for “Access a docker container in docker multi-host network”

    The network is shared only among the containers.

    While the network is shared among the containers across the multi-host overlay, the Docker daemons themselves cannot communicate with each other as is.

    The user@_node2_$ docker exec -i -t container1 bash command does not work because, indeed, no container with the id container1 is running on node2.

    Accessing remote Docker daemon

    Docker daemons communicate through a socket: a UNIX socket by default, but the --host option lets you specify other sockets the daemon should bind to.

    See the docker daemon man page:

       -H, --host=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or unix://[/path/to/socket] to use.
         The socket(s) to bind to in daemon mode specified using one or more
         tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.

    Thus, from any node it is possible to access a Docker daemon that is bound to a TCP socket.

    The command user@node2$ docker -H tcp://node1:port exec -i -t container1 bash would work well.
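As a sketch, assuming the daemon on node1 is restarted to also listen on TCP port 2375 (a plain-TCP example with no authentication; production setups should protect the socket with TLS):

```shell
# On node1: bind the daemon to both the local UNIX socket and TCP
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On node2: target node1's daemon for a single command...
docker -H tcp://node1:2375 exec -i -t container1 bash

# ...or for the whole shell session via the DOCKER_HOST variable
export DOCKER_HOST=tcp://node1:2375
docker ps
```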

    Docker and Docker cluster (Swarm)

    I do not know what you are trying to deploy; maybe you are just playing around with the tutorials, and that’s great! You may be interested in looking into Swarm, which deploys a cluster of Docker daemons. In short: you can use several nodes as if they were one powerful Docker daemon, accessed through a single node with the whole Docker API.
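For example, with (classic) Docker Swarm and the same Consul key-value store assumed above, a minimal sketch might look like the following; the ports and addresses are illustrative:

```shell
# On node0: run the Swarm manager, discovering nodes via Consul
docker run -d -p 4000:4000 swarm manage -H :4000 consul://node0:8500

# On node1/node2/node3: join the cluster
docker run -d swarm join --advertise=<node-ip>:2375 consul://node0:8500

# From anywhere: talk to the cluster as if it were one daemon
docker -H tcp://node0:4000 ps
```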
