Access a Docker container in a Docker multi-host network

I have created a Docker multi-host network using the Docker overlay network driver with 4 nodes: node0, node1, node2, and node3. Node0 acts as the key-value store that shares node information, while node1, node2, and node3 are bound to that key-value store.

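For context, a setup like this is typically wired together along these lines (Consul as the key-value store on node0, the addresses, and the interface name are assumptions; adjust to the actual environment):

    # node0: run the key-value store (Consul assumed)
    user@node0$ docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

    # node1, node2, node3: start the Docker daemon bound to that store
    # (use `docker daemon` instead of `dockerd` on older releases)
    user@nodeX$ dockerd --cluster-store=consul://node0:8500 --cluster-advertise=eth0:2376

    # on any bound node: create the overlay network; it becomes visible on all of them
    user@node1$ docker network create -d overlay RED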

    Here are node1's networks:

    user@node1$ docker network ls
    NETWORK ID          NAME                DRIVER
    04adb1ab4833        RED                 overlay             
     [ . . ]
    

    As for node2's networks:

    user@node2$ docker network ls
    NETWORK ID          NAME                DRIVER
    04adb1ab4833        RED                 overlay             
     [ . . ]
    

    container1 is running on node1 and is attached to the RED overlay network.

    user@node1$ docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                    PORTS               NAMES
    f9bacac3c01d        ubuntu              "/bin/bash"         3 hours ago         Up 2 hours                                    container1
    

    Docker added an entry to /etc/hosts for each container that belongs to the RED overlay network.

    user@node1$ docker exec container1 cat /etc/hosts
    
    10.10.10.2  d82c36bc2659
    127.0.0.1   localhost
     [ . . ]
    10.10.10.3  container2
    10.10.10.3  container2.RED
    

    From node2, I’m trying to access container1, which is running on node1. I tried to exec into container1 using the command below, but it returns an error.

    user@node2$ docker exec -i -t container1 bash
    Error response from daemon: no such id: container1
    

    Any suggestions?

    Thanks.

One solution:

    The network is shared only by the containers.

    While the network is shared among the containers across the multi-host overlay, the Docker daemons cannot communicate with each other as is.

    The user@node2$ docker exec -i -t container1 bash does not work because, indeed, no container with the id container1 is running on node2.
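    The overlay network itself does work across the hosts, though: a container started on node2 and attached to RED should reach container1 by name (a quick check, assuming the names from the question and that ping is available in the image):

        # from node2: any container attached to RED can reach container1 over the overlay
        user@node2$ docker run --rm --net=RED ubuntu ping -c 3 container1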

    Accessing remote Docker daemon

    Docker daemons communicate through sockets: a UNIX socket by default, but it is possible to pass the --host option to specify other sockets the daemon should bind to.

    See the docker daemon man page:

       -H, --host=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or unix://[/path/to/socket] to use.
         The socket(s) to bind to in daemon mode specified using one or more
         tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.
    

    Thus, it is possible to access, from any node, a Docker daemon bound to a TCP socket.
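    For example, node1's daemon could be started so it listens on both the local UNIX socket and a TCP socket. Port 2375 is only the conventional unencrypted port and is an assumption here; an unauthenticated TCP socket should be exposed only on a trusted network, or secured with TLS:

        # node1: bind the daemon to the local UNIX socket and to a TCP socket
        # (use `docker daemon` instead of `dockerd` on older releases)
        user@node1$ dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375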

    The command user@node2$ docker -H tcp://node1:port exec -i -t container1 bash would work well.
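    Equivalently, the remote endpoint can be set once through the DOCKER_HOST environment variable (2375 again being the assumed port):

        user@node2$ export DOCKER_HOST=tcp://node1:2375
        user@node2$ docker exec -i -t container1 bash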

    Docker and Docker cluster (Swarm)

    I do not know what you are trying to deploy, maybe you are just playing around with the tutorials, and that’s great! You may be interested in looking into Swarm, which deploys a cluster of Docker daemons. In short: you can use several nodes as if they were one powerful Docker daemon, accessed through a single node with the whole Docker API.
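    A rough sketch with the standalone Swarm of that era (the swarm image, the ports, and reusing the consul store on node0 are assumptions):

        # one node runs the Swarm manager, backed by the same key-value store
        user@node3$ docker run -d -p 4000:4000 swarm manage -H :4000 consul://node0:8500

        # every node joins the cluster, advertising its own daemon endpoint
        user@nodeX$ docker run -d swarm join --advertise=<node_ip>:2375 consul://node0:8500

        # the whole cluster is then driven through the manager with the regular Docker client
        user@node2$ docker -H tcp://node3:4000 exec -i -t container1 bash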
