Docker containers cannot connect through overlay networks
I’ve got multiple hosts running Docker, with Consul as the key-value store. I am able to create overlay networks, containers can see each other’s hostname and IP, and /etc/hosts is nicely updated when containers are created/destroyed. However, containers on different hosts can’t actually connect to each other (those on the same host can).
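For reference, each daemon is started with roughly the flags below to point it at the Consul store (the Consul address and the eth1 interface are placeholders for my setup, not authoritative values):

```shell
# Sketch of the daemon options used on each host for a Consul-backed
# overlay network; address and interface are illustrative assumptions.
dockerd \
  --cluster-store=consul://192.168.57.101:8500 \
  --cluster-advertise=eth1:2376
```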
Investigating further, I found these entries in the Docker daemon logs:
[INFO] serf: EventMemberJoin: vagrant-ubuntu-trusty-64 192.168.57.103
[ERR] memberlist: Conflicting address for vagrant-ubuntu-trusty-64. Mine: 192.168.57.103:7946 Theirs: 192.168.57.102:7946
[ERR] serf: Node name conflicts with another node at 192.168.57.102:7946. Names must be unique! (Resolution enabled: true)
Should the Docker daemons identify themselves somehow? It looks like Serf is confused because each daemon just uses the hostname as its identifier.
2 solutions for “Docker containers cannot connect through overlay networks”
Answer: cluster members need to have unique hostnames, because Docker daemons are identified by hostname (by default). Why on Earth did Docker leave this out of their tutorials?
Check the hostname of each machine on which Docker is running, from a terminal.
It should be different for all nodes.
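For example, a quick check and fix might look like this (the node name "node-2" is a made-up example, and the service-restart command may differ by distribution):

```shell
# Print this node's hostname -- it must be unique across the cluster.
hostname

# If two nodes report the same name, give one of them a unique name
# and restart Docker so Serf re-registers under the new identifier:
#   sudo hostnamectl set-hostname node-2
#   sudo service docker restart
```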