Consul docker – advertise flag ignored

Hi, I have configured a cluster with two nodes (two VirtualBox VMs). The cluster starts correctly, but the advertise flag seems to be ignored by Consul:

  • vm1 (app) ip 192.168.20.10
  • vm2 (web) ip 192.168.20.11

docker-compose vm1 (app)

    version: '2'
    services:
        appconsul:
            build: consul/
            ports:
                - 192.168.20.10:8300:8300
                - 192.168.20.10:8301:8301
                - 192.168.20.10:8301:8301/udp
                - 192.168.20.10:8302:8302
                - 192.168.20.10:8302:8302/udp
                - 192.168.20.10:8400:8400
                - 192.168.20.10:8500:8500
                - 172.32.0.1:53:53/udp
            hostname: node_1
            command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
            networks:
                net-app:
    
        appregistrator:
            build: registrator/
            hostname: app
            command: consul://192.168.20.10:8500
            volumes:
                - /var/run/docker.sock:/tmp/docker.sock
            depends_on:
                - appconsul
            networks:
                net-app:
    networks:
        net-app:
            driver: bridge
            ipam:
                config:
                    - subnet: 172.32.0.0/24
    

docker-compose vm2 (web)

    version: '2'
    services:
        webconsul:
            build: consul/
            ports:
                - 192.168.20.11:8300:8300
                - 192.168.20.11:8301:8301
                - 192.168.20.11:8301:8301/udp
                - 192.168.20.11:8302:8302
                - 192.168.20.11:8302:8302/udp
                - 192.168.20.11:8400:8400
                - 192.168.20.11:8500:8500
                - 172.33.0.1:53:53/udp
            hostname: node_2
            command: -server -advertise 192.168.20.11 -join 192.168.20.10
            networks:
                net-web:
    
        webregistrator:
            build: registrator/
            hostname: web
            command: consul://192.168.20.11:8500
            volumes:
                - /var/run/docker.sock:/tmp/docker.sock
            depends_on:
                - webconsul
            networks:
                net-web:
    networks:
        net-web:
            driver: bridge
            ipam:
                config:
                    - subnet: 172.33.0.0/24
    

After startup there is no error about the advertise flag, but the services are registered with the private IPs of the internal Docker network instead of the IPs declared in -advertise (192.168.20.10 and 192.168.20.11). Any idea?
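
For reference, one way to check which address each service ended up registered with is Consul's catalog HTTP API; a minimal sketch using the standard /v1/catalog endpoints, run against node_1 (replace <name> with one of the service names returned by the first call):

    # List the services Registrator has registered on node_1
    curl -s http://192.168.20.10:8500/v1/catalog/services

    # Inspect one of them; the "Address"/"ServiceAddress" fields show the IP
    # it was registered with (per the symptom above, the internal 172.32.0.x
    # address rather than 192.168.20.10)
    curl -s http://192.168.20.10:8500/v1/catalog/service/<name>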

Attached is the log of node_1; node_2's log is essentially the same.

    appconsul_1       | ==> WARNING: Expect Mode enabled, expecting 2 servers
    appconsul_1       | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
    appconsul_1       | ==> Starting raft data migration...
    appconsul_1       | ==> Starting Consul agent...
    appconsul_1       | ==> Starting Consul agent RPC...
    appconsul_1       | ==> Consul agent running!
    appconsul_1       |          Node name: 'node_1'
    appconsul_1       |         Datacenter: 'dc1'
    appconsul_1       |             Server: true (bootstrap: false)
    appconsul_1       |        Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
    appconsul_1       |       Cluster Addr: 192.168.20.10 (LAN: 8301, WAN: 8302)
    appconsul_1       |     Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
    appconsul_1       |              Atlas: <disabled>
    appconsul_1       | 
    appconsul_1       | ==> Log data will now stream in as it occurs:
    appconsul_1       | 
    appconsul_1       |     2017/06/13 14:57:24 [INFO] raft: Node at 192.168.20.10:8300 [Follower] entering Follower state
    appconsul_1       |     2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1 192.168.20.10
    appconsul_1       |     2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1.dc1 192.168.20.10
    appconsul_1       |     2017/06/13 14:57:24 [INFO] consul: adding server node_1 (Addr: 192.168.20.10:8300) (DC: dc1)
    appconsul_1       |     2017/06/13 14:57:24 [INFO] consul: adding server node_1.dc1 (Addr: 192.168.20.10:8300) (DC: dc1)
    appconsul_1       |     2017/06/13 14:57:25 [ERR] agent: failed to sync remote state: No cluster leader
    appconsul_1       |     2017/06/13 14:57:25 [ERR] agent: failed to sync changes: No cluster leader
    appconsul_1       |     2017/06/13 14:57:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
    appconsul_1       |     2017/06/13 14:57:48 [ERR] agent: failed to sync remote state: No cluster leader
    appconsul_1       |     2017/06/13 14:58:13 [ERR] agent: failed to sync remote state: No cluster leader
    appconsul_1       |     2017/06/13 14:58:22 [INFO] serf: EventMemberJoin: node_2 192.168.20.11
    appconsul_1       |     2017/06/13 14:58:22 [INFO] consul: adding server node_2 (Addr: 192.168.20.11:8300) (DC: dc1)
    appconsul_1       |     2017/06/13 14:58:22 [INFO] consul: Attempting bootstrap with nodes: [192.168.20.10:8300 192.168.20.11:8300]
    appconsul_1       |     2017/06/13 14:58:23 [WARN] raft: Heartbeat timeout reached, starting election
    appconsul_1       |     2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Candidate] entering Candidate state
    appconsul_1       |     2017/06/13 14:58:23 [WARN] raft: Remote peer 192.168.20.11:8300 does not have local node 192.168.20.10:8300 as a peer
    appconsul_1       |     2017/06/13 14:58:23 [INFO] raft: Election won. Tally: 2
    appconsul_1       |     2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Leader] entering Leader state
    appconsul_1       |     2017/06/13 14:58:23 [INFO] consul: cluster leadership acquired
    appconsul_1       |     2017/06/13 14:58:23 [INFO] consul: New leader elected: node_1
    appconsul_1       |     2017/06/13 14:58:23 [INFO] raft: pipelining replication to peer 192.168.20.11:8300
    appconsul_1       |     2017/06/13 14:58:23 [INFO] consul: member 'node_1' joined, marking health alive
    appconsul_1       |     2017/06/13 14:58:23 [INFO] consul: member 'node_2' joined, marking health alive
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_solr_1:8983'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302:udp'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8500'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8300'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'consul'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_mysql_1:3306'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8400'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:53:udp'
    appconsul_1       |     2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301:udp'
    
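The Cluster Addr line above does reflect the -advertise value, so the agent itself appears to pick it up; it is the services registered by Registrator that carry the internal network IPs. To double-check what the agents advertise to each other, consul members can be run inside the container (a sketch; the container name is taken from the service IDs in the log and may differ in your setup):

    # Sketch: list cluster members and the addresses they advertise
    # (container name assumed from the log above; adjust to your project)
    docker exec -it dockerdata_appconsul_1 consul members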

Thanks for any reply

UPDATE:

I tried removing the networks section from the compose files, but I had the same problem. I resolved it by switching to compose file v1; this configuration works:

compose vm1 (app)

    appconsul:
        build: consul/
        ports:
            - 192.168.20.10:8300:8300
            - 192.168.20.10:8301:8301
            - 192.168.20.10:8301:8301/udp
            - 192.168.20.10:8302:8302
            - 192.168.20.10:8302:8302/udp
            - 192.168.20.10:8400:8400
            - 192.168.20.10:8500:8500
            - 172.32.0.1:53:53/udp
        hostname: node_1
        command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
    
    appregistrator:
        build: registrator/
        hostname: app
        command: consul://192.168.20.10:8500
        volumes:
            - /var/run/docker.sock:/tmp/docker.sock
        links:
            - appconsul
    

compose vm2 (web)

    webconsul:
        build: consul/
        ports:
            - 192.168.20.11:8300:8300
            - 192.168.20.11:8301:8301
            - 192.168.20.11:8301:8301/udp
            - 192.168.20.11:8302:8302
            - 192.168.20.11:8302:8302/udp
            - 192.168.20.11:8400:8400
            - 192.168.20.11:8500:8500
            - 172.33.0.1:53:53/udp
        hostname: node_2
        command: -server -advertise 192.168.20.11 -join 192.168.20.10
    
    webregistrator:
        build: registrator/
        hostname: web
        command: consul://192.168.20.11:8500
        volumes:
            - /var/run/docker.sock:/tmp/docker.sock
        links:
            - webconsul
    

One Solution

The problem is the version of the compose file: v2 and v3 have the same problem, it works only with compose file v1.
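
Presumably this is because a v2/v3 compose file puts the containers on a user-defined bridge network (net-app / net-web above), so Registrator picks up the containers' addresses on that network rather than the host address, while v1 keeps them on the default bridge. If staying on v2 is preferred, Registrator's -ip option (intended to override the IP it registers; check the Registrator documentation for the image built from registrator/) might be an alternative. A rough, untested sketch of the webregistrator service for vm2:

    # Sketch only (untested): force Registrator to register services with the
    # host IP via its -ip flag instead of the network-internal address
    services:
        webregistrator:
            build: registrator/
            hostname: web
            command: -ip 192.168.20.11 consul://192.168.20.11:8500
            volumes:
                - /var/run/docker.sock:/tmp/docker.sock
            depends_on:
                - webconsul
            networks:
                net-web: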
