Docker swarm and service discovery

I’m moving away from docker-compose files to docker swarm, but I just can’t figure this out.

I have two services – an nginx proxy and a website – both running just fine in docker swarm (which has three nodes).

The issue I’ve got is that I need to configure nginx to proxy_pass to the backend website. Currently the only way I can get this to work is by specifying the IP address of one of the nodes.

    My services are created as follows:

    docker service create --mount type=bind,source=/../nginx.conf,target=/etc/nginx/conf.d/default.conf \
    -p 443:443 \
    --name nginx \


    docker service create --name ynab \
    -p 5000:5000 \
    --replicas 2 \

    I’ve tried using the service name but that just doesn’t work.

    Really, I don’t think I should even have to publish the ynab service’s ports (not publishing them worked when I used docker-compose).

    In one of the nginx containers I have tried the following:

    root@5fabc5611264:/# curl http://ynab:5000/ping
    curl: (6) Could not resolve host: ynab
    root@5fabc5611264:/# curl http://nginx:5000/ping
    curl: (6) Could not resolve host: nginx
    root@5fabc5611264:/# curl http://<host>:5000/ping
    curl: (7) Failed to connect to port 5000: Connection refused
    root@5fabc5611264:/# curl http://localhost:5000/ping
    curl: (7) Failed to connect to localhost port 5000: Connection refused

    Using the process list, I tried connecting with the running instance’s ID and name:

    root@5fabc5611264:/# curl http://ynab.1:5000/ping
    curl: (6) Could not resolve host: ynab.1
    root@5fabc5611264:/# curl http://pj0ekc6i7n0v:5000/ping
    curl: (6) Could not resolve host: pj0ekc6i7n0v

    But I can only get it to work if I use the nodes’ public IP addresses:

    root@5fabc5611264:/# curl http://<node-public-ip>:5000/ping

    I really don’t want to use a public IP, in case that node goes down. I’m sure I must just be doing something wrong!

One solution for “Docker swarm and service discovery”:

    The services need to be connected to the same network for this to work.

    $ docker network create -d overlay fake-internet-points
    $ docker service create --name service_one --network fake-internet-points [...]
    $ docker service create --name service_two --network fake-internet-points [...] 
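
    Applied to the setup in the question, that might look like the following. This is a sketch: the network name, the image name ynab-image, and the config path are placeholders, since the original commands did not show them.

    ```shell
    # Overlay network shared by the proxy and the backend
    docker network create -d overlay appnet

    # Backend website; "ynab-image" is a placeholder image name.
    # No -p 5000:5000 needed: nginx reaches it over the overlay network.
    docker service create --name ynab \
      --network appnet \
      --replicas 2 \
      ynab-image

    # Nginx proxy; only the public-facing port is published.
    docker service create --name nginx \
      --network appnet \
      --mount type=bind,source=/path/to/nginx.conf,target=/etc/nginx/conf.d/default.conf \
      -p 443:443 \
      nginx
    ```

    With both services on the same overlay network, swarm’s internal DNS resolves the service name, so `curl http://ynab:5000/ping` should work from inside the nginx container, and the nginx config can use `proxy_pass http://ynab:5000;` instead of a node IP.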