Docker-compose internal communication using endpoints

Working on getting two different services running inside a single docker-compose.yml to communicate with each other within docker-compose.

The two services are regular Node.js servers (app1 & app2). app1 receives POST requests from an external source, and should then send a request to the other Node.js server, app2, with information based on the initial POST request.

    The challenge I’m facing is how to make the two Node.js containers communicate with each other without hardcoding a specific container name. The only way I can currently get the two containers to communicate is to hardcode a URL like http://myproject_app1_1, which directs the POST request from app1 to app2 correctly; but because of the way Docker increments container names, this neither scales well nor survives a container crashing and restarting.

    Instead I’d prefer to send the POST request to something along the lines of http://app2, or a similar way to alias a number of containers, so that no matter how many instances of the app2 container exist, Docker will pass the request to one of the running app2 containers.

    Here’s a sample of my docker-compose.yml file:

    version: '2'
    services:
      app1:
        image: 'mhart/alpine-node:6.3.0'
        container_name: app1
        command: npm start
      app2:
        image: 'mhart/alpine-node:6.3.0'
        container_name: app2
        command: npm start
      # databases [...]

    Thanks in advance.

  • 2 Solutions collected from the web for “Docker-compose internal communication using endpoints”

    When you run two containers from one compose file, Docker automatically sets up an “internal DNS” that allows you to reference other containers by the service name defined in the compose file (assuming they are on the same network). So referencing http://app2 from the first service should work.

    See this example proxying requests from proxy to the backend whoamiapp by just using the service name.


    server {
        listen 80;
        location / {
            proxy_pass http://whoamiapp;
        }
    }

    version: "2"
    services:
      proxy:
        image: nginx
        volumes:
          - ./default.conf:/etc/nginx/conf.d/default.conf:ro
        ports:
          - "80:80"
      whoamiapp:
        image: emilevauge/whoami

    Run it using docker-compose up -d and try running curl <dockerhost>.

    This sample uses the default network with docker-compose file version 2. You can read more about how networking with docker-compose works in the Compose networking documentation.

    Perhaps your configuration of the container_name property somehow interferes with this behaviour? You should not need to define it yourself.

    OK, this is two questions.

    First: how to avoid hardcoding container names.
    You can use environment variables, like:

    Node.js file:

    const http = require('http');
    const app2Address = process.env.APP2_ADDRESS;
    const request = http.request(app2Address, (response) => { /* handle response */ });
    request.end();

    docker compose file:

      version: '2'
      services:
        app1:
          image: 'mhart/alpine-node:6.3.0'
          container_name: app1
          command: npm start
          environment:
            - APP2_ADDRESS=${app2_address}
        app2:
          image: 'mhart/alpine-node:6.3.0'
          container_name: app2
          command: npm start
          environment:
            - HOSTNAME=${app2_address}

    and .env file like:

    Alternatively, you can put a wildcard placeholder in the application config file and substitute the real hostname when the container starts, for example with sed:

    sed -i "s/APP2_HOSTNAME_WILDCARD/${app2_address}/g" /app1/config.js
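    The same substitution can be done in Node itself at container start instead of sed. A minimal sketch, assuming the config template contains the (hypothetical) placeholder APP2_HOSTNAME_WILDCARD:

```javascript
// Replace a placeholder marker in a config template with the real hostname.
// APP2_HOSTNAME_WILDCARD is an assumed marker string, matching the sed
// example above; the hostname would come from process.env.APP2_ADDRESS.
function substituteHost(configText, hostname) {
  return configText.replace(/APP2_HOSTNAME_WILDCARD/g, hostname);
}

// Usage at container start (paths are illustrative):
// const fs = require('fs');
// const text = fs.readFileSync('/app1/config.js', 'utf8');
// fs.writeFileSync('/app1/config.js',
//   substituteHost(text, process.env.APP2_ADDRESS));
```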

    Second: how to make transparent load balancing.

    You need to use an HTTP load balancer, such as:

    • haproxy
    • nginx as load balancer
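    As a sketch of the nginx option, a load-balancer config could look like the following. The upstream name and the backend hostnames are assumptions for illustration, not taken from the question:

```nginx
# Round-robin across two app2 instances. The server hostnames below are
# hypothetical -- they would be whatever names resolve to your app2
# containers on the compose network.
upstream app2_backend {
    server app2_1:80;
    server app2_2:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://app2_backend;
    }
}
```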

    There is a hello-world tutorial on how to set up load balancing with Docker.
