Docker compose share environment variables

docker-compose.yml

version: '2.1'
services:
  db1:
    [...]
    healthcheck: ..

  db2:
    [...]
    healthcheck: .. 

  service1:
    [...]
    links:
      - db1:dbname 
      - db2:dbname
    depends_on:
      db1:
        condition: service_healthy
      db2:
        condition: service_healthy

However, the main service, service1, fails because it is looking for the environment variables from the databases:

${env.DBNAME_PORT_3306_TCP_ADDR}
${env.DBNAME_PORT_3306_TCP_PORT}
${env.DBNAME_ENV_MYSQL_DATABASE}

I know the compose docs state “Environment variables are no longer the recommended method for connecting to linked services. Environment variables will only be populated if you’re using the legacy version 1 Compose file format.” but there’s not much I can do here without them.

What is best practice here? Thanks!

2 Solutions for “Docker compose share environment variables”

    A few thoughts on this one:

    • Links are effectively deprecated in favor of DNS-based service discovery. You’ll still want to use depends_on to maintain startup order.

    • Environment variables have only gone away in one limited respect, with docker stack deploy. Since you’re on version 2.1, you’re clearly using docker-compose. Even there, what docker stack deploy drops is the expansion of environment variables from the host into the yml file, not their injection from the yml file into the container. And even with that expansion gone, you can use docker-compose config as a preprocessor to generate a yml that docker stack deploy can consume. That’s a long way of saying: don’t change your design for this limitation.

    • Since I recommended against linking, a good replacement for the two databases coming in with the same name on the network is a network alias.

    • I’m assuming you defined two databases like this to map to separate volumes/filesystems or constrain to separate hosts, etc. Otherwise, scaling the instance would let you make a single definition.

    The result would look like:

    version: '2.1'
    services:
      db1:
        [...]
        healthcheck: ..
        networks:
          default:
            aliases:
             - dbname
    
      db2:
        [...]
        healthcheck: .. 
        networks:
          default:
            aliases:
             - dbname
    
      service1:
        [...]
        environment:
          - DBNAME_PORT_3306_TCP_ADDR=dbname
          - DBNAME_PORT_3306_TCP_PORT=3306
          - DBNAME_ENV_MYSQL_DATABASE=yourdb
        depends_on:
          db1:
            condition: service_healthy
          db2:
            condition: service_healthy
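With that compose file, service1 sees those values as plain environment variables, so the application can read them directly. A minimal sketch in Python (the variable names come from the compose file above; the dbname/3306/yourdb fallbacks mirror those compose values and are only illustrative defaults for local runs — no actual database connection is made here, the sketch only assembles the DSN string):

```python
import os

def build_dsn() -> str:
    """Assemble a MySQL DSN from the variables set in the compose file."""
    host = os.environ.get("DBNAME_PORT_3306_TCP_ADDR", "dbname")
    port = os.environ.get("DBNAME_PORT_3306_TCP_PORT", "3306")
    database = os.environ.get("DBNAME_ENV_MYSQL_DATABASE", "yourdb")
    return f"mysql://{host}:{port}/{database}"

if __name__ == "__main__":
    # Inside the compose network, "dbname" resolves to whichever db
    # instance answers for that network alias.
    print(build_dsn())
```

The point is that the application keeps reading the exact same variable names it always did; only the source of their values moved from link injection to an explicit environment: section.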
    

    I think you have misunderstood how the docker-compose.yml file works.

    The first issue I see is that you have given the same alias, dbname, to two different links, db1 and db2, in the links section of the service1 service. The part after the colon is the alias. You do not have to use one.

    Also, when you use linking there is no need to add the depends_on section; linking implies it. Furthermore, the condition form of depends_on was removed in version 3 of the file format.

    As for your question, here is what I usually do when linking containers.

    First, in your application, make sure your connection string is parametrized based on environment variables. Depending on your programming language/tools this might be simple or not. For the sake of simplicity, imagine you can parametrize your connection string like this (example from Play Framework):

    connectionString=jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}

    In Play Framework this means that DB_HOST, DB_PORT and DB_NAME are read from the environment variables. Your framework probably has a similar or alternative construct.
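If your stack is not on the JVM, the same pattern is easy to reproduce. A hypothetical Python equivalent of that Play Framework line (the localhost/5432/postgres defaults are my own illustrative fallbacks for local development, not anything Docker provides):

```python
import os

# Same idea as the Play config line: every deployment-specific value
# comes from the environment, with local-development defaults.
connection_string = (
    "postgresql://"
    f"{os.environ.get('DB_HOST', 'localhost')}:"
    f"{os.environ.get('DB_PORT', '5432')}/"
    f"{os.environ.get('DB_NAME', 'postgres')}"
)
print(connection_string)
```

The application code never mentions a container name; it only knows the three variable names, and the deployment decides what they resolve to.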

    Now, properly link the services in the docker-compose file:

    version: '2.1'
    services:
      db1:
        [...]
        healthcheck: ..
    
      service1:
        [...]
        links:
          - db1:database1
    

    However, this is not enough. It would be, if you had written database1 instead of DB_HOST and hardcoded the port and database name in the application configuration. That is also acceptable, but I like to keep my application configuration as independent of the platform as possible. That’s why I set up parts of the configuration to be read from environment variables (not to mention, this approach plays nicely with Docker 🙂).

    Some environment variables simply take the value of a link alias, like DB_HOST=database1. This might look redundant, but it keeps the application separated from the deployment, IMHO.

    In the end, my service would look something like this:

    version: '2.1'
    services:
      db1:
        [...]
        healthcheck: ..
    
      service1:
        [...]
        links:
          - db1:database1
        environment:
          - DB_HOST=database1
          - DB_PORT=5432
          - DB_NAME=accounting
    

    You can use the same approach with multiple databases.
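With multiple databases, the pattern just repeats with one variable prefix per link alias. A small hypothetical helper (the DB1_/DB2_ prefixes and the fallback values are my own convention for this sketch, not anything compose defines):

```python
import os

def db_config(prefix: str) -> dict:
    """Collect HOST/PORT/NAME for one database from prefixed env vars."""
    return {
        "host": os.environ.get(f"{prefix}_HOST", "localhost"),
        "port": int(os.environ.get(f"{prefix}_PORT", "5432")),
        "name": os.environ.get(f"{prefix}_NAME", "postgres"),
    }

# One call per linked database, e.g. links db1:database1 and db2:database2,
# with DB1_HOST=database1 and DB2_HOST=database2 in the environment section.
accounting = db_config("DB1")
reporting = db_config("DB2")
print(accounting, reporting)
```

Each service then declares DB1_* and DB2_* in its environment: section, and the application stays unaware of container names entirely.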
