docker containers communication on dev machine

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used both in production and on my local machine (Mac). How are people providing configuration like this these days?

So far I have come up with having my process take environment variables as arguments, which I can pass to the container with docker run -e. It seems unlikely, though, that I would be doing this type of thing in production.
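To make that concrete, here is a rough sketch of what I mean; the variable name ELASTICSEARCH_URL, the image name, and the address are just placeholders that my service would read at startup:

    # pass the Elasticsearch address into the container as an environment variable
    docker run -e ELASTICSEARCH_URL=http://192.168.99.100:9200 --name myservice myimage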

3 Solutions for “docker containers communication on dev machine”

    I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch.

    If elasticsearch is running in its own container on the same host (managed by the same docker daemon), then you can link it to your own container (at the docker run stage) with the --link option, which sets environment variables.

        docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
    

    See “Linking containers together”

    In that case, your container config can be static and written in advance, as it will always refer to the search host as ‘elasticsearch’.
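    As a rough sketch of what the link gives you from inside your container (assuming the elasticsearch container exposes port 9200; the exact variable names depend on the alias and the exposed ports):

        # the alias resolves through /etc/hosts, so a plain hostname works
        curl http://elasticsearch:9200
        # the link also injects environment variables derived from the alias and exposed ports
        echo $ELASTICSEARCH_PORT_9200_TCP_ADDR   # e.g. 172.17.0.2
        echo $ELASTICSEARCH_PORT_9200_TCP_PORT   # e.g. 9200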

    How about writing it into your application’s configuration file and mounting the configuration directory into your container with -v?
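    For example (the paths and names here are only an assumption, with the app reading its settings from /etc/myservice/config.yml inside the container):

        # config/ on the host holds the environment-specific settings,
        # e.g. the elasticsearch address for this particular machine
        docker run -v $(pwd)/config:/etc/myservice:ro --name myservice myimage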

    To keep things organized, I use Ansible for orchestration. This way you can have a template of your application’s configuration file, while the actual parameters live in the variable file of the corresponding Ansible playbook, at a centralized location. Ansible takes care of copying the template to the desired location and performing the variable substitution for you. It has also recently improved its Docker support.
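    A minimal sketch of that layout (all file and variable names here are hypothetical; the template uses ordinary Jinja2 substitution):

        # roles/myservice/templates/config.yml.j2  ->  contains e.g. "elasticsearch_url: {{ es_url }}"
        # group_vars/production.yml                ->  defines es_url for production
        # group_vars/local.yml                     ->  defines es_url for your Mac
        # render the template, copy it into place, then (re)start the container:
        ansible-playbook -i inventories/production site.yml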

    Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you’re using service names, not IP addresses. Even with IP addresses you’d have no problem as long as you only have one ES instance and you’re willing to restart your service every time the ES IP address changes.
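    For example, the same -e flag from the question, but pointing at a service name your container can resolve (via a link, a user-defined network, or DNS) rather than an address that can change:

        # a resolvable service name survives restarts and re-deployments,
        # unlike a hard-coded IP address
        docker run -e ELASTICSEARCH_URL=http://elasticsearch:9200 --name myservice myimage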

    You should really ask someone who knows how these things are resolved in your production environment, because you’re unlikely to be the only person in your org who has run into this problem; connecting to a database poses the same issue.

    If you have no constraints at all, then you should check out something like Consul from HashiCorp. It’ll help you a lot with this problem, if you’re allowed to use it.
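    For instance, once your services register themselves in Consul, a container can look them up through Consul’s DNS interface (8600 is Consul’s default DNS port; the service name elasticsearch is an assumption):

        # query Consul's DNS interface for the registered elasticsearch service
        dig @127.0.0.1 -p 8600 elasticsearch.service.consul SRV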
