How to set up a group of docker containers with the same addresses?

I am going to install distributed software inside docker containers. It might look something like:

container1: – management node

container2: – database node

container3: – UI node

I know how to manage containers as a group and how to link them to each other. The problem is that IP information is stored in many places (the database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure breaks.

The easiest way I can see is to use several virtual networks on the host, so that the containers keep the same addresses without affecting each other. However, as I understand it, this is not currently possible in docker, as you cannot start the docker daemon with several bridges connected to one physical interface.

The question is: could you advise how to create such an infrastructure? Thanks.

2 Solutions for “How to setup group of docker containers with the same addresses?”

    Don’t do it this way.

Containers are ephemeral: they come and go, and they will be assigned new IPs. Fighting against this is a bad idea. Instead, figure out how to deal with changing IPs. There are a few solutions; which one you should use depends entirely on your use case.

    Some suggestions:

• You may be able to get away with just forwarding ports on your host, so your DB is always reachable at HOST_IP:8888 or similar.

• If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links, which put the IP of the linked container into an environment variable.
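For example, a container entrypoint can render the injected variable into a config file at startup. A minimal sketch, assuming a postgres container linked under the alias `db` (legacy links then inject `DB_PORT_5432_TCP_ADDR`); the config path and key are illustrative:

```shell
# Entrypoint sketch: Docker links inject the linked container's address as
# an environment variable; render it into the app's config file on startup.
DB_HOST="${DB_PORT_5432_TCP_ADDR:-127.0.0.1}"   # fallback for local runs
printf 'db_host = %s\n' "$DB_HOST" > /tmp/app.conf
cat /tmp/app.conf
```

Because the file is regenerated on every start, the application never stores a stale IP.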

If those don’t work for you, you need to start looking at more complete solutions such as the ambassador pattern and Consul. In general, this problem is known as service discovery.
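Whatever mechanism you pick, the idea is the same: the application resolves a service name to an address at runtime instead of hard-coding an IP. A minimal sketch, using `localhost` as a stand-in for a discovery name such as Consul’s `db.service.consul`:

```shell
# Resolve a service name at startup rather than baking an IP into the image.
SERVICE_NAME=localhost          # stand-in for e.g. db.service.consul
DB_ADDR=$(getent hosts "$SERVICE_NAME" | awk '{ print $1; exit }')
echo "database is at $DB_ADDR"
```

With DNS-based discovery, redeploying the database to a new host changes what the name resolves to, and the application follows automatically.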

Adrian gave a good answer, but if you cannot use that approach you could do the following:

• create IP aliases on the hosts running docker (there can be many docker hosts)
• then run the containers, mapping their published ports to those alias addresses.


# first create one IP alias per service, e.g.: ip addr add 10.0.0.1/24 dev eth0
# (addresses, ports, and image names below are examples)
docker run --name management --restart=always -d -p 10.0.0.1:80:80 management
docker run --name db --restart=always -d -p 10.0.0.2:5432:5432 db
docker run --name ui --restart=always -d -p 10.0.0.3:80:80 ui

Now you can reach each container at a fixed address, and you can move a container to a different host (together with its IP alias) and everything keeps working.
