How to set up a group of Docker containers with the same addresses?

I am going to install distributed software inside Docker containers. It could be something like this:

container1: 172.0.0.10 – management node

container2: 172.0.0.20 – database node

container3: 172.0.0.30 – UI node

I know how to manage containers as a group and how to link them to each other. The problem is that IP information is stored in many places (the database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure breaks.

The easiest way I can see is to use several virtual networks on the host, so the containers keep the same addresses but do not affect each other. However, as I understand it, this is currently not possible with Docker, because you cannot start the Docker daemon with several bridges connected to one physical interface.

The question is: could you advise how to create such an infrastructure? Thanks.

2 Answers

Don’t do it this way.

Containers are ephemeral; they come and go and will be assigned new IPs. Fighting against this is a bad idea. Instead, you need to figure out how to deal with changing IPs. There are a few solutions, and which one you should use depends entirely on your use case.

Some suggestions:

• You may be able to get away with just forwarding ports on your host, so your DB is always reachable at HOST_IP:8888 or similar (see the sketch after this list).

• If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links, which put the IP of the linked container into environment variables.
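
A minimal sketch of both suggestions. The image names my-db-image and my-ui-image, the container port 5432, the host port 8888, and the link alias db are all illustrative, not anything from the question:

    # Suggestion 1: publish the database port on the host, so clients always use HOST_IP:8888
    # (my-db-image and container port 5432 are placeholders for your own image and port)
    docker run --name db -d -p 8888:5432 my-db-image

    # Suggestion 2: link another container against the alias "db"; Docker links inject
    # environment variables such as DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT
    # (names come from the link alias and the exposed port) holding the linked
    # container's IP and port
    docker run --name ui -d --link db:db my-ui-image

Inside the linked container, an entrypoint script can read those variables and write them into the application's configuration before starting the service.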

If those don’t work for you, you need to start looking at more complete solutions such as the ambassador pattern and Consul. In general, this problem is known as service discovery.

Adrian gave a good answer, but if you cannot use that approach you could do the following:

• create IP aliases on the hosts running Docker (there can be many Docker hosts) – a sketch of this step follows the list;
• then run the containers and map their ports to those addresses.
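
The alias step could look roughly like this, assuming the host's external interface is named eth0 (adjust for your setup); note that addresses added with ip addr do not survive a reboot unless you also put them into the host's network configuration:

    # add one alias per node on the Docker host (eth0 is an assumption)
    sudo ip addr add 172.0.0.10/32 dev eth0   # management node
    sudo ip addr add 172.0.0.20/32 dev eth0   # database node
    sudo ip addr add 172.0.0.30/32 dev eth0   # UI node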

Then map the container ports to the aliased addresses:

    docker run --name management --restart=always -d -p 172.0.0.10:NNNN:NNNN management
    docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db
    docker run --name ui --restart=always -d -p 172.0.0.30:NNNN:NNNN ui
    

Now you can reach your containers at fixed addresses, and you can move them to different hosts (together with the IP alias) and everything will keep working, as sketched below.
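
Moving a node then means moving its alias along with the container, roughly like this (same eth0 assumption; NNNN stands for whatever port the service uses, as above; neighbouring hosts may need a moment for their ARP caches to pick up the new location):

    # on the old host: stop the container and release the alias
    docker rm -f db
    sudo ip addr del 172.0.0.20/32 dev eth0

    # on the new host: claim the alias and start the container again
    sudo ip addr add 172.0.0.20/32 dev eth0
    docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db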
