Docker app and database on different containers

I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.

The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.

I also know the containers are on the same bridged network, and using curl I confirmed they can reach each other by their container IP addresses. However, I was hoping to make this work without changing the "localhost" references in the code.

Is there a way to forward these ports, perhaps in my app's Dockerfile using iptables? I want my app's container to be able to reach MongoDB at "localhost:27017", for example, even though they're in separate containers.

I'm using Docker for Mac (v1.13.1). In production we'll use Docker on an Ubuntu server.

I'm somewhat of a noob. Thank you for your help.

  • One Solution

    Docker only lets you map container ports to host ports (not the reverse), but there are a few ways to achieve what you want:

    • You can use --net=host, which makes the container use the host's network stack instead of the default bridge. Note that this raises security concerns, because the container can potentially reach any other service running on your host.
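
      With host networking, "localhost" inside the container is the host itself, so anything the databases publish on the host becomes reachable without code changes. A minimal sketch, assuming a hypothetical image name my-node-app and the databases listening on the host's default ports:

```
# Share the host's network namespace; no -p mappings are needed
# (or possible), since the container has no separate network stack.
docker run -d --net=host my-node-app

# MongoDB and Redis published on the host are now reachable from the
# app at localhost:27017 and localhost:6379.
```

      This fits your production layout (databases installed locally) more naturally than development; also note that on Docker for Mac the "host" is the Linux VM running the containers, not macOS itself.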

    • You can run something inside your container that maps a local port to a remote one (e.g. rinetd or an SSH tunnel). This basically creates a mapping localhost:SOME_PORT -> HOST_IP_IN_DOCKER0:SOME_PORT.
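
      For the rinetd approach, the app container runs a tiny forwarder that relays its own localhost ports to the docker0 gateway, behind which the databases' published ports live. A sketch of an rinetd.conf, assuming the default bridge gateway address 172.17.0.1 (verify yours; the ports are the MongoDB and Redis defaults):

```
# bindaddress  bindport  connectaddress  connectport
# MongoDB
127.0.0.1      27017     172.17.0.1      27017
# Redis
127.0.0.1      6379      172.17.0.1      6379
```

      With this file in place and rinetd started inside the app container, localhost:27017 and localhost:6379 transparently reach the databases, and the code keeps its "localhost" assumption.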

    • As stated in the comments, you can create a script to extract the docker0 IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}') and expose it as an environment variable.
      Supposing that script is wrapped in a command named getip, you could run it like this:

      $ docker run -e DOCKER_HOST=$(getip) ...
      

      and then inside the container use the env var named DOCKER_HOST to connect your services.
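
      That getip wrapper can be sketched as a small shell function. This assumes the legacy net-tools ifconfig output ("inet addr:172.17.0.1 ..."); on distributions where ifconfig prints "inet 172.17.0.1" instead, the awk pattern needs adjusting:

```shell
#!/bin/sh
# getip: print the IPv4 address of the docker0 bridge on the host.
# Matches the legacy ifconfig line format:
#   inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
# substr($2,6) strips the leading "addr:" from the second field.
getip() {
  ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'
}
```

      Inside the container, the Node code would then read the variable instead of hardcoding the host, e.g. a connection string like mongodb://${DOCKER_HOST}:27017.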
