Setting up Docker containers with NAT

I am setting up two Docker containers:

     container1                 container2
     |        |                     |
    eth0     eth1                   |
     |        |                    eth1
   docker0   docker1<----------------

docker0 and docker1 are the bridges.
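For context, the extra bridge and the eth1 legs in this diagram are what pipework creates. A rough, untested sketch of the equivalent manual steps on the host (interface and container names are illustrative, and modern iproute2 syntax is assumed):

```shell
# Create the second bridge (the diagram's docker1) on the host.
sudo ip link add docker1 type bridge
sudo ip link set docker1 up

# Create a veth pair: one end stays attached to the bridge,
# the peer will become the container's eth1.
sudo ip link add veth-c1 type veth peer name c1-eth1
sudo ip link set veth-c1 master docker1
sudo ip link set veth-c1 up

# Move the peer into container1's network namespace and rename it eth1.
PID=$(docker inspect -f '{{.State.Pid}}' container1)
sudo ip link set c1-eth1 netns "$PID" name eth1
```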

    I have IP forwarding set to 1 on both the host and in the containers, and I have set up the following in container 1:

    iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE

    Still, I am not able to ping anything on the internet from container 2. I can see that the packets are being received at eth1 of container 1.

    OS: ubuntu 13.10
    docker version: 0.11.1, build fb99f99

    Am I missing some configuration?

    Steps to reproduce:

    SERV=$(docker run --privileged=true -i -d -t -v ~/Projects/code/myproject/build:/build:ro debian:7.4 /bin/bash)
    CLI=$(docker run --privileged=true -i -d -t -v ~/Projects/code/myproject/build:/build:ro debian:7.4 /bin/bash)
    sudo pipework br1 $SERV
    sudo pipework br1 $CLI 

    In $SERV:
    iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE

    In $CLI:
    Disable the eth0 interface and set the default route via eth1.

    Now pings from $CLI reach container 1, but not the internet.
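To narrow down where the packets are dropped, it can help to watch each hop inside container 1; a diagnostic sketch (assuming tcpdump is installed there):

```shell
# Inside container 1 ($SERV):
tcpdump -ni eth1 icmp               # pings from container 2 should appear here
tcpdump -ni eth0 icmp               # if forwarding and NAT work, they also appear
                                    # here, rewritten to eth0's source address
cat /proc/sys/net/ipv4/ip_forward   # must print 1 inside this namespace too
iptables -t nat -L POSTROUTING -nv  # the MASQUERADE rule's packet counter
                                    # should increase while pinging
```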

2 Solutions for “Setting up Docker containers with NAT”

    Hm, it should work as you described. Maybe the default route is not configured correctly.
    This is what I did:

    SERV=$(docker run -i --privileged -d -t debian:7.4 /bin/bash)
    CLI=$(docker run --privileged -i -d -t debian:7.4 /bin/bash)
    docker exec -ti $CLI ping 8.8.8.8  # Internet up (8.8.8.8 as an example target)
    docker exec -ti $CLI ip link set eth0 down
    docker exec -ti $CLI ping 8.8.8.8  # Internet down
    pipework br1 $SERV
    pipework br1 $CLI
    docker exec -ti $SERV apt-get install -y iptables
    docker exec -ti $SERV iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
    docker exec -ti $CLI ip route add default via <SERV eth1 address> dev eth1
    docker exec -ti $CLI ping 8.8.8.8  # Internet up
    docker exec -ti $CLI apt-get install -y traceroute
    docker exec -ti $CLI traceroute 8.8.8.8
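Since the suspected problem is the default route: `ip route add default via …` needs a gateway address, which in this setup is $SERV's own address on eth1. A sketch with example addresses (192.168.7.x is an assumption, not from the original):

```shell
# Find $SERV's eth1 address, then point $CLI's default route at it.
docker exec -ti $SERV ip addr show eth1   # note the inet address, e.g. 192.168.7.1
docker exec -ti $CLI  ip route add default via 192.168.7.1 dev eth1
docker exec -ti $CLI  ip route show       # expect: default via 192.168.7.1 dev eth1
```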

    Note that iptables inside a container can only be modified when the container is run with --privileged, as in the commands above.

    Here is a script:

    iptables, along with a couple of tracing tools, is installed during the image build (Dockerfile):

    inetutils-traceroute iputils-tracepath iptables

    Here I use “phusion-dockerbase”; you can use whatever image you want:

    ### ==> Install & configure iptable during build
    #RUN sudo apt-get install -y inetutils-traceroute iputils-tracepath iptables
    # Build the image 
    #sudo docker build -t mybimage -f phusion-dockerbase .
    ### container1
    C1=$(docker run --privileged -i -d -t mybimage /bin/bash)
    sudo docker exec -ti $C1 iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
    sleep 2
    sudo pipework br6 -i eth1 $C1
    ### container2
    lxterminal -e "sudo docker run -ti --name c2name mybimage /bin/bash"
    sleep 2
    C2="$(sudo docker ps | grep c2name | awk '{ print $1; }')"
    sudo pipework br6 -i eth1 $C2



    From container1 (I use lxterminal to open it in a new window), you can then test connectivity as above.

    Note that as soon as you stop container1, the corresponding pipework and iptables modifications are lost; even after restarting the stopped container, you will need to reissue the commands:

    pipework br6 -i eth1 52b95d6052f7
    docker exec 52b95d6052f7 iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE

    for container1 to act as a NAT box again.
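The two commands above can be wrapped in a small helper so the NAT setup is easy to reapply after each restart; a hypothetical script (the name and argument handling are my own, the bridge and interface names are taken from the answer above):

```shell
#!/bin/sh
# restore-nat.sh -- reapply pipework + MASQUERADE to a restarted container.
# Usage: ./restore-nat.sh <container-id>
set -e
CID=${1:?usage: restore-nat.sh <container-id>}
sudo docker start "$CID"
sudo pipework br6 -i eth1 "$CID"
sudo docker exec "$CID" iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
```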

    Even committing the running container1 to a new image and running a new container from it doesn’t help.
