Access host docker-machine from within container

I have an image that I’m using to run my CI/CD builds (using GitLab CE). I’d like to deploy my app doing something like this from within the container:

eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web

However, I’d like docker-machine to access the machines defined on the host system, since the container will be destroyed and I don’t want to include access details in the image.

I’ve tried a few things:

    Accessing the Remote Host via docker-machine

    • Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
    • Connect to the remote docker-machine manually from within the container, setting MACHINE_STORAGE_PATH to a mounted volume
    • Mounting the docker socket

    In each case, I can see that the machine storage is persisted, but whenever I create a new container and run docker-machine ls, none of the machines are listed.
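Concretely, the storage-path attempt looked roughly like this (the host path and image name are placeholders, not my actual values):

```shell
# Share docker-machine's state directory with the CI container so machines
# defined on the host are visible inside it.
# /srv/docker-machine and my-ci-image are placeholders.
HOST_STORAGE=/srv/docker-machine

# Inside the container, docker-machine reads MACHINE_STORAGE_PATH:
RUN_CI="docker run --rm \
  -v $HOST_STORAGE:/machine \
  -e MACHINE_STORAGE_PATH=/machine \
  my-ci-image docker-machine ls"

# The assembled command, to be run on the host:
echo "$RUN_CI"
```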

    Accessing the Remote Host via DOCKER_HOST

    • Forward the remote machine’s docker port to the host docker port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
    • export DOCKER_HOST=:2376
    • Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
    • Test with docker info

    This gives me: error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
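For completeness, here are those steps assembled into one script (:2376 with no host ends up targeting localhost, as the error message shows; the cert path is the one from my host, assumed to be mounted into the container):

```shell
# On the host: forward the manager's daemon port over SSH.
# (Foreground process; leave it running.)
#   docker-machine ssh manager-1 -N -L 2376:localhost:2376

# In the container: aim the docker CLI at the tunnel and reuse the
# host's client certs.
export DOCKER_HOST=tcp://localhost:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1

# docker info   # <- this is the step that fails with the x509 error
```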

    Any ideas on how I can perform a remote deployment from within a container?



    Here is a diagram to try and help better communicate the scenario.


2 Solutions for “Access host docker-machine from within container”

    Don’t use docker-machine for this.

    Docker-machine stores its files in $HOME/.docker/machine, so when your container restarts with a fresh copy of this folder, all previously defined machines are gone. You could store this folder as a volume, but there’s a much easier way for your purposes.

    The solution is to mount the docker socket and run your docker ... commands as normal, either as root or as a user with the same gid as the docker socket (group names inside and outside the container may not match, so the gid is what matters). You can skip the docker-machine eval entirely, since you are running the commands against the local docker socket.
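A sketch of the gid-matching approach, assuming the standard socket path (`--group-add` accepts a raw numeric gid, which sidesteps the group-name mismatch; `my-ci-image` is a placeholder):

```shell
# Numeric gid of the docker socket on the host; the group *name* inside
# the container may differ, but the gid is what the kernel checks.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  DOCKER_GID=$(stat -c '%g' "$SOCK")   # GNU stat; use `stat -f '%g'` on macOS
else
  DOCKER_GID=999   # placeholder value for hosts without a docker socket
fi

# Grant that gid to the container user so the mounted socket is usable:
echo "docker run --rm -v $SOCK:$SOCK --group-add $DOCKER_GID \
  my-ci-image docker stack deploy --compose-file docker-stack.yml web"
```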

    If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
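For the remote case, exporting the variables by hand looks like this (the manager address and cert directory are placeholders; the directory must contain the ca.pem, cert.pem, and key.pem issued for that daemon):

```shell
# Point the docker CLI at a remote TLS-protected daemon, no docker-machine.
# swarm-manager.example.com and /certs/manager are placeholders.
export DOCKER_HOST=tcp://swarm-manager.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs/manager

# From here on, plain docker commands target the remote daemon, e.g.:
#   docker stack deploy --compose-file docker-stack.yml web
```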

    In case you want to communicate from your CI container to the Docker host you can simply mount the Docker socket when starting the CI container:

    docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>

    Now you can run docker commands on the host from within the CI container.
