Access Docker socket within container

I am attempting to create a container that can access the host docker remote API via the docker socket file (host machine – /var/run/docker.sock).

The answer here suggests proxying requests to the socket. How would I go about doing this?

2 Solutions for “Access Docker socket within container”

    I figured it out. You can simply pass the socket file through the volume argument:

    docker run -v /var/run/docker.sock:/container/path/docker.sock <image>
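
    For example, to confirm from inside the container that the mounted socket actually reaches the host daemon (a minimal sketch, not from the original answer; it assumes the socket is mounted at the conventional /var/run/docker.sock path and that curl is installed in the image):

    docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine sh
    # inside the container:
    apk add --no-cache curl
    curl --unix-socket /var/run/docker.sock http://localhost/version
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json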
    

    As @zarathustra points out, however, this may not be the greatest idea. See: https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html

    If you intend to use Docker from within a container, you should clearly understand the security implications.

    Accessing Docker from within the container is simple:

    1. Expose the Docker Unix socket to the container
    2. Bind-mount the docker client binary into the container, so that you avoid having to install the docker package inside the container

    That’s why

    docker run -v /var/run/docker.sock:/var/run/docker.sock \
           -v $(which docker):/bin/docker \
           -ti ubuntu
    

    should do the trick and be almost free from side effects.
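
    For instance, once inside that Ubuntu container you can drive the host daemon directly (assuming the bind-mounted docker binary actually runs there, i.e. it is statically linked or its shared-library dependencies happen to be present in the image):

    docker version      # client in the container, daemon on the host
    docker ps           # lists the host's containers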

    Considerations:

    1. All host containers will be accessible to the container, so it can stop or delete them and run arbitrary commands as any user inside any of the top-level Docker containers.
    2. All containers it creates are created in the top-level Docker, i.e. as siblings on the host rather than nested inside it (see the sketch after this list).
    3. Of course, you should understand that if a container has access to the host’s Docker, it effectively has privileged access to the host system. Depending on the container and system configuration (e.g. AppArmor), this may be more or less dangerous.
    4. Other warnings are covered in the dont-expose-the-docker-socket article linked above.
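
    To illustrate point 2, a hypothetical example (the container name and image are arbitrary):

    # inside the container that has the socket mounted
    docker run -d --name sibling-demo nginx

    # back on the host: the new container appears as a sibling, not nested
    docker ps --filter name=sibling-demo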

    Other approaches, such as exposing /var/lib/docker to the container, are likely to cause data corruption. See do-not-use-docker-in-docker-for-ci for more details.

    Note for users of the official Jenkins CI container

    In this container (and probably in many others) the jenkins process runs as a non-root user and therefore has no permission to interact with the docker socket. A quick & dirty solution is to run

    docker exec -u root ${NAME} /bin/chmod -v a+s $(which docker)
    

    after starting the container. That makes the docker binary setuid, allowing all users in the container to run it with root permissions. A better approach would be to allow running the docker binary via passwordless sudo, but the official Jenkins CI image seems to lack the sudo subsystem.
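
    If you want to try the passwordless-sudo route anyway, a rough sketch (not from the original answer; it assumes a Debian-based Jenkins image with apt-get, a jenkins user, and the docker binary bind-mounted at /bin/docker as above):

    docker exec -u root ${NAME} apt-get update
    docker exec -u root ${NAME} apt-get install -y sudo
    # allow the jenkins user to run only the docker binary without a password
    docker exec -u root ${NAME} /bin/sh -c 'echo "jenkins ALL=(ALL) NOPASSWD: /bin/docker" > /etc/sudoers.d/jenkins'
    docker exec -u root ${NAME} chmod 0440 /etc/sudoers.d/jenkins

    The jenkins user can then invoke sudo docker ... instead of relying on a setuid binary.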
