Docker – copy file from container to host

I’m thinking of using Docker to build my dependencies on a CI server, so that I don’t have to install all the runtimes and libraries on the agents themselves. To achieve this I would need to copy the build artefacts that are built inside the container back onto the host.

Is that possible?

  8 Solutions for “Docker – copy file from container to host”

    In order to copy a file from a container to the host, you can use the command

    docker cp <containerId>:/file/path/within/container /host/path/target
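As a minimal sketch (the container name and paths below are made up, not from the question), a small wrapper makes the docker cp call easy to reuse in a CI script; with DRY_RUN=1 it only prints the command it would run, so the path joining can be checked without a Docker daemon:

```shell
#!/bin/sh
# Hypothetical wrapper around `docker cp` for CI scripts.
# DRY_RUN=1 prints the command instead of executing it, so the
# container:path joining can be verified without a Docker daemon.
copy_artifact() {
    container="$1"; src="$2"; dest="$3"
    cmd="docker cp ${container}:${src} ${dest}"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

DRY_RUN=1 copy_artifact build-container /build/output/app.tar.gz ./artifacts
```

Note that docker cp also works on stopped containers, so the copy can run after the build step inside the container has exited.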

    Mount a “volume” and copy the artifacts into it:

    mkdir artifacts
    docker run -i -v ${PWD}/artifacts:/artifacts ubuntu:14.04 sh << COMMANDS
    # ... build software here ...
    cp <artifact> /artifacts
    # ... copy more artifacts into `/artifacts` ...
    COMMANDS

    Then when the build finishes and the container is no longer running, it has already copied the artifacts from the build into the artifacts directory on the host.
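Stripped of Docker, the pattern above is just a heredoc piped into sh. This local sketch (a plain directory stands in for the mounted volume, and the "build" is simulated) shows that whatever the inner shell writes into the shared directory is still there after it exits:

```shell
#!/bin/sh
# Local sketch of the volume-mount pattern: `artifacts/` stands in for
# the mounted volume, and the heredoc feeds the build commands to an
# inner sh, just as in the docker run example above.
mkdir -p artifacts
sh << COMMANDS
# ... build software here (simulated) ...
echo "fake artifact" > artifacts/app.txt
COMMANDS
# After the inner shell exits, the artifact remains on the "host" side:
cat artifacts/app.txt
```

With Docker in the picture, the only difference is that the inner shell runs in the container and `artifacts/` is the bind-mounted host directory.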


    CAVEAT: When you do this, you may run into problems when the UID of the user inside the Docker container does not match the UID of the current host user. That is, the files in /artifacts will show up as owned by whatever UID the process inside the container ran as. A way around this may be to run the container with the calling user’s UID:

    docker run -i -v ${PWD}:/working_dir -w /working_dir -u $(id -u) \
        ubuntu:14.04 sh << COMMANDS
    # Since $(id -u) owns /working_dir, you should be okay running commands here
    # and having them work. Then copy stuff into /working_dir/artifacts .
    COMMANDS

    Mount a volume, copy the artifacts, adjust owner id and group id:

    mkdir artifacts
    docker run -i --rm -v ${PWD}/artifacts:/mnt/artifacts centos:6 /bin/bash << COMMANDS
    ls -la > /mnt/artifacts/ls.txt
    echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -g)
    chown -R $(id -u):$(id -g) /mnt/artifacts
    COMMANDS
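The effect of the chown step can be checked locally, without Docker: after the copy, the files belong to the calling user’s UID and GID. A minimal sketch with a throwaway file (outside a container, chown-ing a file you already own to your own UID:GID is a no-op any user may perform):

```shell
#!/bin/sh
# Local sketch of the chown step: make the artifact owned by the
# calling user's numeric UID:GID, as the container-side chown does.
mkdir -p artifacts
touch artifacts/ls.txt
chown "$(id -u):$(id -g)" artifacts/ls.txt
# -n prints numeric UID/GID, matching what `id -u` / `id -g` report
ls -ln artifacts/ls.txt
```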

    I am posting this for anyone that is using Docker for Mac.
    This is what worked for me:

     $ mkdir mybackup # local directory on Mac
     $ docker run --rm --volumes-from <containerid> \
        -v `pwd`/mybackup:/backup \
        busybox \
        cp /data/mydata.txt /backup

    Note that when I mount with -v, the backup directory is created automatically.

    I hope this is useful to someone someday. 🙂

    As a more general solution, there’s a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use.

    It’ll mount the workspace into the container as a volume (with appropriate user), set it as your working directory, do whatever commands you request (inside the container).
    You can also use the docker-workflow plugin (if you prefer code over UI) to do this, with the image.inside() {} command.

    Basically all of this, baked into your CI/CD server and then some.


    docker run with a host volume, chown the artifact, cp the artifact to the host volume:

    $ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
    chown $(id -u):$(id -g) my-artifact.tar.xz
    cp -a my-artifact.tar.xz /host-volume
    EOF

    For example:

    $ docker build -t my-image - <<EOF
    > FROM busybox
    > WORKDIR /workdir
    > RUN touch foo.txt bar.txt qux.txt
    > EOF
    Sending build context to Docker daemon  2.048kB
    Step 1/3 : FROM busybox
     ---> 00f017a8c2a6
    Step 2/3 : WORKDIR /workdir
     ---> Using cache
     ---> 36151d97f2c9
    Step 3/3 : RUN touch foo.txt bar.txt qux.txt
     ---> Running in a657ed4f5cab
     ---> 4dd197569e44
    Removing intermediate container a657ed4f5cab
    Successfully built 4dd197569e44
    $ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
    chown -v $(id -u):$(id -g) *.txt
    cp -va *.txt /host-volume
    EOF
    changed ownership of '/host-volume/bar.txt' to 10335:11111
    changed ownership of '/host-volume/qux.txt' to 10335:11111
    changed ownership of '/host-volume/foo.txt' to 10335:11111
    'bar.txt' -> '/host-volume/bar.txt'
    'foo.txt' -> '/host-volume/foo.txt'
    'qux.txt' -> '/host-volume/qux.txt'
    $ ls -n
    total 0
    -rw-r--r-- 1 10335 11111 0 May  7 18:22 bar.txt
    -rw-r--r-- 1 10335 11111 0 May  7 18:22 foo.txt
    -rw-r--r-- 1 10335 11111 0 May  7 18:22 qux.txt

    This trick works because the chown invocation within the heredoc takes the $(id -u):$(id -g) values from outside the running container; i.e., from the Docker host.
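That expansion rule is easy to verify without Docker at all: in an unquoted heredoc, the outer (host) shell substitutes $(...) before the inner shell ever sees the text, while an escaped \$(...) survives and runs inside:

```shell
#!/bin/sh
# Unquoted heredoc: $(...) expands in the outer (host) shell;
# \$(...) is passed through literally and expands in the inner sh.
sh << EOF
echo "host side: $(echo outer)"
echo "container side: \$(echo inner)"
EOF
```

Run locally, this prints "host side: outer" then "container side: inner" — the same rule that lets the answer bake the host’s UID and GID into the container-side chown.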

    The benefits over docker cp are:

    • you don’t have to docker run --name your container beforehand
    • you don’t have to docker container rm it afterwards

    If the container is running a Linux distribution with an SSH server inside, then you can copy a file from the host to the container and vice versa.

    For this, you can use the Linux scp command, which copies files from one machine to another. You need to know the IP address of the container, and there must be a network bridge between the host and the container (the default Docker bridge usually provides this).

    A simple example is given here:

    Copy a file from a container to the host machine.

    scp username_container@ip_container:~/Image/xxxx.yzx ~/Desktop/Images

    Copy a file from local machine to the container

    scp ~/Image/xxxx.yzx username_container@ip_container:~/Desktop/Images

    Create a data directory on the host system (outside the container) and mount it to a directory visible from inside the container. This places the files in a known location on the host system and makes it easy for tools and applications on the host system to access them:

    docker run -d -v /path/to/Local_host_dir:/path/to/docker_dir docker_image:tag