How can I make a host directory mount with the container directory's contents?

What I am trying to do is set up a docker container for ghost where I can easily modify the theme and other content. So I am making /opt/ghost/content a volume and mounting that on the host.

It looks like I will have to manually copy the theme into the host directory because when I mount it, it is an empty directory. So my content directory is totally empty. I am pretty sure I am doing something wrong.

I have tried a few different variations, including using ADD with the default themes folder and putting VOLUME at the end of the Dockerfile. I keep ending up with an empty content directory.

Does anyone have a working Dockerfile doing something similar that I can look at?

Or maybe I can use the docker cp command somehow to populate the volume?

I may be missing something obvious or have made a silly mistake in my attempts, but the basic thing is that I want to be able to upload a new set of files into the ghost themes directory using a host-mounted volume and also have the casper theme in there by default.

This is what I have in my Dockerfile right now:

    FROM ubuntu:12.04
    MAINTAINER Jason Livesay ""
    RUN apt-get install -y python-software-properties
    RUN add-apt-repository ppa:chris-lea/node.js
    RUN echo "deb precise main universe" > /etc/apt/sources.list
    RUN apt-get -qq update
    RUN apt-get install -y sudo curl unzip nodejs=0.10.20-1chl1~precise1
    RUN curl -L > /tmp/
    RUN useradd ghost
    RUN mkdir -p /opt/ghost
    WORKDIR /opt/ghost
    RUN unzip /tmp/
    RUN npm install --production
    # Volumes
    RUN mkdir /data
    ADD run /usr/local/bin/run
    ADD config.js /opt/ghost/config.js
    ADD content /opt/ghost/content/
    RUN chown -R ghost:ghost /opt/ghost
    ENV NODE_ENV production
    EXPOSE 2368
    CMD ["/usr/local/bin/run"]
    VOLUME ["/data", "/opt/ghost/content"]

4 Solutions collected from the web for “How can I make a host directory mount with the container directory's contents?”

    As far as I know, empty host-mounted (bind-mounted) volumes still will not receive the contents of directories set up during the build, BUT data containers referenced with --volumes-from WILL.

    So now I think the answer is, rather than writing code to work around non-initialized host-mounted volumes, forget host-mounted volumes and instead use data containers.

    Data containers use the same image as the one you are trying to persist data for (so they have the same directories etc.).

    docker run -d --name myapp_data mystuff/myapp echo Data container for myapp

    Note that it will run the echo command and then exit, so your data container won't stay running. That's expected; a stopped container still provides its volumes. If you want to keep it running you can use something like sleep infinity instead of echo, although this takes more resources and isn't necessary unless you have a specific reason, such as tooling that assumes all of your relevant containers are still running.

    You then use --volumes-from to use the directories from the data container:

    docker run -d --name myapp --volumes-from myapp_data mystuff/myapp
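    Putting the pieces together, here is a sketch of the full data-container workflow, using the placeholder names from above. The commands need a running Docker daemon, so they are shown commented; step 3 is the standard throwaway-container pattern for inspecting or backing up the volume, not something from the original answer.

```shell
# Sketch of the data-container pattern (placeholder names: mystuff/myapp,
# myapp_data, myapp). Commented out because they require a Docker daemon.

# 1. Create the data container from the same image; it exits immediately,
#    but the volumes declared via VOLUME in the image remain available:
#docker run -d --name myapp_data mystuff/myapp echo "Data container for myapp"

# 2. Run the application container, attaching all volumes from the data
#    container instead of bind-mounting host directories:
#docker run -d --name myapp --volumes-from myapp_data mystuff/myapp

# 3. To inspect or back up the volume contents, attach them to a
#    throwaway container:
#docker run --rm --volumes-from myapp_data ubuntu:12.04 ls /opt/ghost/content
```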

    You need to place the VOLUME directive before actually adding content to it.

    My answer is completely wrong! It seems there is actually a bug: if the VOLUME instruction happens after the directory already exists in the container, then changes made to it later in the build are not persisted.

    The Dockerfile should always end with a CMD or an ENTRYPOINT.


    My solution would be to ADD the files to a directory inside the container (e.g. the home directory), then use a shell script as the entrypoint, in which I copy the files into the shared volume and do all the other tasks.
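    A minimal sketch of that entrypoint idea. The seed path /opt/ghost/content.dist is a name invented for this example (it is not in the original Dockerfile); the guard only copies when the mounted directory is empty, so user edits survive restarts:

```shell
#!/bin/sh
# seed_content: copy baked-in defaults into the (possibly empty) mounted
# volume directory, but only if it is empty, so user changes are preserved.
seed_content() {
    seed_dir="$1"    # directory baked into the image with default content
    data_dir="$2"    # host-mounted volume directory
    if [ -d "$data_dir" ] && [ -z "$(ls -A "$data_dir" 2>/dev/null)" ]; then
        cp -R "$seed_dir"/. "$data_dir"/
    fi
}

# In the real entrypoint you would seed and then hand off to the app, e.g.:
#   seed_content /opt/ghost/content.dist /opt/ghost/content
#   exec npm start --production
```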

    I’ve been looking into the same thing. The problem I encountered was that I was using a relative local mount path, something like:

    docker run -i -t -v ../data:/opt/data image

    Switching to an absolute local path fixed this up for me:

    docker run -i -t -v /path/to/my/data:/opt/data image

    Can you confirm whether you were doing a relative path, and whether this helps?
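    One way to avoid the problem is to build the absolute path in the shell before calling docker. A small sketch; the image name "image" is the placeholder from the question:

```shell
# Expand the relative host path to an absolute one; older Docker clients
# do not resolve relative paths in -v bind mounts.
HOST_DIR="$(pwd)/data"      # absolute form of ./data

# Then mount it (shown commented; needs a Docker daemon):
#   docker run -i -t -v "$HOST_DIR:/opt/data" image
```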

    Docker v1.8.1 preserves data in a volume if you mount it with the run command. From the Docker docs:

    Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.

    Example: An image defines /var/www/html as a volume and populates it with the data of a web application. Your Docker host provides a mount directory /my/host/dir. You start the image with

    docker run -v /my/host/dir:/var/www/html image

    and you will then get all the data from /var/www/html in the host's /my/host/dir. This data will persist even if you delete the container or the image.
