Docker build not using cache when running through ansible

Below are the steps to reproduce:

  1. Run the Ansible script to build the Docker image on the remote server.
  2. Run the script a second time without changing anything ==> the build cache is not used at all, even though it should be. Repeat this step several times; the cache is never used.
  3. SSH to the remote server and run docker build by hand ==> the build cache is applied correctly.
  4. Go back and run the Ansible script again ==> the build cache is now applied, and stays applied on subsequent runs.
  5. Change something in the Dockerfile, or modify a file, to invalidate the cache and re-run the script ==> the build cache is not applied, which is expected.
  6. Re-run the script once more ==> the build cache is applied correctly again!

So the build cache is not applied unless I first run docker build by hand on the remote server. That seems very strange to me.
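Since the cache behaves differently depending on whether the build runs by hand or through Ansible, a first step is to confirm that both paths reach the same Docker daemon as the same user. A minimal check via ad-hoc commands, assuming a hypothetical inventory host named buildserver:

```shell
# Run the same identity/daemon checks through Ansible that you would run
# in your manual SSH session, then compare the two outputs.
ansible buildserver -m command -a 'id -un'       # which user runs the build?
ansible buildserver -m command -a 'docker info'  # which daemon is reached?
ansible buildserver -m command -a 'docker images -a'
```

If these differ from what you see in the manual SSH session, the two builds may not be sharing the same image store, which would explain the cache misses.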

This is the simple Ansible task that I run:

    - name: BUILD - Build docker image with prefix 'build' in tag name
      command: /usr/bin/docker build -t {{ registry }}/{{ project_id }}/{{ repo_name }}:build-{{ build }} .
      args:
        chdir: "{{ git_clone_dir }}/{{ repo_name }}"
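To see from the Ansible side whether any cache was hit, you can capture the build output and print it; cached steps show up as "Using cache" lines in docker build's output. A sketch of the same task with the result registered (the task names and the build_result variable are my own):

```yaml
- name: BUILD - Build docker image and keep the output
  command: /usr/bin/docker build -t {{ registry }}/{{ project_id }}/{{ repo_name }}:build-{{ build }} .
  args:
    chdir: "{{ git_clone_dir }}/{{ repo_name }}"
  register: build_result

- name: Show build output to check for "Using cache" lines
  debug:
    var: build_result.stdout_lines
```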

    I’m using Ansible with Docker version 1.9.1, build a34a1d5, on Ubuntu 14.04. I have tested with previous versions of Docker; same issue.


    Below is the Dockerfile of my base docker image:

    FROM node:4.1.2
    # Add private key
    RUN mkdir -p /root/.ssh
    ADD id_rsa /root/.ssh/id_rsa
    RUN chmod 600 /root/.ssh/id_rsa
    # Create known_hosts
    RUN touch /root/.ssh/known_hosts
    # Add Bitbucket and GitHub host keys
    RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
    RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
    # Create application directory
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app
    ONBUILD COPY package.json /usr/src/app/package.json
    # Set npm loglevel to warn to avoid printing too many logs
    ONBUILD RUN npm install --loglevel=warn
    ONBUILD COPY . /usr/src/app
    # Remove private key
    ONBUILD RUN rm -rf /root/.ssh/id_rsa id_rsa
    # Expose the ports that your app uses
    EXPOSE 80
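Because a keyscan that silently adds nothing would not break the build, it is worth verifying that known_hosts was actually populated in the resulting image. A quick sanity check, assuming the base image is tagged mybase (a hypothetical name):

```shell
# Build the base image, then print the known_hosts it produced; it should
# contain one entry per scanned host.
docker build -t mybase .
docker run --rm mybase cat /root/.ssh/known_hosts
```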

    and below is the Dockerfile of the image I want to build, which inherits from the base image above:

    FROM {{ docker_base_image }}
    CMD ["npm", "start"]
    # Expose the ports that your app uses:
    EXPOSE 80


    One thing I also observed: if I run docker build through the script, I cannot find the intermediate images in the list produced by docker images -a.
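One way to confirm whether layers are shared between a manual build and an Ansible-triggered build is to compare layer IDs with docker history: identical IDs across the two builds mean those layers were served from the cache. The tag below is only a stand-in for the templated one in the task:

```shell
# Show the layer IDs that make up the image; compare these between a
# manual build and an Ansible-triggered build of the same context.
docker history myregistry/myproject/myrepo:build-1
# Intermediate layers normally appear as <none> entries here:
docker images -a
```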
