Does hub.docker.com use "--no-cache" for automated builds?

I am analysing some slightly strange behaviour in our automated build processes, which led me to ask:

Does hub.docker.com use the --no-cache option when performing automated builds?

2 Answers

    Yes. The build process is currently:

    1. git clone --recursive --depth 1 -b branch $URL
    2. Extract Readme and Dockerfile
    3. docker build --no-cache -t tagname .
    4. Tar and upload the build context to S3 bucket
    5. Push image (with all layers) to Registry
    6. Worker or Builder cleans up build residue (mounted volumes, etc)
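The steps above can be sketched as a shell script. The repository URL, branch, and tag name are hypothetical placeholders, and `run()` only echoes each command so the pipeline can be reviewed without a Docker daemon; drop the `echo` to actually execute it:

```shell
#!/bin/sh
# Sketch of the automated-build pipeline described above.
# URL, BRANCH, and TAG are hypothetical placeholder values.
URL=https://github.com/example/repo.git
BRANCH=master
TAG=example/repo:latest

# Echo each command instead of running it; replace the echo
# with "$@" to execute for real.
run() { echo "+ $*"; }

# 1. Shallow clone of the requested branch into a build context
run git clone --recursive --depth 1 -b "$BRANCH" "$URL" build-context

# 3. Build with the cache disabled, matching the quoted pipeline
run docker build --no-cache -t "$TAG" build-context

# 5. Push the resulting image (all layers) to the registry
run docker push "$TAG"
```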

    Unfortunately, this was not the case for me: I ended up rebuilding the image locally with the --no-cache flag and then pushing it to Docker Hub.
    Admittedly, the Dockerfile did not follow best practice, as it involved a "git pull". Oh well!
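One common workaround for a `git pull`/`git clone` step that gets stuck in the layer cache is a cache-busting build argument: changing its value invalidates the cache from that instruction onward. This is a sketch with hypothetical image and repository names, not the Dockerfile from the question:

```dockerfile
# Hypothetical Dockerfile excerpt. CACHEBUST defaults to a fixed value,
# so normal builds still use the cache; passing a fresh value (e.g. a
# timestamp) re-runs everything from the ARG line down.
FROM alpine:3.19
RUN apk add --no-cache git
ARG CACHEBUST=1
RUN git clone https://github.com/example/repo.git /app
```

Invoking the build with `docker build --build-arg CACHEBUST="$(date +%s)" -t example/repo .` forces a fresh clone without disabling the cache for the earlier layers.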
