How to remove old docker images in continuous integration to save disk space

When building Docker images in a continuous integration environment, you quickly run out of disk space and need to remove old images. However, you can’t simply remove all of the old images, including intermediate images, because that breaks the build cache.

How do you avoid running out of disk space on your build agents, without breaking caching?
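
The blunt fix is to delete everything that is not currently in use, but that also throws away the layers the next build would have reused from the cache. A quick illustration of the kind of blanket cleanup that frees space at the cost of caching:

    # Removes every image not referenced by a container,
    # including the layers that previous builds left behind as a cache.
    docker image prune --all --force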

2 Solutions for “How to remove old docker images in continuous integration to save disk space”

    My solution is to remove the previous version of the image after building the new one. This ensures that cached images are available to speed up the build, but avoids old images piling up and eating your disk space. This method relies on each version of the image having a unique tag.

    This is my script (gist here):

    #!/usr/bin/env bash
    
    usage(){
    # ============================================================
    echo This script removes all images of the same repository and
    echo older than the provided image from the docker instance.
    echo
    echo This cleans up older images, but retains layers from the
    echo provided image, which makes them available for caching.
    echo
    echo Usage:
    echo
    echo '$ ./delete-images-before.sh <image-name>:<tag>'
    exit 1
    # ============================================================
    }
    
    [[ $# -ne 1 ]] && usage
    
    IMAGE=$(echo $1 | awk -F: '{ print $1 }')
    TAG=$(echo $1 | awk -F: '{ print $2 }')
    
    FOUND=$(docker images --format '{{.Repository}}:{{.Tag}}' | grep ${IMAGE}:${TAG})
    
    if ! [[ ${FOUND} ]]
    then
        echo The image ${IMAGE}:${TAG} does not exist
        exit 2
    fi
    
    docker images --filter before=${IMAGE}:${TAG} \
        | grep ${IMAGE} \
        | awk '{ print $3 }' \
        | xargs --no-run-if-empty \
        docker --log-level=warn rmi --force || true
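
    In a CI job this slots in right after the build: tag each build uniquely, then hand the fresh tag to the script so everything older is dropped while its layers stay cached. A minimal sketch, assuming an image called registry.example.com/myapp and a BUILD_NUMBER variable supplied by the CI server (both are illustrative):

    # Hypothetical CI step; the image name and BUILD_NUMBER are illustrative.
    IMAGE=registry.example.com/myapp

    docker build --tag "${IMAGE}:${BUILD_NUMBER}" .
    docker push "${IMAGE}:${BUILD_NUMBER}"

    # Drop every older build of this repository, keeping the image we
    # just built (and its layers) around as the cache for the next run.
    ./delete-images-before.sh "${IMAGE}:${BUILD_NUMBER}"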
    

    A tool we use to handle this is docker custodian (dcgc).

    It is suggested to keep a list of images that you want to retain and never clean up, and to pass that list to --exclude-image. (If you’re using Puppet or some other configuration management system, it may be more useful to write the image patterns to a file on disk and use --exclude-image-file instead.)
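
    A rough sketch of how that might look, assuming the yelp/docker-custodian image, a one-week retention window, and example exclude patterns (adjust all three to your setup):

    # Illustrative invocation; the image age and exclude patterns are assumptions.
    docker run --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        yelp/docker-custodian dcgc \
        --max-image-age 7days \
        --exclude-image 'ubuntu:*' \
        --exclude-image 'registry.example.com/base:*'

    # With a configuration management tool, the patterns can live in a file
    # instead (hypothetical path): --exclude-image-file /etc/dcgc/exclude.txt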
