Speed up GitLab CI by reusing the Docker machine across stages

GitLab CI pulls the Docker image for every task (stage). This wastes a lot of time, and I would like to optimize it if possible.

I see two places to work on:
1. Explicitly configure CI stages to reuse the same Docker machine.
2. Reuse the Docker machine from the previous commit when building the next commit (if no changes were made to the configuration file).

Answer:

    This kind of configuration can be specified through the `pull_policy` setting on the runner itself.

    As Jakub highlighted in the comments on the question, the shared runners on GitLab.com set the policy to `always`, so a fresh copy of the image is downloaded for every job, even if an identical copy already exists locally.

    This is for security reasons.

    The documentation confirms this:

    This pull policy should be used if your Runner is publicly available
    and configured as a shared Runner in your GitLab instance. It is the
    only pull policy that can be considered as secure when the Runner will
    be used with private images.

    The security implication is that if the runner checked local images first, an unauthorized user could access a private Docker image simply by guessing its name.
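    On a self-managed (private) runner, where this risk does not apply, the pull policy can be relaxed. A minimal sketch of the relevant fragment of a runner's `config.toml`, assuming the Docker executor (the runner name and default image below are placeholders):

    ```toml
    # Fragment of a GitLab Runner config.toml on a self-managed runner.
    # "if-not-present" reuses a locally cached image instead of pulling
    # it for every job; only safe when everyone who can submit jobs to
    # this runner is trusted to access every image cached on it.
    [[runners]]
      name = "my-private-runner"      # placeholder name
      executor = "docker"
      [runners.docker]
        image = "docker:latest"       # placeholder default image
        pull_policy = "if-not-present"
    ```

    With this in place, the image is pulled once and subsequent jobs on the same machine start from the local copy, which removes the repeated pull time the question describes.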
