Are there any issues with creating lots of Docker tags for Continuous Delivery?
Currently we have Jenkins jobs configured for releasing our services, and others to deploy them to test, stage, and prod. As part of the release job we create a Docker image containing the service binary (and all its dependencies), which is pushed to our private Docker repository with a new tag for the new version. So far this works great, since it's easy to deploy this (or an older) version from Jenkins to our different environments, or just pull a certain version of the service and run it locally.

My question is whether this approach works well if we move to continuous delivery. The idea is that after each successful build we create a new Docker tag, push it to our private repo, and deploy the image to the test environment. My concern is that if this is done several times a day, there will be a lot of Docker tags/layers that have to be downloaded. I'm afraid that over time it'll take a long time to pull the image (and perhaps there are other issues that I'm not aware of).
So to rephrase the question more clearly:
- Is it fine to create a lot of Docker tags, or is this something that should be avoided?
- Would it be better to have a base image (OS etc.), build the service binary from a tag in the source control system, and then build an image that is transferred to the different environments on deploy?
- Other suggestions?
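The per-build flow described above can be sketched roughly as follows. All names here (registry host, image name, Jenkins variables) are hypothetical placeholders; the actual `docker build`/`push` calls are commented out since they need a running daemon.

```shell
# Hypothetical values; in a real Jenkins job these come from the build.
REGISTRY=registry.example.com
IMAGE=my-service
BUILD_NUMBER=42        # normally $BUILD_NUMBER from Jenkins
GIT_SHA=abc1234        # short commit hash of the build

# One immutable tag per successful build, e.g. my-service:42-abc1234
TAG="${BUILD_NUMBER}-${GIT_SHA}"
echo "${REGISTRY}/${IMAGE}:${TAG}"

# docker build -t "${REGISTRY}/${IMAGE}:${TAG}" .
# docker push "${REGISTRY}/${IMAGE}:${TAG}"
# Test/stage/prod then deploy by pulling this exact tag.
```

Encoding the build number and commit hash in the tag keeps every deployed version traceable back to source control.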
2 Solutions
- The public registry copes fine with multiple tags; think of the thousands of images being published on a regular basis. There's no tag syncing required between the Docker client and the registry, so just make sure your registry is powerful enough to cope. I'm assuming that as part of your test process you pull down a specific image and tag?
- Base images are always preferable to a brand-new build every time. It'll speed up your build time too.
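The base-image point can be sketched as a two-tier build, assuming a hypothetical pre-built base image and a service binary produced by CI (all names are illustrative):

```shell
# Write a minimal Dockerfile for the service layer.
cat > Dockerfile <<'EOF'
# Rebuilt rarely: OS and runtime dependencies live in the base image,
# so each CD build only adds a thin layer with the new binary.
FROM registry.example.com/base-image:1.0
COPY my-service /usr/local/bin/my-service
CMD ["/usr/local/bin/my-service"]
EOF

# The base layers are shared across all service versions, so clients
# that already have them only pull the small COPY layer on upgrade.
grep '^FROM' Dockerfile
```

Because Docker layers are content-addressed, hosts that have already pulled the base image download only the new top layer, which keeps pull times roughly constant even with many tags.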
Docker Registry v1 (Python) scales terribly with tags. This will bite you especially if you are using distributed filesystem storage backends (like GCS or S3). See https://github.com/docker/docker-registry/issues/614 for context.
Now, the new (v2) protocol and the golang registry should be better at that.
- There is nothing wrong with the approach you describe, except for the v1 protocol/registry itself…
- You should bet on Docker 1.6 (release candidates are out) and the v2 golang registry (which is here: https://github.com/docker/distribution).