Are there any issues with creating lots of Docker tags for Continuous Delivery?

Currently we have Jenkins jobs configured for releasing our services and others for deploying them to test, stage and prod. As part of the release job we create a Docker image containing the service binary (and all its dependencies), which is pushed to our private Docker repository (a new tag is created with the new version). So far this works great, since it's easy to deploy this (or an older) version from Jenkins to our different environments, or to just pull a certain version of the service and run it locally. My question is whether this approach still works well if we move to continuous delivery. The idea is that after each successful build we create a new Docker tag, push it to our private repo, and deploy the image to the test environment. My concern is that if this is done several times a day, there will be a lot of Docker tags/layers that have to be downloaded. I'm afraid that over time it will take a long time to pull the image (and perhaps there are other issues that I'm not aware of).
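
Roughly, the release step looks something like the following Jenkins shell step (the registry host, image name and version scheme here are only placeholders for illustration; BUILD_NUMBER is the build number Jenkins provides):

    # Tag every successful build with its own version and push it
    VERSION="1.0.${BUILD_NUMBER}"                       # e.g. 1.0.42
    docker build -t registry.example.com/myservice:${VERSION} .
    docker push registry.example.com/myservice:${VERSION}

    # Deploying to test/stage/prod is then just pulling that exact tag
    docker pull registry.example.com/myservice:${VERSION}
    docker run -d registry.example.com/myservice:${VERSION}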

So to rephrase the question more clearly:

    1. Is it fine to create a lot of Docker tags, or is this something that should be avoided?
    2. Would it be better to have a base image (OS etc.), then build the service binary from a tag in the source control system, and then build an image that is transferred to the different environments on deploy?
    3. Other suggestions?

2 Solutions:

    1. The public registry copes fine with multiple tags. Think of the thousands of images being published on a regular basis. There is no tag syncing required between the Docker client and the registry; just make sure your registry is powerful enough to cope. I'm assuming that as part of your test process you pull down a specific image and tag?
    2. Base images are always preferable to a brand new build every time. It will speed up your build time too (see the rough Dockerfile sketch below).
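
    As a rough illustration of the base-image idea (the image and binary names here are made up, not taken from your setup), each per-build Dockerfile then only adds a thin layer on top of a shared base:

        # Shared base image holds the OS and runtime dependencies;
        # each build only adds the freshly built service binary.
        FROM registry.example.com/myservice-base:1.0
        COPY myservice /usr/local/bin/myservice
        CMD ["/usr/local/bin/myservice"]

    Since the base layers are already present on your hosts, pulling a new tag only downloads the layers that actually changed.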

    Docker Registry v1 (python) is terrible at scaling on tags. This will bite you specifically if you are using distributed filesystem storage backends (like GCS or S3). See https://github.com/docker/docker-registry/issues/614 for context.

    Now, the new (v2) protocol and the Go-based registry should be better at that.
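
    For example, the v2 (distribution) registry ships as the registry:2 image, so trying it out locally looks roughly like this (the port and image names below are just illustrative):

        # Run the v2 registry locally
        docker run -d -p 5000:5000 --restart=always --name registry registry:2

        # Tag and push an image against it
        docker tag myservice:1.0.42 localhost:5000/myservice:1.0.42
        docker push localhost:5000/myservice:1.0.42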

    Bottom-line:

    • there is nothing wrong with the approach you describe, except for the v1 protocol/registry itself…
    • you should bet on Docker 1.6 (release candidates are out) and the v2 Go registry (available here: https://github.com/docker/distribution)