Docker “Sharing Dependencies”

While reading about Docker, I stopped a couple of times at the statement that Docker containers not only share the host kernel but, where possible, also share common binaries and libraries.

My understanding is that if I run the same Docker image twice on the same host, and this image uses some files x, y, z (say libraries / bins, anything), these files will also be shared between the two launched containers. What's more, if I run two different images, they could still share these common dependencies. What I'm asking for is just two things:

  1. Verification / explanation: is that true or false, and how does it happen?
  2. If true: is there a practical example, where I can run two containers (of the same or different images) and verify that they see the same files / libs?

I hope my question is clear and someone has an answer 🙂

  • One Solution for “Docker “Sharing Dependencies””

    Yes, the answer is “true” to both questions.
    If you start two (or more) containers on the same host, all using the same base image, the whole content of the base image will be shared.

    What is called an “image” is, in fact, multiple images, called “layers”, stacked together with parent-child relationships.

    Now, if you start multiple containers from different images, those images may still share some common layers, depending on how they were built.

    At the system level, Docker mounts each image layer on top of the previous one, up to the final/top image; each layer overrides its parent's content wherever they overlap. To do that, it uses what is called a union filesystem (such as AUFS), or snapshot-based storage drivers.
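    The layered lookup described above can be sketched as a toy model in Python (this is just an illustration of union-mount semantics, not Docker's actual implementation; the layer contents are made up):

```python
# Toy model of a union filesystem: each layer is a dict of path -> content.
# Lookups walk the stack from the top layer down; the first hit wins,
# which is how an upper layer "overwrites" its parent where paths overlap.

base_layer = {"/bin/sh": "shell v1", "/lib/libc.so": "libc"}
app_layer = {"/bin/sh": "shell v2", "/app/run.py": "print('hi')"}

def read(path, layers):
    """Return the content of `path`, searching the top-most layer first."""
    for layer in reversed(layers):  # top of the stack first
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

image = [base_layer, app_layer]  # bottom -> top

print(read("/bin/sh", image))       # the app layer's copy shadows the base
print(read("/lib/libc.so", image))  # falls through to the base layer
```

    Since layers are read-only, two containers using the same `base_layer` can safely share a single on-disk copy of it.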

    The image layers are never modified; they are read-only. On top of the last/upper layer, an extra, writable layer is added; it holds the changes and additions made by the running container.

    That writable layer can also be turned into an image layer, and you can start other containers from this new image.
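    The read-only image plus per-container writable layer can be sketched the same way (again a toy model, not a real storage driver; names and contents are invented for illustration):

```python
# Toy model: a container = read-only image layers + one writable layer on top.
# Writes land only in the writable layer; "committing" freezes that layer
# into a new read-only layer that other containers can share as an image.

image_layers = [{"/etc/conf": "default"}]    # read-only, shared by containers

def start_container(image):
    return {"image": image, "rw": {}}        # fresh, empty writable layer

def write(container, path, content):
    container["rw"][path] = content          # copy-on-write: image untouched

def read(container, path):
    if path in container["rw"]:              # writable layer wins
        return container["rw"][path]
    for layer in reversed(container["image"]):
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

def commit(container):
    """Freeze the writable layer into a new image (list of layers)."""
    return container["image"] + [dict(container["rw"])]

c1 = start_container(image_layers)
write(c1, "/etc/conf", "tuned")
new_image = commit(c1)                       # writable layer becomes a layer

c2 = start_container(new_image)              # started from the committed image
print(read(c2, "/etc/conf"))                 # sees "tuned"
print(read(start_container(image_layers), "/etc/conf"))  # base image unchanged
```

    Note that committing never mutates the original layers, which is why images stay shareable between containers.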

    To see layer sharing “with your own eyes”, just run the following examples:

    docker run ubuntu:trusty /bin/bash

    docker run ubuntu-upstart:trusty /bin/bash

    When you run the second command, Docker will tell you that it already has some of the layers (they show up as “Already exists” while pulling) and will only download the missing ones.

    Check the documentation about writing a Dockerfile (the image build script); that should give you a good idea of how all this works.
