Linking Docker containers to combine different libraries

Docker containers can be linked. Most examples involve linking a Redis container with an SQL container. The beauty of linking containers is that you can keep the SQL environment separate from the Redis environment, and instead of building one monolithic image you can maintain two cleanly separated ones.
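
For concreteness, the canonical pattern looks roughly like this (myapp is a placeholder image; the container names and aliases are just illustrative):

    # start the two service containers
    docker run -d --name redis redis
    docker run -d --name db postgres
    # link both into an application container; it reaches them
    # through the "redis" and "db" aliases that Docker injects
    # as environment variables and /etc/hosts entries
    docker run -d --name app --link redis:redis --link db:db myapp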

I can see how this works for server applications (where communication happens over ports), but I have trouble replicating a similar approach for libraries. As a concrete example, I’d like to use a container with IPython Notebook together with the C/C++ library Caffe (which exposes a Python interface through a package in one of its subfolders) and an optimisation library such as Ipopt. Images for IPython and Caffe readily exist, and I am currently working on a separate image for Ipopt. Yet how do I link the three together without building one giant monolithic Dockerfile? Caffe, IPython and Ipopt each have a range of dependencies, which makes combined maintenance a real nightmare.

2 Solutions

    My view on Docker containers is that each container typically represents one process, e.g. Redis or Nginx. Containers typically communicate with each other over the network or via shared files in volumes.
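
    As a rough sketch of the shared-files variant (image and container names are purely illustrative):

        # one container writes a result into a volume ...
        docker run --name producer -v /data busybox sh -c 'echo hello > /data/out'
        # ... and a second container mounts the same volume and reads it
        docker run --rm --volumes-from producer busybox cat /data/out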

    Each container carries its own operating-system userland (typically specified in the FROM line of your Dockerfile). In your case you are not running any specific process; you simply wish to share libraries. That is not what Docker was designed for, and I am not even sure it is doable, but it certainly seems like a strange way of doing things.

    My suggestion is therefore that you create a base image containing the least common denominator (the libraries that all the other images share) and have the other images use it as their FROM image.
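
    A minimal sketch of that layout (the package names and the ubuntu:14.04 tag are assumptions, not a tested build):

        # base/Dockerfile -- dependencies shared by all three images
        FROM ubuntu:14.04
        RUN apt-get update && apt-get install -y \
            build-essential gfortran python python-dev python-pip \
            libblas-dev liblapack-dev
        # build it once with: docker build -t mybase base/

        # caffe/Dockerfile -- only the Caffe-specific layers on top
        FROM mybase
        RUN apt-get update && apt-get install -y libprotobuf-dev protobuf-compiler
        # ... clone and build Caffe here ...

        # ipopt/Dockerfile -- only the Ipopt-specific layers on top
        FROM mybase
        # ... download and build Ipopt here ...

    Because the Caffe and Ipopt Dockerfiles share the mybase layers, rebuilding one library does not touch the other.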

    Furthermore, if you need a more complex environment with lots of dependencies and heavy provisioning, I suggest that you take a look at provisioning tools such as Chef or Puppet.

    Docker linking is about linking microservices, that is, separate processes, and as far as I can see it has no bearing on your question.

    There is no out-of-the-box facility to compose separate Docker images into one container, which is what you call ‘linking’ in your question.

    If you don’t want that giant monolithic image, you might consider using provisioning tools à la Puppet, Chef or Ansible together with Docker. One example here. That way you could, in theory, reuse existing recipes/playbooks for the libraries you need. I would be surprised, though, if this approach were much easier for you than maintaining your “big monolithic” Dockerfile.
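
    A rough sketch of that combination (the site.yml playbook name and its contents are hypothetical):

        FROM ubuntu:14.04
        RUN apt-get update && apt-get install -y ansible
        # reuse an existing playbook instead of hand-writing
        # every installation step in the Dockerfile itself
        COPY site.yml /tmp/site.yml
        RUN ansible-playbook -i 'localhost,' -c local /tmp/site.yml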
