Linking docker containers to combine different libraries

Docker containers can be linked. Most examples involve linking a Redis container with an SQL container. The beauty of linking containers is that you can keep the SQL environment separate from the Redis environment: instead of building one monolithic image, you can maintain two neatly separated ones.
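
As an illustration of that classic setup (the image and container names below are made up), linking on the command line looks roughly like this:

    # start a named Redis container
    docker run -d --name redis redis
    # start an app container that can reach Redis under the hostname "redis"
    docker run -d --link redis:redis mywebapp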

I can see how this works for server applications (where the communication goes through ports), but I have trouble replicating a similar approach for different libraries. As a concrete example, I’d like to use a container with IPython Notebook together with the C/C++ library Caffe (which exposes a Python interface through a package in one of its subfolders) and an optimisation library such as Ipopt. Containers for IPython and Caffe readily exist, and I am currently working on a separate image for Ipopt. But how do I link the three together without building one giant monolithic Dockerfile? Caffe, IPython and Ipopt each have a range of dependencies, which would make maintaining a combined image a real nightmare.

2 Solutions

    My view on Docker containers is that each container typically represents one process, e.g. redis or nginx. Containers typically communicate with each other over the network or via shared files in volumes.
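
    To make that concrete (the container names here are hypothetical), two containers can share files through a volume like this:

        # one container writes into an anonymous volume mounted at /shared...
        docker run -d --name producer -v /shared busybox \
            sh -c 'echo hello > /shared/msg && sleep 1000'
        # ...and another container mounts the same volume and reads from it
        docker run --rm --volumes-from producer busybox cat /shared/msg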

    Each container ships with its own userland (typically specified by the FROM line in your Dockerfile). In your case, you are not running any particular process; you simply wish to share libraries. This is not what Docker was designed for, and I am not even sure it is doable, but it certainly seems like an unusual way of doing things.

    My suggestion is therefore that you create a base image containing the least common denominator (the shared libraries that all your other images need) and have your other images use it as their FROM image.
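
    A rough sketch of that layering (the image tag and package names below are assumptions, not taken from your setup):

        # Dockerfile for the shared base image, built as e.g. "mybase"
        FROM ubuntu:14.04
        RUN apt-get update && apt-get install -y \
            build-essential python python-dev   # dependencies common to all images

        # Dockerfile for the Caffe image, building on the shared base
        FROM mybase
        # ... install Caffe and its remaining dependencies here ...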

    Furthermore, if you need a more complex environment with lots of dependencies and heavy provisioning, I suggest that you take a look at provisioning tools such as Chef or Puppet.

    Docker linking is about linking microservices, that is, separate processes, and as far as I can see it has no relation to your question.

    There is no out-of-the-box facility to compose separate Docker images into one container, which is what you call “linking” in your question.

    If you don’t want that giant monolithic image, you might consider using provisioning tools à la Puppet, Chef or Ansible together with Docker. One example here. There you might theoretically reuse existing recipes/playbooks for the libraries you need, as in the sketch below. I would be surprised, though, if this approach turned out to be much easier for you than maintaining your “big monolithic” Dockerfile.
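
    A minimal sketch of that combination (the playbook name site.yml and the base image are assumptions):

        FROM ubuntu:14.04
        # install Ansible inside the image being built
        RUN apt-get update && apt-get install -y ansible
        COPY site.yml /tmp/site.yml
        # apply the playbook to the image itself during the build
        RUN ansible-playbook -i "localhost," -c local /tmp/site.yml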
