Linking Docker containers to combine different libraries

Docker containers can be linked. Most examples involve linking a Redis container with an SQL container. The beauty of linking containers is that the Redis environment stays separate from the SQL environment: instead of building one monolithic image, you maintain two cleanly separated ones.

I can see how this works for server applications (where communication happens over ports), but I have trouble replicating a similar approach for libraries. As a concrete example, I'd like to use a container with IPython Notebook together with the C/C++ library Caffe (which exposes a Python interface through a package in one of its subfolders) and an optimisation library such as Ipopt. Containers for IPython and Caffe already exist, and I am currently working on a separate image for Ipopt. But how do I link the three together without building one giant monolithic Dockerfile? Caffe, IPython and Ipopt each have a range of dependencies, which would make maintaining a combined image a real nightmare.

2 Solutions

My view on Docker containers is that each container typically represents one process, e.g. redis or nginx. Containers typically communicate with each other over the network or via shared files in volumes.
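As a sketch of that communication model (the container, volume and network names here are illustrative, not from the question), two single-process containers can share files through a named volume and reach each other over a user-defined network:

```shell
# Create a named volume and a network (hypothetical names)
docker volume create shared-data
docker network create app-net

# A Redis container, reachable as "redis" by other containers on app-net
docker run -d --name redis --network app-net redis

# An nginx container on the same network, mounting the same volume at /data
docker run -d --name web --network app-net -v shared-data:/data nginx
```

Each process stays in its own container; the two only meet at the network and the shared volume, which is what keeps the images independently maintainable.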

Each container is built on its own base image (typically specified by the FROM line in your Dockerfile). In your case you are not running any particular process; you simply want to share libraries. That is not what Docker was designed for, and I am not even sure it is doable; it certainly seems a strange way of doing things.

My suggestion is therefore to create a base image containing the least common denominator (the shared libraries common to all other images), and to have your other images use it as their FROM image.
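A minimal sketch of that layering (the image name, base distribution and package list are hypothetical, chosen only for illustration): one base image installs the shared dependencies, and each library image builds FROM it, so every image maintains only its own delta.

```dockerfile
# base/Dockerfile — shared dependencies, built once as e.g. "mybase:latest"
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    build-essential python-dev python-numpy python-scipy

# caffe/Dockerfile — only Caffe-specific setup on top of the base
#   FROM mybase:latest
#   RUN ...install Caffe and its remaining dependencies...

# ipopt/Dockerfile — only Ipopt-specific setup on top of the base
#   FROM mybase:latest
#   RUN ...install Ipopt...
```

Rebuilding the base image propagates updates to the shared libraries, while the Caffe and Ipopt Dockerfiles stay small and separately maintainable.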

Furthermore, if you need a more complex environment with lots of dependencies and heavy provisioning, I suggest you take a look at provisioning tools such as Chef or Puppet.

Docker linking is about linking microservices, that is, separate processes, and as far as I can see it has no bearing on your question.

There is no out-of-the-box facility to compose separate Docker images into one container in the way you describe as "linking" in your question.

If you don't want that giant monolithic image, you might consider using provisioning tools such as Puppet, Chef or Ansible together with Docker; there you could in theory reuse existing recipes/playbooks for the libraries you need. I would be surprised, though, if that approach turned out much easier for you than maintaining your "big monolithic" Dockerfile.
