Docker options for chaining dockerfiles

I would love to make reusable dockerfiles. I don’t see any way to do this since there are no variables and no include option, correct? Anyone have a good option (besides dynamically creating / modifying the dockerfile at build time)?

Would love to be able to reuse production env dockerfiles, since there is 1 per service (ex: 1 web app, 1 database, 1 cache, etc.) and build all of them into the same image so I can run a full app in 1 container (1 possible use case is running multiple feature branches on a single server).

Yeah, I know I could build each container separately and link them, but that seems like total overkill to set up and manage (since in some cases I may need 6 or more containers for a single feature branch).

3 Answers

    If you do not want to create multiple images, you can always create a monster image with everything installed in it. In my opinion that is a code smell, but it is possible. If you will never split the services across servers, this might even be valid.

    If you want to use the same kind of image in different contexts, you can use environment variables to parameterize behavior and still use the same image. For instance, have a bash script that expects YOUR_VARIABLE and then start your container using the -e option (see the documentation). E.g.

    docker run -e "YOUR_VARIABLE=value" --rm ubuntu env
    
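One way to act on such a variable is a small entrypoint script. A minimal sketch, assuming the variable name YOUR_VARIABLE from above and made-up service names:

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): pick what to start based on YOUR_VARIABLE,
# which is supplied at run time via `docker run -e "YOUR_VARIABLE=..."`.
start_service() {
  case "${YOUR_VARIABLE:-default}" in
    web)   echo "starting web server" ;;
    cache) echo "starting cache" ;;
    *)     echo "starting with default settings" ;;
  esac
}
start_service
```

The same image then behaves differently per container depending on the value passed with -e.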

    There are tools such as fig that make bringing up a number of containers easy. You still get the separate systems that can be deployed individually, but they come up and are used as a whole.
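For example, a minimal fig.yml sketch (fig was later superseded by Docker Compose; the service and image names below are made up):

```
web:
  build: ./web
  links:
    - db
    - cache
db:
  image: postgres
cache:
  image: redis
```

Running `fig up` then starts all three containers together, while each remains a separate container that can be rebuilt or deployed on its own.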

    You can use tags to name docker images. This makes it somewhat easy to chain Dockerfiles:

    Assuming a directory . with sub-directories for modules module1, module2, …

    build_all.sh:

    docker build -t "module1" module1
    docker build -t "module2" module2
    

    If this is the sequence used for building, then in module2/Dockerfile you can use

    FROM module1
    

    Not perfect, because why would module2 need to know ahead of time about module1, but it works.
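As a sketch, the chain looks like this (the base image and packages are illustrative, not from the original answer):

```
# module1/Dockerfile: the shared base layer
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl

# module2/Dockerfile: extends the image tagged "module1" by build_all.sh
FROM module1
RUN apt-get update && apt-get install -y git
```

Because `FROM module1` resolves against local image tags, build_all.sh must build module1 before module2, which is why the build order matters.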
