Docker options for chaining dockerfiles
I would love to make reusable Dockerfiles. I don't see any way to do this since there are no variables and no include option, correct? Does anyone have a good option (besides dynamically creating/modifying the Dockerfile at build time)?
I would love to be able to reuse the production-environment Dockerfiles, since there is one per service (e.g. one web app, one database, one cache, etc.), and build all of them into the same image so I can run a full app in one container (one possible use case is running multiple feature branches on a single server).
Yeah, I know I could build each container separately and link them, but that seems like total overkill to set up and manage (since in some cases I may need six or more containers for a single feature branch).
3 Answers to "Docker options for chaining dockerfiles"
If you do not want to create multiple images, you can always create a monster image with everything installed in it. In my opinion that is a code smell, but it is possible. If you will never split the services across servers, this might even be valid.
If you want to use the same kind of image in different contexts, you can use environment variables to parameterize behavior and still use the same image. For instance, have a bash script that expects YOUR_VARIABLE, and then start your container using the -e option (see the documentation). E.g.
docker run -e "YOUR_VARIABLE=value" --rm ubuntu env
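Inside the container, the entrypoint script can then branch on that variable. A minimal sketch, assuming the variable name YOUR_VARIABLE and a made-up default:

```shell
# Hypothetical entrypoint logic; YOUR_VARIABLE and "default" are assumptions.
# Use the value passed via "docker run -e", falling back to a default when unset.
MODE="${YOUR_VARIABLE:-default}"
echo "starting service in mode: $MODE"
```

Running the container with `-e "YOUR_VARIABLE=production"` would then switch the behavior without rebuilding the image.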
There are tools such as fig that make bringing up a number of containers easy. You still get the separate systems that can be deployed individually, but they come up and are used as a whole.
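For illustration, a minimal fig.yml sketch for the services mentioned in the question (service names and images here are assumptions):

```yaml
# Hypothetical fig.yml: one web app, one database, one cache.
web:
  build: .
  links:
    - db
    - cache
  ports:
    - "8000:8000"
db:
  image: postgres
cache:
  image: redis
```

A single `fig up` then starts all three containers together, while each image stays separately buildable and deployable.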
You can use tags to name Docker images. This makes it somewhat easy to chain Dockerfiles:
Assuming a directory . with sub-directories for modules:
docker build -t "module1" module1
docker build -t "module2" module2
If this is the sequence used for building, then in module2/Dockerfile you can use FROM module1.
Not perfect, because why would module2 need to know ahead of time about module1, but it works.
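As a sketch, module2's Dockerfile could look like this (the RUN step is a placeholder for module2's own setup):

```dockerfile
# module2/Dockerfile: builds on the image tagged "module1" by the
# earlier "docker build -t module1 module1" command.
FROM module1

# module2-specific steps go here; this package is only an example.
RUN apt-get update && apt-get install -y redis-server
```

The coupling goes the wrong way (module2 hard-codes the module1 tag), but the resulting image contains both modules' layers.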