Should I make a service for shared Docker dependencies?

I have three different services that use GraphicsMagick as a dependency, and I’m just starting with Docker. So I’m wondering: should I make a separate lightweight API for GraphicsMagick (maybe using PHP) and put it in its own Docker container, since GraphicsMagick is just an executable?

Or would that be slow, and is the best approach instead to install GraphicsMagick as a dependency in each service’s container?

Thanks!

  • One solution for “Should I make a service for shared Docker dependencies?”

    As mentioned in the comments and in your original question, there are two approaches here. One is to just install GraphicsMagick in a base image, or in each individual service image. The other is to build a separate service specific to GraphicsMagick (a sort of worker, or image-manipulation API). The answer will depend on which pros and cons matter most to you right now.

    GraphicsMagick in a base image has the advantage that it is easy to implement. You don’t have to build anything extra. The GraphicsMagick binary shouldn’t be much hassle to install, and probably only adds a couple of MB to your resulting images.
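    As a rough sketch of the base-image approach (assuming a Debian-based image, where the distro package is named `graphicsmagick` and provides the `gm` binary):

    ```dockerfile
    # Dockerfile for a shared base image
    FROM debian:bookworm-slim

    # Install the GraphicsMagick CLI from the distro packages,
    # then clean the apt cache to keep the image small
    RUN apt-get update \
     && apt-get install -y --no-install-recommends graphicsmagick \
     && rm -rf /var/lib/apt/lists/*
    ```

    Each of your three service images could then start with `FROM` this base image and have `gm` available on the PATH.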

    Building a separate API service image around GraphicsMagick has the overhead of development time and added service complexity. You’d probably also need to implement some sort of service discovery with this model, so your other service images know how to reach the new API. Down the road, though, this is the more scalable model. Depending on where your load is, it lets the image-manipulation work scale separately from your other service containers, and it can be run on separate hosts if desired, which matters when image manipulation is CPU-heavy enough to starve other services.
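    A minimal sketch of the separate-service model using Docker Compose; the service names, directories, port, and the `GM_API_URL` variable are all made up for illustration, and the `gm-api` wrapper itself is something you would have to write:

    ```yaml
    # docker-compose.yml (hypothetical layout)
    services:
      gm-api:
        # small HTTP wrapper around the gm binary (your own code)
        build: ./gm-api
      web:
        build: ./web
        environment:
          # Compose's built-in DNS resolves the service name "gm-api",
          # which covers basic service discovery on a single host
          GM_API_URL: http://gm-api:8080
        depends_on:
          - gm-api
    ```

    For multi-host setups you’d replace this with whatever discovery mechanism your orchestrator provides.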

    So I would ask yourself these questions:

    • Can you afford the additional development time?
    • Does the application need this separate scalability?
    • Are you comfortable managing an additional service, which might also need some form of service discovery?

    If you can answer yes to all of those questions, then you may well be in a position to build out a separate service for this. Docker definitely lends itself well to service-oriented architectures, and this is probably the more proper way to structure the application. But a lot can be said for something that “just works” and takes minimal time to implement now, especially if it will keep working fine for a good while.
