Should I make a service for shared Docker dependencies?

I have 3 different services that use GraphicsMagick as a dependency, and I'm just starting with Docker. So I'm wondering: since GraphicsMagick is just an executable, should I make a separate lightweight API for it (maybe using PHP) and put it in its own Docker container?

Or would that be slow, and is the better way to install GraphicsMagick as a dependency in each service container?

Answer

    As mentioned in the comments and your original question, there are two approaches here. One is to just install GraphicsMagick in a base image or in each individual service image. The other is to build a separate service (a sort of worker or image-manipulation API) specific to GraphicsMagick. The answer will depend on which pros and cons matter most to you right now.

    GraphicsMagick in a base image has the advantage of being easy to implement. You don't have to build anything extra. The GraphicsMagick binary shouldn't be much hassle to install and probably only adds a few MB to your resulting images.
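
    A minimal sketch of the base-image approach. The image tag `myorg/base-gm` and the Debian base are assumptions; adjust to whatever base your services already use:

    ```dockerfile
    # Shared base image: Dockerfile for myorg/base-gm (name is illustrative)
    FROM debian:bookworm-slim
    RUN apt-get update \
     && apt-get install -y --no-install-recommends graphicsmagick \
     && rm -rf /var/lib/apt/lists/*
    ```

    Each service image then starts from that base (`FROM myorg/base-gm`) and adds its own code, so the GraphicsMagick layer is built once and shared.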

    Building a separate API service image with GraphicsMagick carries the overhead of development time and service complexity. You'd probably also need some form of service discovery with this model, so your other services know how to reach the new API. Down the road, though, this is the more scalable model. Depending on where your load is, it lets you scale image processing separately from your other service containers, and even run it on separate hosts if desired; that matters when image manipulation is CPU-heavy enough to starve other services.
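
    A sketch of the separate-service layout using Docker Compose, where Compose's built-in DNS handles the service discovery. All service names, paths, and the port are assumptions for illustration:

    ```yaml
    # docker-compose.yml sketch (service names and port are hypothetical)
    services:
      gm-api:
        build: ./gm-api          # your small HTTP wrapper around the gm binary
      service-a:
        build: ./service-a
        environment:
          # Compose's internal DNS resolves the hostname "gm-api"
          GM_API_URL: http://gm-api:8080
    ```

    With this layout, `service-a` calls the API over HTTP instead of shelling out to a local binary, and `gm-api` can be scaled independently (e.g. `docker compose up --scale gm-api=3`).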

    So I would ask yourself these questions:

    • Can you afford the additional development time?
    • Does the application need this separate scalability?
    • Are you comfortable managing an additional service, which may also require service discovery?

    If you can answer yes to all of those questions, then you might be in line to build out a separate service for this. Docker lends itself well to service-oriented architectures, and this is probably the more proper way to build out the application. But a lot can be said for something that "just works" and takes minimal time to implement now, especially if it will keep working fine for a good while.
