Should I make a service for shared Docker dependencies?

I have 3 different services that use GraphicsMagick as a dependency, and I’m just starting with Docker. So I’m wondering: since GraphicsMagick is just an executable, should I make a separate light API for GraphicsMagick (maybe using PHP) and put it in its own Docker container?

Or would that be slow, so that the best way is to install GraphicsMagick as a dependency in each service container?

Answer:

    As mentioned in the comments and your original question, there are two approaches here. One is to just install GraphicsMagick in a base image or in the individual service images. The other is to build a separate service specific to GraphicsMagick (a sort of worker, or an image-manipulation API). Which answer is right depends on which pros and cons matter most to you right now.

    GraphicsMagick in a base image has the advantage of being easy to implement: you don’t have to build anything extra. The GraphicsMagick binary shouldn’t be much hassle to install and probably adds only a few megabytes to your resulting images.
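    To illustrate the base-image approach, here is a minimal Dockerfile sketch. The base image name and the choice of `php:8-cli` (a Debian-based image) are assumptions for the example, not anything from the question:

    ```dockerfile
    # Hypothetical shared base image, e.g. built as "myorg/php-gm-base"
    FROM php:8-cli

    # Install the GraphicsMagick binary from the Debian package repository,
    # then clean the apt cache to keep the image small
    RUN apt-get update \
     && apt-get install -y --no-install-recommends graphicsmagick \
     && rm -rf /var/lib/apt/lists/*
    ```

    Each of your three services would then start from `FROM myorg/php-gm-base` and shell out to `gm` locally, with no network hop involved.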

    Building a separate API service image with GraphicsMagick has the overhead of development time and service complexity. You’d probably also need to implement some sort of service discovery with this model so your other service images know how to reach the new API. Down the road, though, this is the more scalable model. Depending on where your load is, the image service can scale separately from your other service containers, and it can also run on separate hosts if desired — which matters especially when image manipulation is CPU-heavy and could starve other services.
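    As a sketch of the separate-service approach: with docker-compose, the built-in network DNS gives you basic service discovery for free, since each service can be reached by its service name. The service names, directories, and port below are hypothetical:

    ```yaml
    # docker-compose.yml sketch — names and port are illustrative only
    version: "3"
    services:
      gm-api:
        build: ./gm-api          # small PHP wrapper exposing gm over HTTP
      web:
        build: ./web
        environment:
          # Compose's network DNS resolves the hostname "gm-api"
          GM_API_URL: http://gm-api:8080
        depends_on:
          - gm-api
    ```

    Your other services would then call the API over HTTP instead of bundling the binary, so only `gm-api` needs to scale when image load grows.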

    So I would ask yourself these questions:

    • Can you afford the additional development time?
    • Does the application need this separate scalability?
    • Are you comfortable managing an additional service, which may also need some form of service discovery?

    If you can answer yes to all of those questions, then you’re probably in a position to build out a separate service for this. Docker definitely lends itself well to service-oriented architectures, and this is arguably the more proper way to structure the application. But a lot can be said for something that “just works” and takes minimal time to implement now, especially if it will keep working fine for a good while.
