Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL, etc.) as well as the dependencies my project needs (project files, Git, Composer, etc.).

From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run continuously (Apache server, MySQL server). When I spin up these 'service' containers using -d, they remain running (visible in docker ps) since their main process runs continuously.
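For example, something like this is what I mean by the 'service' containers (the image names and tags here are just illustrative):

    # each long-running service gets its own detached container
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
    docker run -d --name apache -p 8080:80 php:5.6-apache

Both keep showing up in docker ps because mysqld and apache2 are their foreground processes.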

However, when I remove all the services from my project container, there is no main process left to run continuously. This means my container exits immediately once spun up.

I have read about the 'hacks' of running other processes like tail -f /dev/null or sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such a container?), and even leaving the container running in the foreground (taking up a terminal console...).
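To be concrete, these are the sort of workarounds I mean (my-project-image is a hypothetical image containing just my project files and tools):

    # give an otherwise idle container a do-nothing main process so it stays up
    docker run -d --name project my-project-image tail -f /dev/null
    # or
    docker run -d --name project my-project-image sleep infinity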

How do I network such a container to keep it running, detached like the abstracted services, without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples from other projects (please link any).

EDIT: I am not talking about volumes / storage containers to store the data my project processes, but rather how I can use a container to hold the project itself and its dependencies that aren't services (project files, Git, Composer).

2 Answers

When you run the container, try running it with the flags...

    docker run -dt ..... etc
    

You might even try...

    docker run -dti ..... etc
    

Let me know if this brings any joy. It has certainly worked for me on occasions.
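For instance, something along these lines (the image name is only a placeholder):

    # -d detach, -t allocate a pseudo-TTY, -i keep STDIN open;
    # a shell-based image then waits for input instead of exiting immediately
    docker run -dti --name project my-project-image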

I know you wanted to avoid hacks, but if the above fails then also add...

    CMD cat 
    

to the end of your Dockerfile. It is a hack, but it is the cleanest hack. 🙂
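Roughly like this (the base image and paths are only an example):

    FROM php:5.6-cli
    WORKDIR /var/www
    COPY . /var/www
    # cat blocks waiting for input when the container is run with -t,
    # so the container stays up even though nothing is really running
    CMD ["cat"]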

So after reading this a few times, along with Joachim Isaksson's comment, I finally get it. Tools don't need their containers to run continuously in order to be used. Proper separation of the project files, the services (MySQL, Apache) and the tools (Git, Composer) is achieved differently.

The project files are persisted within a data volume container. The services are networked, since they expose ports. The tools live in their own containers, which share the project-files data volume; they are not networked. Logs, databases and other output can be persisted in separate volumes.
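Roughly, the setup looks something like this (the names, images and paths are just an illustration):

    # data volume container holding the project files
    docker create -v /var/www --name project-data busybox /bin/true

    # networked services: they expose ports and share the project files
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
    docker run -d --name apache --volumes-from project-data --link mysql:mysql -p 8080:80 php:5.6-apache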

When you wish to run one of these tools, you spin up the tool container, passing the relevant command via docker run. The tool then manipulates the data within the directory persisted in the shared volume. The tool container only lives for as long as that command takes to run, and then it stops.
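So running Composer, for example, becomes something like this (again, the names are illustrative):

    # throwaway tool container: shares the project files, runs one command, then exits
    docker run --rm --volumes-from project-data -w /var/www composer/composer install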

    I don’t know why this took me so long to grasp, but this is the aha moment for me.
