Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL etc.) as well as the dependencies (project files, git, composer etc.) I need for my project.

From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run at length (Apache server, MySQL server). When I spin up these ‘service’ containers using -d, they remain running (visible in docker ps) since their main process runs continuously.

However, when I remove all the services from my project container, there is no main process left to run continuously. This means my container exits immediately once spun up.

I have read about the ‘hacks’ of running other processes like tail -f /dev/null or sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such a container?) and even leaving the container running in the foreground (taking up a terminal console…).

How do I set up such a container so that it keeps running detached, like the abstracted services, but without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples from other projects (please link any).

EDIT: I am not talking about volume/storage containers to store the data my project processes, but rather about how I can use a container to hold the project itself and its dependencies that aren’t services (project files, git, composer).

2 Solutions for “Docker separation of concerns / services”

When you run the container, try running with the flags:

    docker run -dt ..... etc
    

You might even try:

    docker run -dti ..... etc
    

Let me know if this brings any joy. It has certainly worked for me on occasions.

I know you wanted to avoid hacks, but if the above fails then also add:

    CMD cat 
    

to the end of your Dockerfile – it is a hack, but it is the cleanest one 🙂
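As a sketch of how those flags behave (the image and container names here are assumed for illustration, not from the original answer): -d detaches the container and -t allocates a pseudo-TTY, which keeps the image’s default shell process alive so the container does not exit immediately.

```shell
# -d: run detached; -t: allocate a pseudo-TTY so the image's default
# shell keeps running instead of exiting straight away
docker run -dt --name project ubuntu

# The container shows as Up rather than Exited
docker ps

# Open a shell inside it whenever you need to work with the files
docker exec -it project bash
```

This keeps a container alive without tail -f /dev/null or supervisord, though it is still holding an idle shell open rather than a real service.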

So after reading this a few times along with Joachim Isaksson’s comment, I finally get it. Tools don’t need their containers to run continuously in order to be used. The project files, the services (MySQL, Apache) and the tools (git, composer) are each separated differently.

The project files are persisted within a data volume container. The services are networked, since they expose ports. The tools live in their own containers, which share the project files’ data volume – they are not networked. Logs, databases and other output can be persisted in different volumes.

When you wish to run one of these tools, you spin up the tool container by passing the relevant command using docker run. The tool then manipulates the data within the directory persisted in the shared volume. The container only persists for as long as the command manipulating the data takes to run, and then it stops.
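The whole arrangement might look something like the following sketch. The image, container and path names are assumptions for illustration; composer/composer is one published Composer image, and any PHP/Apache image would do for the service.

```shell
# Data volume container holding the project files; it runs /bin/true
# and exits immediately, existing only to own the /var/www volume
docker create -v /var/www --name project_data php:apache /bin/true

# Long-running service container: networked via an exposed port and
# sharing the project files from the data volume container
docker run -d --name web --volumes-from project_data -p 8080:80 php:apache

# Ephemeral tool container: runs composer against the shared project
# files and exits when done; --rm removes it after the command finishes
docker run --rm --volumes-from project_data -w /var/www composer/composer install
```

Only the service container stays running; each tool invocation is a short-lived container that does its work on the shared volume and then disappears.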

    I don’t know why this took me so long to grasp, but this is the aha moment for me.
