Dockerized PHP Application Architecture Best Practices

I’m pretty new to Docker. I’ve played around with it a lot in my development environment, but I’ve only tried to deploy a real app once.

I’ve read tons of documentation and watched dozens of videos, but I still have a lot of questions.
I understand that Docker is just a tool that can be used in many different ways, but now I’m trying to find the best way to develop and deploy web apps.

    I’ll use a real PHP App case to make my question more concrete and practical.
    To keep it simple, let’s assume I’m building a very simple PHP App, so I’ll need:

    1. Web Server (nginx)
    2. PHP Interpreter (php-fpm or hhvm)
    3. Persistent storage for SESSIONs
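The stack above could be sketched as a docker-compose file like the following. This is only an illustration, not the post’s actual setup: the image tags, host paths, and the named `sessions` volume are all assumptions.

```yaml
# docker-compose.yml — a minimal sketch of the nginx + php-fpm + session storage stack
version: "2"
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./public:/var/www/html        # static files (.img, .css, .js…)
  php:
    image: php:fpm
    volumes:
      - ./public:/var/www/html        # the same tree, so php-fpm sees the PHP files
      - sessions:/var/lib/php/sessions  # persistent storage for session files
volumes:
  sessions:                           # named volume: survives container removal
```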

    The best example/tutorial I could find was this one-year-old post, in which Dylan proposes this kind of structure:
    (diagram: Dockerized PHP App)

    He uses a Data Only container for the whole PHP project’s files and logs, and docker-compose to run all these images with the proper links. In the development environment I’d mount a host directory as a data volume, and for production I’d copy the files directly into the Data Only image and deploy.

    This is understandable. I do want to share data between nginx and php-fpm: nginx needs access to the static files (.img, .css, .js…) and php-fpm needs access to the PHP files. And both services are separated, so they can be updated/changed independently.

    The Data Only container shares a data volume that is linked to nginx and php-fpm via the --volumes-from option.
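The pattern described above can be sketched with plain docker commands. The image and container names here are hypothetical, purely for illustration:

```shell
# Create a data-only container whose sole purpose is to own the volume
docker create -v /var/www/html --name app-data my-app-data /bin/true

# nginx and php-fpm both mount that same volume via --volumes-from
docker run -d --volumes-from app-data --name web nginx
docker run -d --volumes-from app-data --name php php:fpm
```

Both service containers then see the same `/var/www/html` tree, which is exactly what makes the code-deployment problem below surface.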

    But as I understand it, there’s a problem with Data Only containers and the -v flag.
    The official Docker documentation says a data volume is a specially-designated directory meant to persist data. It is said that

    Data volumes persist even if the container itself is deleted.

    So this solution is great for data I do not want to lose, like session files, DB storage, logs, etc. But not for my code files, right? I do want to change my code files. I want to deploy changes without rebuilding the nginx and php-fpm images.

    Another problem: when I tried this approach, I could not deploy code changes until I had stopped all running containers, removed them and their images, and rebuilt everything from scratch. Just rebuilding and redeploying the Data Only image did nothing!
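That behavior matches how docker-compose treats volumes: rebuilding an image never touches an existing volume, so the old code in the volume keeps shadowing the new image contents until the containers and their volumes are removed. A sketch of the heavy-handed workaround:

```shell
docker-compose build            # rebuild the images with the new code
docker-compose down --volumes   # remove containers AND their volumes
                                # note: this also discards session data kept in volumes
docker-compose up -d            # recreate everything from the fresh images
```

Which is precisely why, as the answer below argues, a data volume is the wrong place for code in the first place.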

    I’ve seen other implementations where the data is stored directly in the interpreter container, but that’s not an option because I need nginx to have access to those files as well.

    The question is: what are the best practices for where to put my project code files, and how should changes be deployed for this kind of app?

    Thanks.

  • One solution for “Dockerized PHP Application Architecture Best Practices”

    Right, don’t use a data volume for your code. docker-compose makes a point to re-use old volumes (so you don’t lose data), so you’d always be stuck with old code.

    Use a COPY directive in the nginx Dockerfile to add the static resources, and a COPY in the application (php-fpm) Dockerfile to add the code. In dev you can use a host volume so that you don’t have to restart containers to see your code changes (assuming the web server supports picking up changes).
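A sketch of what this answer suggests; the base images and the `public/` and `src/` paths are assumptions for illustration:

```dockerfile
# nginx/Dockerfile — bake the static assets into the web server image
FROM nginx:latest
COPY public/ /var/www/html/

# php/Dockerfile — bake the PHP sources into the application image
FROM php:fpm
COPY src/ /var/www/html/
```

In development, a `docker-compose.override.yml` can bind-mount the host directory over those same paths, so edits show up immediately without rebuilding either image.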
