What is the convention to include supporting services in a project that uses docker?

There are lots of articles and documentation for Docker describing how to configure a web node, but I find relatively little on how to include profiles for supporting services such as a database server, caching, workers, and a message queue alongside it.

If I’m trying to create an all-inclusive project with my Dockerfiles in source control, how am I supposed to structure things?

Not an exhaustive list, but such that:

  • I can maybe spin up the entire cluster in one command
  • All processes can communicate with each other
  • I am placing Dockerfile(s?) in a meaningful location
  • I can select and signal the correct environment to run in to my apps
  • I correctly share my app’s source code to the web node

One solution, addressing each requirement in turn:

    I can maybe spin up the entire cluster in one command

    Use fig: define all your services in a single fig.yml and bring them up with one command.

    All processes can communicate with each other

    You can link containers using:

    docker run --link other-container:ALIAS -it this-image
    

    All ports exposed by the linked container will be available to this one.
    The environment variables [ALIAS]_PORT_[PORT]_TCP_ADDR and [ALIAS]_PORT_[PORT]_TCP_PORT tell you where to connect to the other host.
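For example, a startup script inside this container could read those variables like so (the alias `db` and the fallback values are assumptions for illustration, not part of the original setup):

```shell
# After `docker run --link mysql-container:db ...`, Docker injects
# DB_PORT_3306_TCP_ADDR / DB_PORT_3306_TCP_PORT into this container.
DB_HOST="${DB_PORT_3306_TCP_ADDR:-127.0.0.1}"   # fall back if not linked
DB_PORT="${DB_PORT_3306_TCP_PORT:-3306}"
echo "connecting to database at ${DB_HOST}:${DB_PORT}"
```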

    I am placing Dockerfile(s?) in a meaningful location

    There is no single good answer here, but other discussions on this are relevant. Place each Dockerfile in its own folder; then you can put them anywhere, as they are not semantically linked to your project’s source.

    I can select and signal the correct environment to run in to my apps

    You can use:

    docker run -e VAR_NAME=value ...
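As a sketch, an app started with `docker run -e APP_ENV=production ...` could select its configuration like this (the variable name `APP_ENV` and the config paths are placeholders):

```shell
# Select a config file based on the environment passed in via -e
case "${APP_ENV:-development}" in
  production) CONFIG_FILE=config/production.conf ;;
  staging)    CONFIG_FILE=config/staging.conf ;;
  *)          CONFIG_FILE=config/development.conf ;;
esac
echo "starting with ${CONFIG_FILE}"
```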
    

    I correctly share my app’s source code to the web node

    You can use the following in your Dockerfile to pull source into your container at build time.

    ADD http://url-to-your-source/ /folder/in/container
    

    Putting it all together:

    github.com/user/project/src/...
    
    # Or each of these in their own github repo if you want to be super canonical
    github.com/user/project/docker-files/base/Dockerfile
    github.com/user/project/docker-files/web/Dockerfile
    github.com/user/project/docker-files/db/Dockerfile
    github.com/user/project/docker-files/worker/Dockerfile
    
    github.com/user/project/fig.yml
    

    Your fig.yml would look something like

    base:
      build: ./docker-files/base/
    web:
      build: ./docker-files/web/
      command: Something
      links:
       - db
       - worker
      ports:
       - "8000:8000"
    db:
      build: ./docker-files/db/
      expose:
       - "3306"
    worker:
      build: ./docker-files/worker/
      expose:
       - "9000"
    

    The entry point of the web node should be a script which looks up the environment variables below and sets up configuration files accordingly before starting your web server. This must be done in a script, not in the Dockerfile, because the links are created at run time, not build time.

    DB_PORT_3306_TCP_ADDR
    DB_PORT_3306_TCP_PORT
    WORKER_PORT_9000_TCP_ADDR
    WORKER_PORT_9000_TCP_PORT
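A minimal sketch of such an entrypoint script (the config file names and the final server command are placeholders, not part of the original answer):

```shell
#!/bin/sh
# entrypoint.sh: render config files from the link variables above,
# then hand off to the real server process.
set -e

cat > database.conf <<EOF
host=${DB_PORT_3306_TCP_ADDR:-127.0.0.1}
port=${DB_PORT_3306_TCP_PORT:-3306}
EOF

cat > worker.conf <<EOF
host=${WORKER_PORT_9000_TCP_ADDR:-127.0.0.1}
port=${WORKER_PORT_9000_TCP_PORT:-9000}
EOF

# a real entrypoint would now exec the web server, e.g.:
# exec myserver --config database.conf
```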
    

    Use the following command to pull source into your docker container (if you need source); otherwise you can pull a built artifact from somewhere instead.

    ADD https://api.github.com/repos/user/projects/tarball /tmp/src
    

    To avoid repetition, put common steps (such as installing PHP or pulling source) into the base container, then start your other Dockerfiles with FROM base to build on an image with those steps already performed.
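For example (the distribution, package names, and tags here are illustrative; with fig the base image needs to be built and tagged first, e.g. `docker build -t base ./docker-files/base/`):

```
# docker-files/base/Dockerfile -- common steps, done once
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y php5-cli

# docker-files/web/Dockerfile -- inherits everything from base
FROM base
ADD https://api.github.com/repos/user/projects/tarball /tmp/src
```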

    You can then do:

    fig build
    fig up
    