Docker Swarm, Compose and PostgreSQL

I’m attempting to run a containerized application that relies on a PostgreSQL database. I am using Docker Swarm and docker-compose to manage it.

I manually create a volume on a host that is part of the Swarm cluster ( docker volume create --name=my_data ) and use this volume in my compose file, declaring it as external.
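For reference, a version 2 compose file references an externally created volume like this (a minimal sketch; the db service name is a placeholder):

```yaml
version: '2'
services:
  db:
    image: postgres:9.4.5
    volumes:
      # mount the pre-created volume at postgres's data directory
      - my_data:/var/lib/postgresql/data
volumes:
  my_data:
    # tell compose the volume already exists; don't create it
    external: true
```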

I find that when the container starts, the data volume and the postgres container are not necessarily co-located on the same host. When they are not co-located, Docker seems to create a new, empty volume on whichever host the container lands on.
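With the classic (pre-1.12) Swarm scheduler and a version 2 compose file, one workaround is to pin the database container to the node that holds the volume, using a scheduling constraint passed as an environment variable (a sketch; the node name db-node-1 is a placeholder):

```yaml
  postgres:
    image: postgres:9.4.5
    environment:
      # classic Swarm honours constraint filters passed as env vars;
      # this pins the container to the node where the volume was created
      - "constraint:node==db-node-1"
    volumes:
      - app1_address_data:/var/lib/postgresql/data
```

This keeps the data on one node, at the cost of losing the scheduler's freedom to place the container elsewhere.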

    A little further reading has led me to believe that I should be looking at volume plugins such as Flocker. If I want to achieve persistent data for Swarm applications that use a database, is it best practice to use a volume plugin like this?
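For comparison, with a volume plugin the compose file names the driver in the volume definition instead of marking it external, so the backing data can follow the container to whichever node it is scheduled on (a sketch assuming the Flocker plugin is installed on every Swarm node):

```yaml
volumes:
  app1_address_data:
    # the flocker driver moves the backing dataset to the node
    # where the container using it is scheduled
    driver: flocker
```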

    My compose file …

    version: '2'

    services:
      ui:
        image: ui
        ports:
         - 8080:8080
        environment:
          ADDRESS_SERVICE_URI: http://camel:8091
      camel:
        image: camel
        ports:
         - 18081:8081
         - 18091:8091
        expose:
         - 8081
         - 8091
      postgres:
        image: postgres:9.4.5
        volumes:
         - app1_address_data:/var/lib/postgresql/data

    volumes:
      app1_address_data:
        external: true

    networks:
      default:
        external:
          name: my-net
