Centralized team development environment with Docker

I want to build a “centralized” development environment using Docker for my development team (4 PHP developers).

  • I have one big Linux server (lots of RAM, disk, and CPU) that runs the containers.
  • All developers have an account on this Linux server (a home directory) where they put (git clone) the project source code. Locally (on their desktop machines) they have access to their home directory via a network share.
  • I want all developers to be able to work at the same time on the same projects, but view the results of their code edits in different containers (or sets of containers for projects that use container linking).

The Docker PHP development environment by itself is not a problem. I have already tried something like that with success: http://geoffrey.io/a-php-development-environment-with-docker.html

I can use fig, with a fig.yml at the root of each project’s source code, so each developer can do a fig up to launch the set of containers for a given project. I can even use a different FIG_PROJECT_NAME environment variable for each account, so I suppose that two developers can fig up the same project and there will be no container name collisions.
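For example, a minimal fig.yml for such a project could look something like this (the image name and paths are just placeholders, not my actual setup):

```yaml
# fig.yml at the root of the project (image and paths are placeholders)
web:
  image: php:5.6-apache
  ports:
    - "80"              # container port 80 gets a random port on the host
  volumes:
    - .:/var/www/html   # live-edit the checkout from the home directory
```

Each developer would then launch it under his own project name, e.g. `FIG_PROJECT_NAME=$USER fig up -d`, so the container names stay distinct.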

Does it make sense?

But afterwards, I don’t really know how to dynamically give access to the running containers: when running, there will typically be a web server in a container mapped to a random port on the host. How can I set up a sort of “dynamic DNS” to point to the running container(s), accessible, let’s say, through an nginx reverse proxy (the vhost creation and destruction have to be dynamic too)?
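To make the problem concrete, assuming a service called `web` exposing port 80, each running instance ends up on a different random host port that has to be looked up somehow:

```sh
$ fig up -d
$ fig port web 80      # prints something like 0.0.0.0:49153
$ docker ps            # also shows the randomly assigned host ports
```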

To summarize, the workflow I would like to have:

  • A developer sshes into the dev env (the big Linux server).
  • From his home directory he goes into the project directory and does a fig up.
  • A vhost is created in the nginx reverse proxy, pointing to the running container, and a DNS entry (or /etc/hosts entry) is added matching the server_name of this generated vhost (see the vhost sketch after this list).
  • The source code is mounted into the container from a host directory (-v host/dir:container/dir), so the developer can edit any file while the container is running.
  • The result can be viewed by accessing the vhost, for example:
    randomly-generated-id.dev.example.org
  • When the changes are OK, the developer can do a git commit/push.
  • Then the dev does a fig stop, which in turn deletes the corresponding vhost in the nginx reverse proxy and also deletes the dynamic DNS entry.
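For the vhost part, what I imagine being generated for each running instance is roughly this (the hostname and upstream port below are just an example; the port would be whatever Docker assigned on the host):

```nginx
# /etc/nginx/conf.d/randomly-generated-id.dev.example.org.conf
# created on fig up, removed on fig stop
server {
    listen 80;
    server_name randomly-generated-id.dev.example.org;

    location / {
        proxy_pass http://127.0.0.1:49153;   # random host port of the web container
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

On the DNS side, a wildcard record for *.dev.example.org (or a dnsmasq rule like address=/dev.example.org/<server-ip>) pointing at the big server would avoid creating and deleting individual entries.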

So, how would you do a setup like this? I mentioned tools like fig, but if you have any other suggestions … just remember that I would like to keep a lightweight workflow (after all, we are a small team :))

Thanks for your help.

One solution for “Centralized team development environment with Docker”

“Does it make sense?”

Yes, that setup makes sense.

I would suggest taking a look at one of the projects designed to create DNS entries for containers as they start. Then just point your DNS server at it and you should get a nice domain name every time someone starts up an environment (I don’t think you’ll need an nginx proxy). But you might also be interested in this approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
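If you do go the nginx route, the approach in that last link can be set up with the jwilder/nginx-proxy image; roughly (the hostname below is just an example):

```sh
# run the proxy once on the host; it watches the Docker socket
# and regenerates its nginx config as containers come and go
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```

Then each web container (e.g. via an environment: entry in fig.yml) just needs a VIRTUAL_HOST variable such as VIRTUAL_HOST=myproject-alice.dev.example.org, and the proxy adds and removes the vhost automatically when containers start and stop.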
