Docker: Best practice for development and production environment

Suppose I have a simple node.js app. I can build a container to run the app with a simple Dockerfile like this:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nodejs nodejs-legacy npm
COPY . /app
WORKDIR /app
RUN npm install
CMD node index.js

This will copy the source code into the container and I can ship it off to a registry no problem.
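Building and shipping it might look like this (the image name and registry host below are placeholders, not from the question):

    docker build -t registry.example.com/my-node-app:1.0 .
    docker push registry.example.com/my-node-app:1.0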

But for development I don’t want to rebuild the container for every change in my code. So naturally, I use a volume in combination with nodemon. Here are my questions:

    • How do I keep the different configurations? Two dockerfiles? Use compose with two different compose files?
• The node_modules folder on my host is different from the one I need in the container (i.e. some packages are installed globally on the host). Can I exclude it from the volume? If so, I need to run npm install after mounting the volume. How do I do this?

So my question is really: how do I keep dev and deploy environments separate? Two Dockerfiles? Two compose files? Are there any best practices?

3 Solutions for “Docker: Best practice for development and production environment”

So the way I handle it is I have 2 Dockerfiles: `Dockerfile` for production and a separate one for development.

In the development one I have:

    FROM node:6
    # Update the repository
    RUN apt-get update
    # useful tools if need to ssh in or used by other tools
    RUN apt-get install -y curl net-tools jq
    # app location
    ENV ROOT /usr/src/app
    COPY package.json /usr/src/app/
    # copy over private npm repo access file
    ADD .npmrc /usr/src/app/.npmrc
# set working directory
WORKDIR ${ROOT}
    # install packages
    RUN npm install
    # copy all other files over
    COPY . ${ROOT}
    # start it up
    CMD [ "npm", "run", "start" ]
# port the app listens on
    EXPOSE 3000
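One thing this answer doesn’t show, but which pairs naturally with the `COPY . ${ROOT}` step (and addresses the asker’s node_modules concern at build time), is a `.dockerignore` file so the host’s modules never enter the build context. A typical one might look like this (an assumption, not part of the original answer):

    node_modules
    npm-debug.log
    .git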

My npm scripts look like this:

"scripts": {
    "start": "node_modules/.bin/supervisor -e js,json --watch './src/' --no-restart-on error ./index.js",
    "start-production": "node index.js"
}

You will notice it uses supervisor for start, so any change to any file under src will cause it to restart the server without requiring a restart of the Docker container.

Last are the Docker Compose files, one for development and one for production (the service name below is illustrative):

    app:
      build: .
      volumes:
        - "./src:/usr/src/app/src"
        - "./node_modules:/usr/src/node_modules"
      ports:
        - "3000:3000"

and for production:

    app:
      build: .
      dockerfile: Dockerfile
      ports:
        - "3000:3000"

So you can see that in dev mode it mounts the current directory’s src folder into the container at /usr/src/app/src, and also the node_modules directory at /usr/src/node_modules.

This makes it so that I can make changes locally and save; the volume updates the container’s files, and then supervisor sees the change and restarts the server.

**Note: as it doesn’t watch the node_modules folder, you have to change another file in the src directory to trigger the restart.**
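As for the asker’s question about running `npm install` after mounting the volume: with a setup like this you would typically run it through the dev container itself, e.g. (the service name `app` is hypothetical):

    docker-compose run app npm install

This runs the install inside the container’s environment, so any native modules are built for the container’s platform rather than the host’s.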

Use environment variables. See the Docker documentation on environment variables. This is the recommended way, and it works in production as well.
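Concretely, in a compose file you might pass per-environment configuration like this (the service name and values are illustrative):

    app:
      build: .
      environment:
        - NODE_ENV=production
      env_file:
        - production.env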

You can use a single Dockerfile in which you just declare a VOLUME instruction.

Remember that a volume won’t be mounted unless you specify it explicitly during docker run with the -v <path>:<path> option. Given that, you can declare multiple VOLUMEs even in your prod environment.
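A minimal sketch of that approach (the paths and image name are illustrative):

    # In the single Dockerfile, declare the mount point:
    VOLUME /usr/src/app/src

In development you mount over it with `docker run -v "$(pwd)/src:/usr/src/app/src" my-image`; in production you simply omit `-v` and the image’s baked-in files are used.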
