Docker images for application packaging

There seem to be two common practices for application packaging and deployment:

  1. create a Docker image and deploy it
  2. build and deploy the application from the ground up.

I am confused about how to use option 1. The premise is that you build a Docker image once and reuse it on any platform. But how is this viable in practice, given that an environment often has platform- and application-specific configuration? The Docker image from my test environment cannot be deployed to production, as it contains mocks and test-level configuration.

Answer:

    The idea of packaging an application as a Docker image is to have all external/system dependencies embedded in the image itself: a specific version of an external runtime such as Java or Ruby, the base GNU/Linux tooling your application relies on (no more surprises from differing versions of awk or grep), and so on.

    From my point of view, it is acceptable for a development image and a production image to differ slightly, but those differences should be limited to minor configuration parameters such as the log level. The advantage of using containers as the distribution unit for your app is that you avoid all the pain caused by environmental differences, and you gain a new, standard way to deploy "web-scale" architectures and elastic platforms. Having some external services mocked in your test/development system should not be a problem; if it is, the problem lies with the mock itself. The mock should not be baked into your application container: run it as a separate image (or, where possible, avoid mocking the service altogether and run the real service as a container).
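    To illustrate the "same image, minor config differences" idea, here is a minimal sketch (the LOG_LEVEL variable name and the echo placeholder are assumptions, not from the original answer): the image ships one startup script, and each environment only overrides an environment variable.

```shell
# Identical in every environment; only the LOG_LEVEL environment
# variable injected at `docker run` time differs between dev and prod.
LOG_LEVEL="${LOG_LEVEL:-info}"   # sensible default when nothing is injected
echo "starting application with log level: $LOG_LEVEL"
# exec ./myapp --log-level "$LOG_LEVEL"   # hypothetical app binary
```

    You would then run the very same image everywhere, e.g. `docker run -e LOG_LEVEL=debug myimage` in development and `docker run -e LOG_LEVEL=warn myimage` in production.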

    Edit 1:
    As a general approach, if you are using Docker as a tool for continuous integration or for deployment to production, I would not recommend having different containers for development and production. If you have experience with IT automation tools such as Puppet, Chef, Ansible, or Salt, they are an easy and probably fast way to configure your containers (some, like Chef with its Docker-specific approach chef-container, have particular advantages here), and they are a good option to consider if your infrastructure is already built with them.
    But if you are building/designing a new architecture based on Docker, I would look at more decentralized, container-oriented options such as Consul or etcd to manage configuration templates and data, service discovery, and elastic deployment with an orchestrator.
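    As a sketch of that idea, the container resolves its configuration from a central key/value store at startup instead of baking it into the image. The store is simulated below with a local file so the snippet runs standalone; with a real Consul agent the lookup would instead be an HTTP call such as `curl -s http://consul:8500/v1/kv/myapp/log_level?raw` (the endpoint, key name, and file path here are all assumptions for illustration).

```shell
# Simulated KV store: one key holding this app's log level.
mkdir -p /tmp/kv && echo "warn" > /tmp/kv/myapp_log_level

# At container startup, resolve config from the central store, not the image.
# Real version would be (assumed endpoint/key):
#   LOG_LEVEL=$(curl -s http://consul:8500/v1/kv/myapp/log_level?raw)
LOG_LEVEL=$(cat /tmp/kv/myapp_log_level)
echo "resolved log level from KV store: $LOG_LEVEL"
```

    The image stays identical across environments; changing behavior means changing a key in the store, not rebuilding or repackaging anything.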
