Docker images for application packaging

There seem to be two practices for application packaging and deployment:

  1. create a Docker image and deploy it
  2. build and deploy the application from the ground up.

I am confused about how to use option 1. The premise is that you take a Docker image and re-use it on any platform. But how is this viable in practice, when an environment often has platform- and application-specific configuration? The Docker image from my test environment cannot be deployed to production, as it contains mocks and test-level configuration.

  One solution collected from the web for “Docker images for application packaging”

    The idea of packaging an application as a Docker image is to have all external/system configuration embedded in the application itself: a specific version of an external runtime such as Java or Ruby, the basic GNU/Linux software present in the system (no more differing versions of awk or grep), etc.
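As a sketch of this idea, a minimal Dockerfile (the image name, `app.py`, and `requirements.txt` are illustrative) pins the runtime and system tools inside the image, so every environment runs the same stack regardless of what is installed on the host:

```dockerfile
# Illustrative example: pin an exact runtime version so every
# environment runs the same interpreter and system libraries.
FROM python:3.11-slim

# System dependencies are baked into the image, not taken from the host.
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

CMD ["python", "app.py"]
```

Because the base image and dependency versions are fixed in the Dockerfile, the resulting image is the same artifact whether it is built on a developer laptop or a CI server.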

    From my point of view, it is possible to have some slight differences between a development and a production image, but these differences should be limited to minor configuration parameters such as the log level. The advantage of using a container as the distribution system for your app is to avoid all the pain related to external differences, and it also offers a new approach to ‘web-scale’ architectures and elastic platforms, providing a standard way to deploy them. Having some external services mocked in your test/development system should not be a problem; if it is, I think the problem lies in the mock itself. The mock should not be embedded in your application container: you can run it as another image (or, when possible, avoid mocking the service and use the real one as a container).
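One common way to keep those minor differences out of the image is to inject them at run time. A sketch as a Compose fragment (the service, image, and variable names are hypothetical):

```yaml
# docker-compose.yml fragment (names hypothetical): the image is
# identical across environments; only the injected configuration differs.
services:
  app:
    image: myapp:1.0
    environment:
      LOG_LEVEL: ${LOG_LEVEL:-warn}   # e.g. export LOG_LEVEL=debug in dev/test
      API_URL: ${API_URL:-https://api.example.com}   # point at a mock container in test
```

The same `myapp:1.0` image is then promoted unchanged from test to production; only the environment variables (and, for mocks, a separate container on the test network) differ.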

    Edit 1:
    As a general approach, if you are using Docker as a tool for continuous integration or for deployment to production, I would not recommend having different containers for development and for production. If you have experience with IT automation tools such as Puppet, Chef, Ansible, or Salt, they are an easy and probably fast way to configure your containers (some, like Chef with chef-container, even have a Docker-specific approach, which has advantages here), and they are a good option to consider if your infrastructure is already built with them.
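As one illustration of that approach, an Ansible task can deploy the same image everywhere while supplying per-environment configuration from inventory variables. This is a sketch using the `community.docker` collection; the image name and variables are hypothetical:

```yaml
# Illustrative Ansible task: the image is identical in every environment;
# Ansible injects the environment-specific values (e.g. from group_vars).
- name: Run application container
  community.docker.docker_container:
    name: myapp
    image: myapp:1.0
    state: started
    restart_policy: unless-stopped
    env:
      LOG_LEVEL: "{{ log_level }}"   # defined per environment in inventory
```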
    But if you are building/designing a new architecture based on Docker, I would look at more decentralized, container-oriented options such as Consul or etcd to manage configuration templates and data, service discovery, and elastic deployment with an orchestrator.
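With configuration stored in Consul, for example, a tool such as consul-template can render it inside the container at start-up, so the image itself stays environment-agnostic. A sketch (the key paths and file names are hypothetical):

```
# app.conf.ctmpl — consul-template template; keys are looked up in Consul
log_level = "{{ key "myapp/config/log_level" }}"
api_url   = "{{ key "myapp/config/api_url" }}"
```

Running `consul-template -template "app.conf.ctmpl:app.conf"` in the container entrypoint would then generate `app.conf` from whatever values the local Consul cluster holds for that environment.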
