Should docker image be bundled with code?

We are building a SaaS application. For now, this app doesn't have high availability demands. It's mostly going to be used in one time zone and for business purposes only, so a scheduled restart at 3 in the morning shouldn't be a problem at all.

It is an ASP.NET application running on Mono with the FastCGI server. For security reasons, each customer will have his own deployed instance. This is going to be done using Docker containers, with an Nginx server in front to distribute requests based on URL. As I see it, the possible ways to deploy are:

1. Create a Docker image with the FastCGI server only and run the code from a mount point
2. Create a Docker image with the FastCGI server and the code
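Concretely, the difference between the two options could look like this – a rough sketch, where the image names, paths, and container names are all made up:

```shell
# Option 1: one generic FastCGI image, code mounted from the host.
# Updating the code means replacing the files under /srv/customers/acme.
docker run -d --name acme \
  -v /srv/customers/acme/app:/app:ro \
  fcgi-base:latest

# Option 2: code baked into a versioned image (a Dockerfile that does
# "COPY . /app" on top of the same base), then pulled and run as-is.
docker build -t myapp:1.2.0 .
docker run -d --name acme myapp:1.2.0
```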

Pros for 1:

• It's easier to update the code, since the Docker containers can keep running
• Configuration can be bundled with the code
• I could easily (if I ever wanted to) make minor changes for specific clients

Pros for 2:

• Everything is in the image; no need to mess around with additional files – just pull it and run it

Cons for 1:

• A lot of folders for a lot of customers, in addition to the running containers

Cons for 2:

• Configuration can't be in the image (or can it? Should I create specific images per customer with their configuration?) => still additional files for each customer
• Updating a container is harder, since I need to restart it – but that's not a big deal, as stated in the beginning

For now – the first year – the number of customers will be low, and while demand is low, any solution is good enough. What I'm really looking for is what will work with >100 customers.

Also, I want to set up CI for this project in the future, so we wouldn't need to update all customer instances manually. Docker images can have automated builds, but I'm not sure that will be enough.
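To make the fan-out concrete, the per-customer update could be driven by a single customer list – a minimal sketch, where the customer names, image tag, and env-file layout are all hypothetical, and the deploy command is only printed rather than executed:

```shell
#!/bin/sh
# Sketch: one customer list drives an identical update step per instance.
# In a real CI job, the echoed command would actually be run.
IMAGE="myapp:1.2.0"
CUSTOMERS="acme globex initech"

for c in $CUSTOMERS; do
  echo "docker pull $IMAGE && docker stop $c && docker run -d --name $c --env-file /etc/myapp/$c.env $IMAGE"
done
```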

My concern is basically: which solution is less messy and easier to automate?

I couldn't find any Docker best practices covering a similar scenario.
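For reference, the Nginx front end described above could route by URL prefix roughly like this – a sketch where the customer names, container hostnames, and port are made up:

```nginx
# Sketch: route each customer's URL prefix to that customer's container.
# "acme"/"globex" and port 9000 are hypothetical.
server {
    listen 80;

    location /acme/ {
        proxy_pass http://acme:9000/;
    }

    location /globex/ {
        proxy_pass http://globex:9000/;
    }
}
```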

One solution:

It's likely that your application's dependencies will be coupled to the code, so you'll still have to rebuild the images and restart the containers from time to time (whenever you add a new dependency).

This means you would have two upgrade workflows:

• One where you update just the code (when there are no dependency changes)
• One where you update the images too, and restart the containers (when there are dependency changes)

This is most likely undesirable, because it's difficult to automate.

So, I would recommend bundling the code into the image.

You should definitely make sure that your application's configuration can be stored somewhere else, though (e.g. on a volume, or accessed through environment variables).
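For example, the same image could then serve every customer, with only the configuration injected at run time – a sketch with made-up names and paths:

```shell
# One shared image; per-customer config comes from the environment
# and/or a read-only mounted config directory (all names hypothetical).
docker run -d --name acme \
  -e APP_CUSTOMER=acme \
  -v /etc/myapp/acme:/app/config:ro \
  myapp:1.2.0
```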

Ultimately, Docker is a platform to package, deploy, and run applications, so packaging the application (i.e. bundling the code in the image) seems to be the better way to use it.
