Docker: Best way to handle security updates of packages from apt-get inside docker containers

On my current server I use unattended-upgrades to automatically handle security updates,
but I'm wondering what people would suggest for working inside Docker containers.
I have several Docker containers running, one for each service of my app.
Should I set up unattended-upgrades in each of them? Or maybe upgrade the images locally and push the upgraded images up? Any other ideas?

Does anyone have any experience with this in production?
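For context, the second option mentioned above (upgrade the image, then push and redeploy) amounts to pulling updates at image build time. A minimal Dockerfile sketch of that approach (the base image and tag here are just examples; use whatever your services are actually based on):

```dockerfile
# Example base image; substitute the one your services use.
FROM ubuntu:22.04

# Rebuild this image periodically (e.g. from CI) so this layer
# picks up the latest security updates from the apt repositories.
RUN apt-get update && \
    apt-get -y upgrade && \
    rm -rf /var/lib/apt/lists/*
```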

2 Answers

    I apply updates automatically, as you have been doing. I currently have staging containers and nothing in production yet. But there is no harm in applying updates to each container: perhaps some redundant network activity if you have multiple containers based on the same image, but it is harmless otherwise.

    Rebuilding a container strikes me as unnecessarily time-consuming, and it involves a more complex process.

    Regarding time:
    The time to rebuild is added to the time needed to update, so it is "extra" time in that sense. And if your container has start-up processes, those have to be repeated as well.

    Regarding complexity:
    On the one hand, you are simply running updates with apt. On the other, you are effectively acting as an integration server: the more steps, the more that can go wrong.

    Also, updating in place does not turn the container into an unreproducible "golden image", since the update step is easily repeatable.

    And finally, since the kernel is never actually updated inside a container, you never need to restart the container.
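    If you go this update-in-place route, a minimal sketch could look like the following (assuming Debian/Ubuntu-based images; the container names `app-web` and `app-worker` are hypothetical). `DRY_RUN` defaults to 1, so the script only prints the commands it would run until you set `DRY_RUN=0`:

```shell
#!/bin/sh
# Apply apt updates inside each running container.
# DRY_RUN=1 (the default) only prints the commands; DRY_RUN=0 executes them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

# Hypothetical container names -- replace with your own services.
for c in app-web app-worker; do
  run docker exec "$c" apt-get update
  run docker exec "$c" apt-get -y upgrade
done
```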

    I would rebuild the container. Containers are usually built to run a single app, and it makes little sense to update the supporting filesystem and all the included but unused, unexposed apps in there.

    Keeping the data in a separate volume lets you have a script that rebuilds the image and restarts the container. This has the advantage that any other container started from that image, or pushed through a registry to another server, already has all the fixes applied.
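    A sketch of such a rebuild-and-restart script (the image name, container name, volume name, and mount path are all hypothetical; `DRY_RUN` defaults to 1 so it only prints the commands until you set `DRY_RUN=0`):

```shell
#!/bin/sh
# Rebuild the image with a freshly pulled base (so apt packages are current),
# then replace the running container, keeping data in a named volume.
# DRY_RUN=1 (the default) only prints the commands; DRY_RUN=0 executes them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

run docker build --pull -t myapp:latest .
run docker rm -f myapp
run docker run -d --name myapp -v myapp-data:/data myapp:latest
```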
