Docker: Best way to handle security updates of packages from apt-get inside docker containers

On my current server I use unattended-upgrades to automatically handle security updates.
But I'm wondering what people would suggest for working inside Docker containers.
I have several Docker containers running, one for each service of my app.
Should I have unattended-upgrades set up in each? Or maybe upgrade them locally and push the upgraded images up? Any other ideas?

Does anyone have experience with this in production?
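For context, this is roughly what the unattended-upgrades approach looks like baked into a Debian-based image. This is a sketch, not a recommendation: the service binary `/usr/local/bin/myservice` is a placeholder, and note that unattended-upgrades normally runs from a systemd timer, which most containers do not have, so it is invoked explicitly at start here.

```dockerfile
# Hypothetical Debian-based service image with unattended-upgrades installed.
FROM debian:bookworm

RUN apt-get update \
 && apt-get install -y --no-install-recommends unattended-upgrades \
 && rm -rf /var/lib/apt/lists/*

# Apply pending security updates once at container start, then run the
# service. (There is no systemd timer inside the container to do this.)
CMD ["sh", "-c", "unattended-upgrade && exec /usr/local/bin/myservice"]
```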

2 Answers

    I apply updates automatically, as you do now. I currently have staging containers and nothing in production yet, but there is no harm in applying updates to each container: perhaps some redundant network activity if you have multiple containers based on the same image, but it is harmless otherwise.

    Rebuilding a container strikes me as unnecessarily time-consuming, and it involves a more complex process.

    WRT Time:
    The time to rebuild is added to the time needed to update, so it is ‘extra’ time in that sense. And if your container has start-up processes, those have to be repeated.

    WRT Complexity:
    On the one hand you are simply running updates with apt. On the other you are basically acting as an integration server: the more steps, the more that can go wrong.
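The "simply running updates with apt" step can be scripted across running containers with `docker exec`. A minimal sketch, assuming Debian/Ubuntu-based containers; the container names passed on the command line are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: apply package updates inside each running Debian/Ubuntu-based
# container in place, without rebuilding the image.
set -euo pipefail

update_containers() {
  for c in "$@"; do
    echo "Updating container: $c"
    # Run apt inside the container; clean the lists to keep the
    # writable layer small.
    docker exec "$c" sh -c \
      'apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*'
  done
}

# Guarded so the function can be sourced/inspected without touching Docker:
if [ "${1:-}" = "run" ]; then
  shift
  update_containers "$@"   # e.g. ./update.sh run web worker db
fi
```

One caveat worth keeping in mind: updates applied this way live only in each container's writable layer, so they are lost if the container is recreated from the original image.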

    Also, applying updates this way does not create a ‘golden image’ problem, since the update process is easily repeatable.

    And finally, since the kernel is never actually updated inside a container, you never need to restart the container.

    I would rebuild the container. Containers are usually built to run one app, so it makes little sense to update the supporting filesystem and all the included-but-unused packages in place.

    Keeping the data in a separate volume lets you have a script that rebuilds the container and restarts it. The advantage is that any other container started from that image, or pushed through a registry to another server, has all the fixes already applied.
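Such a rebuild-and-restart script could look roughly like this. The image name, container name, and volume mount are all illustrative placeholders:

```shell
#!/usr/bin/env bash
# Sketch: rebuild the image (picking up current security updates via
# apt in the Dockerfile), then replace the running container. Data
# lives in a named volume, so it survives the replacement.
set -euo pipefail

IMAGE="myapp/web"        # hypothetical image name
CONTAINER="myapp-web"    # hypothetical container name

rebuild_and_restart() {
  # --pull refreshes the base image; --no-cache forces apt layers to rerun.
  docker build --pull --no-cache -t "$IMAGE" .
  # Stop and remove the old container (ignore errors if it is not running).
  docker rm -f "$CONTAINER" 2>/dev/null || true
  # Start a fresh container, reattaching the named data volume.
  docker run -d --name "$CONTAINER" -v myapp-data:/var/lib/myapp "$IMAGE"
}

# Guarded so the function can be sourced/inspected without touching Docker:
if [ "${1:-}" = "run" ]; then
  rebuild_and_restart
fi
```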
