Docker: Best way to handle security updates of packages from apt-get inside docker containers

On my current server I use unattended-upgrades to handle security updates automatically.
But I’m wondering what people would suggest for working inside Docker containers.
I have several Docker containers running, one for each service of my app.
Should I set up unattended-upgrades in each of them? Or maybe upgrade the images locally and push the upgraded images up? Any other ideas?
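For concreteness, the unattended-upgrades route would mean baking something roughly like this into each image (just a sketch, assuming Debian/Ubuntu-based images; whether the periodic apt job actually fires also depends on the container running cron or an equivalent):

    # Sketch: install unattended-upgrades in a Debian/Ubuntu-based image and
    # enable the periodic security-update run by writing the apt config directly.
    apt-get update
    apt-get install -y unattended-upgrades
    echo 'APT::Periodic::Update-Package-Lists "1";'  > /etc/apt/apt.conf.d/20auto-upgrades
    echo 'APT::Periodic::Unattended-Upgrade "1";'   >> /etc/apt/apt.conf.d/20auto-upgrades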

Does anyone have experience with this in production?

2 Answers

    I apply updates automatically, as you did before. I currently have staging containers and nothing in production yet, but there is no harm in applying updates to each container: perhaps some redundant network activity if you have multiple containers based on the same image, but it is harmless otherwise.
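    If you go that route, the update itself is just apt run over docker exec, something along these lines (a sketch only; the container names are placeholders and the images are assumed to be Debian/Ubuntu-based):

        # Sketch: apply pending package updates inside each running container
        # without rebuilding the image. Container names are placeholders.
        for c in app-web app-worker app-db; do
            docker exec "$c" apt-get update
            docker exec "$c" sh -c 'DEBIAN_FRONTEND=noninteractive apt-get -y upgrade'
        done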

    Rebuilding a container strikes me as unnecessarily time-consuming, and it involves a more complex process.

    WRT Time:
    The time to rebuild is added to the time needed to update, so it is ‘extra’ time in that sense. And if your container has start-up processes, those have to be repeated as well.

    WRT Complexity:
    On the one hand you are simply running updates with apt. On the other, you are basically acting as an integration server: the more steps, the more there is to go wrong.

    Also, applying the updates does not turn the container into an unreproducible ‘golden image’, since the update step is easily repeatable.

    And finally, since the kernel inside a container is never actually updated (containers share the host kernel), you never need to restart the container for these updates.

    I would rebuild the container. Containers are usually built to run a single app, so it makes little sense to update the supporting filesystem and all the included but unused and unexposed packages in place.

    Keeping the data in a separate volume lets you have a script that rebuilds the image and restarts the container. This has the advantage that any other container started from that image, or pushed through a registry to another server, already has all the fixes applied.
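    Such a rebuild-and-restart script could look roughly like this (a sketch under assumptions: the image name, container name, volume path and port are all placeholders, and the Dockerfile is assumed to run apt-get upgrade during the build):

        #!/bin/sh
        # Sketch: rebuild the image so it picks up the latest package updates,
        # then replace the running container. Data lives on a host volume so it
        # survives the swap. All names and paths are placeholders.
        set -e

        docker build --no-cache -t myorg/myapp:latest .

        # Stop and remove the old container if it exists.
        docker stop myapp || true
        docker rm myapp || true

        # Start a fresh container from the updated image, reattaching the data volume.
        docker run -d --name myapp \
            -v /srv/myapp-data:/var/lib/myapp \
            -p 8080:8080 \
            myorg/myapp:latest

        # Optionally push the refreshed image so other hosts get the same fixes.
        docker push myorg/myapp:latest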
