Docker: Best way to handle security updates of packages from apt-get inside docker containers

On my current server I use unattended-upgrades to handle security updates automatically,
but I'm wondering what people would suggest for doing this inside Docker containers.
I have several Docker containers running, one for each service of my app.
Should I set up unattended-upgrades in each of them? Or should I apply the upgrades locally and push the upgraded images? Any other ideas?

Does anyone have experience with this in production?

2 Answers

    I apply updates automatically, as you do now. I currently have Stage containers and nothing in Prod yet, but there is no harm in applying updates to each container: perhaps some redundant network activity if you have multiple containers based on the same image, but it is harmless otherwise.

    Rebuilding a container strikes me as unnecessarily time-consuming, and it involves a more complex process.

    WRT Time:
    The time to rebuild is added on top of the time needed to update, so it is 'extra' time in that sense. And if your container has start-up processes, those have to be repeated as well.

    WRT Complexity:
    On the one hand, you are simply running updates with apt. On the other, you are essentially acting as an integration server: the more steps, the more can go wrong.

    Also, applying updates in place does not undermine the 'golden image' approach, since the update step is easily repeatable.

    And finally, since the kernel is never actually updated inside a container, you never need to restart the container for these updates.
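    For reference, installing and enabling unattended-upgrades in an image works the same as on a host. A minimal Dockerfile sketch, assuming a Debian/Ubuntu base (the base image and config values below are illustrative, not prescriptive):

    ```dockerfile
    # Sketch only: assumes a Debian/Ubuntu base; adjust for your own images.
    FROM ubuntu:14.04

    RUN apt-get update && \
        apt-get install -y unattended-upgrades && \
        rm -rf /var/lib/apt/lists/*

    # Enable periodic list refresh and security upgrades, same config file
    # as on a normal host.
    RUN printf 'APT::Periodic::Update-Package-Lists "1";\nAPT::Periodic::Unattended-Upgrade "1";\n' \
        > /etc/apt/apt.conf.d/20auto-upgrades
    ```

    One caveat: unattended-upgrades is triggered by cron (or a systemd timer), so each container would also need cron running alongside its main process for the schedule to fire.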

    I would rebuild the container. Containers are usually built to run a single app, and it makes little sense to update the supporting filesystem and all the packages that are included but never used or exposed.

    Keeping the data in a separate volume lets you have a script that rebuilds the image and restarts the container. This has the advantage that loading another container from that image, or pushing it through a registry to another server, carries all the fixes already applied.
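    A minimal sketch of such a rebuild-and-restart script. The image, container, and volume names (myapp, myapp-data) are hypothetical placeholders, and the script defaults to a dry run that only prints the commands; set DOCKER=docker to execute them for real:

    ```shell
    #!/bin/sh
    # Rebuild the image with fresh packages, then replace the running container.
    # DOCKER defaults to a dry run; set DOCKER=docker to actually execute.
    set -e
    DOCKER=${DOCKER:-echo docker}
    IMAGE=myapp
    CONTAINER=myapp

    # --pull and --no-cache force fresh base layers and a fresh apt-get run,
    # so the rebuilt image picks up the latest package updates.
    $DOCKER build --pull --no-cache -t "$IMAGE" .

    # Data lives in a named volume, so the container itself stays disposable.
    $DOCKER rm -f "$CONTAINER" || true
    $DOCKER run -d --name "$CONTAINER" -v myapp-data:/data "$IMAGE"
    ```

    Because the data volume survives the `rm`/`run` cycle, the replacement container starts with all fixes applied but the same application state.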
