Populating Docker containers with sensitive information using kubernetes

I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so:

  env:
    - name: DB_PASSWORD
      value: password

which are then available inside the Docker container as the $DB_PASSWORD environment variable. All fairly easy.

But reading their documentation on Secrets, they explicitly say that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following:

    • create an OpenPGP key per user community or namespace
    • use crypt to set the configuration value into etcd (which is encrypted using the private key)
    • create a kubernetes secret containing the private key
    • associate that secret with the container, so that the private key will be accessible as a volume mount
    • when the container is launched, it will access the file inside the volume mount for the private key, and use it to decrypt the conf values returned from etcd
    • this can then be incorporated into confd, which populates local files according to a template definition (such as Apache or WordPress config files)
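The secret and volume-mount steps above might look roughly like the following manifests. All names here (the secret name, key filename, and mount path) are illustrative assumptions, not taken from the original post:

```yaml
# Secret holding the OpenPGP private keyring (names are hypothetical)
apiVersion: v1
kind: Secret
metadata:
  name: pgp-private-key
data:
  # base64-encoded private keyring material
  secring.gpg: <base64-encoded-key>
---
# Pod fragment mounting that secret so the container can read the key
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: example/app
      volumeMounts:
        - name: pgp-key
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: pgp-key
      secret:
        secretName: pgp-private-key
```

The container would then read /etc/secret-volume/secring.gpg at startup and use it to decrypt the values fetched from etcd.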

    This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext.

    So my question, and I know it’s not an entirely objective one, is whether this is completely necessary or not? Only admins will be able to view and execute the RC definitions in the first place; so if somebody’s breached the kubernetes master, you have other problems to worry about. The only benefit I see is that there’s no danger of secrets being committed to the filesystem in plaintext…

    Are there any other ways to populate Docker containers with secret information in a secure way?

2 Solutions for "Populating Docker containers with sensitive information using kubernetes"

    Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume.

    You can then use any of a variety of mechanisms to pass that config to your task; e.g. if it's environment variables, "source secret/config.sh; ./mybinary" is a simple way.
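As a runnable sketch of that pattern, the snippet below stands in a throwaway directory for the mounted secret volume (the path and variable name are illustrative; in a real pod, config.sh would come from the secret volume mount):

```shell
#!/bin/sh
# Simulate the secret volume with a temp directory (illustrative path)
mkdir -p /tmp/secret-demo
printf 'export DB_PASSWORD=s3cr3t\n' > /tmp/secret-demo/config.sh

# What the container entrypoint would do: source the config, then run the app
. /tmp/secret-demo/config.sh
echo "DB_PASSWORD is $DB_PASSWORD"   # prints "DB_PASSWORD is s3cr3t"
```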

    I don’t think you gain any extra security by storing a private key as a secret.

    I would personally opt to use a remote key manager that your software can access over an HTTPS connection. For example, Keywhiz or Vault would probably fit the bill.

    I would host the key manager on a separate, isolated subnet, and configure the firewall to only allow access from the IP addresses expected to need the keys. Both Keywhiz and Vault come with an ACL mechanism, so you may not have to do anything with firewalls at all, but it does not hurt to consider it. The key here is to host the key manager on a separate network, and possibly even with a separate hosting provider.

    Your local configuration file in the container would contain just the URL of the key service, and possibly credentials to retrieve the key from the key manager; the credentials would be useless to an attacker who didn't match the ACL/IP addresses.
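Such a local configuration file might look like this. The hostname, path, and token are hypothetical placeholders, not values from the answer:

```yaml
# Only the endpoint and a retrieval credential live in the container;
# the secrets themselves stay on the remote key manager
keymanager:
  url: https://keyservice.internal.example:8200/v1/secret/myapp
  token: REPLACE_WITH_ISSUED_TOKEN   # useless outside the allowed IP range / ACL
```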
