Docker: LetsEncrypt for development of “Https everywhere”

During development, test, and staging, we have a variety of Docker servers that come and go as virtual machines. Eventually, the Docker images produced by this process will reach a customer machine with well-defined host and domain names. Until that point, however, all the machines are on our internal network only. In the customer-deployed environment, the intent is that ALL ‘http’ communication, internal or external, be via HTTPS. Given this intent, it is highly desirable to wire all the containers up with usable/testable SSL certificates.

Many docker/letsencrypt/nginx tutorials describe how to do this at the end of the process, but not during development. Does anyone know if such a focused setup is possible? Do I need to make the inner-most Docker container (ours happens to house a Tomcat webapp) have a public domain? Or is this just completely impractical [even knowing this for certain will be a big help!]? If this usage is possible, does anyone know (or have) specifics on what needs to be done to get this working?


    In case it wasn’t clear from the above: I want to ship Docker containers, one of which will probably be a letsencrypt/nginx proxy. There are many to choose from on Docker Hub. However, I can’t figure out how to set up such a system for development/test where all the machines are on an internal network. The certificates can be ‘test’ certificates – the need is to enable HTTPS/TLS, not a green lock in Chrome! This will allow for a huge amount of testing (e.g. HTTP properly locked down, TLSv1.0 turned off to avoid certain vulnerabilities, etc.).
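For the “HTTP locked down, TLSv1.0 turned off” part, here is a minimal sketch of what the nginx proxy container’s TLS config might look like. The server name and certificate paths are placeholders, not anything from the question:

```nginx
# Redirect all plain-HTTP traffic to HTTPS ("HTTP properly locked down").
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name app.internal;            # placeholder internal hostname

    ssl_certificate     /etc/nginx/certs/server.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Allow only TLS 1.2+, so TLSv1.0/1.1 are off entirely.
    ssl_protocols TLSv1.2 TLSv1.3;
}
```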

One solution, collected from the web:

    I suggest you forget about Letsencrypt. The value proposition of that service is really focused on “getting that green lock in the browser”, which you explicitly say you don’t require.

    Also, Letsencrypt requires access to your server over the public internet to verify that the ACME challenge file is there, which means that yes, every such server needs a publicly reachable domain. So you need to own the domain and have DNS pointing to your specific server, which sounds undesirable in a testing environment.

    So in summary I think you’re trying to use the wrong tool for your needs. Try using regular self-signed certificates as described in this question. For that to work, the connecting clients must be set to not verify the certificates.
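A self-signed certificate can be generated in one command. This is a minimal sketch; the hostname `app.internal` and the file names are placeholders:

```shell
# Generate a private key and a self-signed certificate, valid for one year,
# for a hypothetical internal hostname (substitute your own).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=app.internal"

# Clients must then be told to skip verification, e.g.:
#   curl -k https://app.internal/
```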

    Or you can take it to the next level and create your own CA. For that to work, you need to make all your containers import that root cert so that they will trust it.
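A rough sketch of the own-CA approach, again with placeholder names; the `COPY`/`RUN` lines assume a Debian/Ubuntu-based image with the `ca-certificates` package:

```shell
# 1. Create a private CA (one-time, on the build host).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=Internal Test CA"

# 2. Create a key and CSR for a container's hostname, then sign with the CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=app.internal"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365

# 3. In each container image, import the CA root so it is trusted, e.g.:
#   COPY ca.crt /usr/local/share/ca-certificates/internal-ca.crt
#   RUN update-ca-certificates
```

With this in place, clients inside the containers can verify certificates normally, which is closer to how the production setup will behave.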

    Of course, once you ship the containers/images into production, don’t forget to undo these things and get real valid certificates. That’s when Letsencrypt will be useful.
