Docker: Let’s Encrypt for development of “HTTPS everywhere”

During development, test, and staging, we have a variety of Docker servers that come and go as virtual machines. Eventually, the Docker images produced by this process will reach a customer machine with well-defined host and domain names. Until that point, however, all the machines are only on our internal network. In the customer-deployed environment, the intent is that ALL HTTP communication, internal or external, goes over HTTPS. Given this intent, it is highly desirable to wire all the containers up with usable/testable SSL certificates.

One, two, three, and on and on: MANY docker/letsencrypt/nginx tutorials describe how to do this at the end of the pipeline, but not during the development process. Does anyone know if such a setup is possible? Do I need to give the innermost Docker container (ours happens to house a Tomcat webapp) a public domain? Or is this just completely impractical? (Even knowing that for certain would be a big help!) If this usage is possible, does anyone know, or have, specifics on what needs to be done to get it functional?

UPDATE

In case it wasn’t clear from the above: I want to ship Docker containers, one of which will probably be a letsencrypt/nginx proxy; there are many to choose from on Docker Hub. However, I can’t figure out how to set such a system up for development/test, where all the machines are on an internal network. The certificates can be ‘test’ certificates: the need is to enable HTTPS/TLS, not to get a green lock in Chrome! This would allow a huge amount of testing (e.g. HTTP properly locked down, TLSv1.0 turned off to avoid certain vulnerabilities, etc.).
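For concreteness, here is the kind of thing I mean: a minimal nginx server block for the proxy container, forcing HTTPS and turning off TLSv1.0/1.1 (the hostname, certificate paths, and upstream container name are all placeholders):

    # /etc/nginx/conf.d/default.conf (sketch only; names and paths are placeholders)
    server {
        listen 443 ssl;
        server_name app.internal.test;                  # placeholder internal hostname

        ssl_certificate     /etc/nginx/certs/app.crt;   # test certificate
        ssl_certificate_key /etc/nginx/certs/app.key;

        ssl_protocols TLSv1.2;                          # SSLv3/TLSv1.0/TLSv1.1 disabled

        location / {
            proxy_pass http://tomcat:8080;              # placeholder Tomcat container
        }
    }

    server {
        listen 80;
        return 301 https://$host$request_uri;           # force plain HTTP onto HTTPS
    }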

One Solution

I suggest you forget about Let’s Encrypt. The value proposition of that service is really focused on “getting that green lock in the browser”, which you explicitly say you don’t require.

Also, Let’s Encrypt needs to reach your server to verify that the ACME challenge file is there, which means that, yes, every such server needs a publicly reachable domain. So you need to own the domain and have DNS pointing at your specific server, which sounds undesirable in a testing environment.

So, in summary, I think you’re trying to use the wrong tool for your needs. Try regular self-signed certificates, as described in this question. For that to work, the connecting clients must be configured not to verify the certificates.
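As a sketch (the hostname app.internal.test and all paths are placeholders), such a certificate can be generated with openssl and mounted into a stock nginx proxy container:

    # Generate a throwaway self-signed certificate, valid for one year.
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout app.key -out app.crt -days 365 \
        -subj "/CN=app.internal.test"

    # Mount it read-only into the proxy container.
    docker run -d -p 443:443 \
        -v "$PWD/app.crt:/etc/nginx/certs/app.crt:ro" \
        -v "$PWD/app.key:/etc/nginx/certs/app.key:ro" \
        nginx

    # Clients must then skip verification, e.g.:
    curl -k https://app.internal.test/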

Or you can take it to the next level and create your own CA. For that to work, you need to make all your containers import the root certificate so that they trust it.
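A minimal sketch of that approach with openssl (again, every name here is a placeholder):

    # 1. Create the root CA once; keep the key out of the images.
    openssl genrsa -out rootCA.key 4096
    openssl req -x509 -new -key rootCA.key -days 1825 \
        -subj "/CN=Internal Test CA" -out rootCA.crt

    # 2. Issue a server certificate signed by that CA.
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -subj "/CN=app.internal.test" -out server.csr
    openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key \
        -CAcreateserial -days 365 -out server.crt

Then, in each image’s Dockerfile (Debian/Ubuntu base assumed; a Tomcat container additionally needs the certificate imported into the JVM trust store, e.g. with keytool):

    COPY rootCA.crt /usr/local/share/ca-certificates/rootCA.crt
    RUN update-ca-certificates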

Of course, once you ship the containers/images into production, don’t forget to undo these things and get real, valid certificates. That’s when Let’s Encrypt will be useful.
