Docker: LetsEncrypt for development of “Https everywhere”

During development, test, and staging, we have a variety of docker servers that come and go as virtual machines. Eventually, the docker images produced by this process will reach a customer machine with well-defined host and domain names. Until that point, however, all the machines are only on our internal network. In the customer-deployed environment the intent is that ALL HTTP communication, be it internal or external, goes over HTTPS. Given this intent, it is highly desirable to wire all the containers up with usable/testable SSL certificates.

One, two, three, and on and on — many docker/letsencrypt/nginx tutorials describe how to do this at the end of the process, but not during development. Does anyone know if such a focused setup is possible? Do I need to give the inner-most docker container (ours happens to house a Tomcat webapp) a public domain? Or is this just completely impractical [even knowing this for certain will be a big help!]? If this usage is possible, does anyone know (or have) specifics on what needs to be done to make it functional?

UPDATE

In case it wasn’t clear from the above: I want to ship Docker containers, one of which will probably be a letsencrypt/nginx proxy. There are many to choose from on Docker Hub. However, I can’t figure out how to set up such a system for development/test where all the machines are on an internal network. The certificates can be ‘test’ certificates – the need is to enable HTTPS/TLS, not a green lock in Chrome! This will allow for a huge amount of testing (i.e. HTTP properly locked down, TLSv1.0 turned off to avoid certain vulnerabilities, etc.).
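To make the goal concrete, here is a minimal sketch of the kind of nginx proxy configuration such a container would carry. The hostname `myapp.internal`, the certificate paths, and the upstream container name `tomcat` are all placeholders for your own setup:

```nginx
# Illustrative TLS-termination block for an internal dev/test host.
server {
    listen 443 ssl;
    server_name myapp.internal;

    ssl_certificate     /etc/nginx/certs/dev.crt;
    ssl_certificate_key /etc/nginx/certs/dev.key;

    # Turn off legacy protocols such as TLSv1.0 for vulnerability testing.
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://tomcat:8080;  # inner Tomcat webapp container
    }
}

server {
    listen 80;
    server_name myapp.internal;
    return 301 https://$host$request_uri;  # lock down plain HTTP
}
```

This lets you exercise the "HTTP properly locked down, old TLS disabled" tests regardless of where the certificates come from.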

One solution collected from the web for “Docker: LetsEncrypt for development of “Https everywhere””

    I suggest you forget about Letsencrypt. The value proposition of that service is really focused on “getting that green lock in the browser”, which you explicitly say you don’t require.

    Also, Letsencrypt requires access to your server to verify that the ACME challenge file is there, which means YES, you need every such server to have a publicly reachable domain. So you need to own the domain and have DNS pointing to your specific server, which sounds undesirable in a testing environment.

    So in summary I think you’re trying to use the wrong tool for your needs. Try using regular self-signed certificates as described in this question. For that to work, the connecting clients must be set to not verify the certificates.
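For example, a single openssl command is enough to produce a self-signed certificate for an internal host (the hostname `myapp.internal` and the output file names are just placeholders):

```shell
# Generate a self-signed certificate and key for an internal hostname.
# Valid for 365 days, no passphrase on the key (-nodes).
# -addext requires OpenSSL 1.1.1 or newer.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dev.key -out dev.crt -days 365 \
  -subj "/CN=myapp.internal" \
  -addext "subjectAltName=DNS:myapp.internal"
```

Mount `dev.crt`/`dev.key` into the proxy container and point nginx's `ssl_certificate`/`ssl_certificate_key` at them.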

    Or you can take it to the next level and create your own CA. For that to work, you need to make all your containers import that root cert so that they will trust it.
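A minimal sketch of that, again with openssl (all file and host names here are illustrative; the trust-store step shown in the comments assumes a Debian/Ubuntu-based image):

```shell
# 1. Root CA: private key plus self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 3650 -subj "/CN=Dev Internal CA"

# 2. Server key and certificate signing request for one internal host.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=myapp.internal"

# 3. Sign the CSR with the CA to produce the server certificate.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365

# 4. In each container that must trust it (Debian/Ubuntu base shown):
#      COPY ca.crt /usr/local/share/ca-certificates/dev-ca.crt
#      RUN update-ca-certificates
```

Every container that has imported `ca.crt` will then accept any certificate you sign with it, so HTTPS works cluster-wide without touching public DNS.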

    Of course, once you ship the containers/images into production, don’t forget to undo these things and get real valid certificates. That’s when Letsencrypt will be useful.
