Implement on-demand docker container start-up

Situation: lots of heavy docker containers that get hit periodically for a while, then stay unused for a longer period.

Wish: start the containers on demand (the way systemd starts services through socket activation) and stop them after idling for a given period. No visible downtime to the end user.

Options:

    • Kubernetes has replication controllers that can scale the number of replicas. I suppose it would be possible to keep the replica count at 0 and set it to 1 when needed, but how can one achieve that? The user guide mentions an auto-scaling control agent, but I don’t see any further information on it. Is there a pluggable, programmable agent one can use to track requests and scale based on user-defined logic?
    • I don’t see any solution in Docker Swarm, correct me if I’m wrong though.
    • Use a custom HTTP server, written in a language of your choice, that has access to the docker daemon. Before routing a request to the right place, it would check whether the container exists and ensure it is running. Downside – not a general solution: the server either cannot itself be a container, or must be one with access to the daemon.
    • Use systemd as described here. Same downsides as above, i.e. not general, and one has to handle the networking tasks oneself (such as finding the IP of the spawned container and feeding it into the server/proxy’s configuration).

    Any ideas appreciated!
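For the first option above, the 0-to-1 idea can be experimented with simply by driving `kubectl scale` from a small watcher process. A minimal sketch in Python — the Deployment name `heavy-app` is hypothetical, and `kubectl` is assumed to be on the PATH and pointed at your cluster:

```python
import subprocess

def scale_cmd(deployment: str, replicas: int, namespace: str = "default") -> list[str]:
    """Build the kubectl command that sets the replica count for a deployment."""
    return [
        "kubectl", "scale",
        f"deployment/{deployment}",
        f"--replicas={replicas}",
        "-n", namespace,
    ]

def wake(deployment: str) -> None:
    """Scale a slept deployment from 0 back to 1 replica (blocking call)."""
    subprocess.run(scale_cmd(deployment, 1), check=True)

def sleep_app(deployment: str) -> None:
    """Scale a deployment down to 0 replicas once it has been idle."""
    subprocess.run(scale_cmd(deployment, 0), check=True)

if __name__ == "__main__":
    # A request-tracking frontend would call wake() on the first hit
    # and sleep_app() after the idle timeout; here we only show the command.
    print(" ".join(scale_cmd("heavy-app", 1)))
```

The hard part the question identifies remains: something still has to hold the first request while the replica comes up, which is exactly what the proxy-daemon approaches below address.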

  • 2 Solutions collected from the web for “Implement on-demand docker container start-up”

    You could use Kubernetes’ built-in Horizontal Pod Autoscaling (HPA) to scale up from 1 instance of each container to as many as are needed to handle the load, but there’s no built-in functionality for 0-to-1 scaling on receiving a request, and I’m not aware of any widely used solution.

    1. You can use systemd to manage your docker containers. See https://developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/
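One common shape for the systemd approach is a `.socket` unit that owns the public port paired with a service that starts the container and hands traffic to `systemd-socket-proxyd`. A rough sketch — the unit names, ports, image, and binary path are hypothetical/distro-dependent, and a real setup would also wait for the container to be ready:

```
# /etc/systemd/system/myapp.socket — systemd holds the port; the first
# incoming connection activates myapp.service.
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# /etc/systemd/system/myapp.service — starts the container, then proxies
# the activated socket to the container's published port.
[Unit]
Requires=myapp.socket

[Service]
ExecStartPre=/usr/bin/docker run -d --rm --name myapp -p 127.0.0.1:8080:80 myimage
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:8080
ExecStopPost=/usr/bin/docker stop myapp
```

Idle shutdown can then be layered on top (newer systemd versions add an `--exit-idle-time=` option to `systemd-socket-proxyd` for exactly this).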

    2. Some time ago I talked to an ops guy at pantheon.io about how they do this sort of thing with docker. I guess it would have been before Kubernetes even came out. Pantheon does Drupal hosting. The way they have things set up, every server they run for clients is containerised, but as you describe, the container goes away when it’s not needed. The only resource reserved in the meantime, other than disk storage, is a socket number on the host.

      They have a fairly simple daemon which listens on the sockets of all inactive servers. When it receives a request, the daemon stops listening for more incoming connections on that socket, starts the required container, and proxies that one request to the new container. Subsequent connections go directly to the container until it’s been idle for a period, at which point the listener daemon takes over the port again. That’s about as much detail as I know about what they did, but you get the idea.
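The listener daemon described above can be sketched in a few dozen lines. This is a single-shot toy version, assuming a stopped container named `heavy-app` that, once started, publishes the same port the daemon was listening on (the name and port are hypothetical); a real daemon would also monitor idleness, stop the container, and re-take the port:

```python
import socket
import subprocess
import threading
import time

def start_cmd(name: str) -> list[str]:
    """docker CLI command that starts an existing, stopped container."""
    return ["docker", "start", name]

def connect_with_retry(addr: tuple[str, int], attempts: int = 30) -> socket.socket:
    """Keep trying until the freshly started container accepts connections."""
    for _ in range(attempts):
        try:
            return socket.create_connection(addr, timeout=2)
        except OSError:
            time.sleep(1)
    raise RuntimeError(f"backend {addr} never came up")

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes its end."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # the other direction closed first; nothing left to do

def serve_once(port: int, container: str) -> None:
    """Listen on `port`; on the first connection, release the port, start the
    container (which publishes the same port), and proxy that one request to
    it. Later connections reach the container directly."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    client, _ = srv.accept()
    srv.close()  # stop listening so the container can bind the port
    subprocess.run(start_cmd(container), check=True)
    backend = connect_with_retry(("127.0.0.1", port))
    t = threading.Thread(target=pump, args=(client, backend), daemon=True)
    t.start()
    pump(backend, client)
    t.join()
    client.close()
    backend.close()

if __name__ == "__main__":
    # serve_once(8080, "heavy-app")  # port and container name are hypothetical
    print(" ".join(start_cmd("heavy-app")))
```

Because the daemon closes its listening socket before starting the container, the container can publish the very port the client connected on, matching the hand-off described above.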

    3. I imagine that something like the daemon that Pantheon implemented could be used to send commands to Kubernetes rather than straight to the Docker daemon. Maybe a systemd-based approach to dynamically starting containers could also communicate with Kubernetes as required. Either of these might allow you to fire up pods, not just containers.
