Container Orchestration for provisioning single containers based on user action

I’m pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives a user their own container when they run a command. What is the best tool, and the best way, to accomplish this?

I plan on having a pool of CoreOS servers to run the containers on and I’m imagining the scheduler to have an API that I can just call to create the container.

Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision multiple clusters of containers all doing the same thing. I want to be able to create a single container based on a user’s command and then communicate with an API on that container. Anyone have experience with this?

2 Solutions for “Container Orchestration for provisioning single containers based on user action”

I’d look at Kubernetes, using the Jobs API (for short-lived containers) or Deployments (for long-lived ones).

I’m not sure exactly what you mean by command, but I’ll assume it’s some sort of dev environment triggered by a CLI, say make-dev.

1. The user runs make-dev, which sends a webhook to your app sitting in front of the Jobs API; ideally that app does rate limiting and/or auth.
2. Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service.
3. Kubernetes schedules the workload across your fleet of machines.
4. Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the Ingress rule), e.g. devclusters.com/foobar123.
5. The user now accesses their service at that address. Internally, Kubernetes uses the Ingress and Service to route requests to your pod.
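The objects from steps 2–4 can be sketched as plain manifests your app would submit. This is a rough sketch, not a production setup: the identifier foobar123 and host devclusters.com follow the example above, while the image name, ports, and labels are all placeholders I made up.

```python
def build_objects(user_id: str) -> dict:
    """Build the Job, Service, and Ingress manifests for one user's environment."""
    labels = {"app": "dev-env", "user": user_id}
    job = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"dev-{user_id}", "labels": labels},
        "spec": {
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "dev",
                        # Shared base image, so environments start fast.
                        "image": "example/dev-env:latest",
                        "ports": [{"containerPort": 8080}],
                    }],
                },
            },
        },
    }
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": f"dev-{user_id}"},
        "spec": {
            # The selector must match the pod labels above.
            "selector": labels,
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }
    ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": f"dev-{user_id}"},
        "spec": {
            "rules": [{
                "host": "devclusters.com",
                "http": {"paths": [{
                    # The unique identifier becomes the routing path.
                    "path": f"/{user_id}",
                    "pathType": "Prefix",
                    "backend": {"service": {
                        "name": f"dev-{user_id}",
                        "port": {"number": 80},
                    }},
                }]},
            }],
        },
    }
    return {"job": job, "service": service, "ingress": ingress}

objs = build_objects("foobar123")
print(objs["ingress"]["spec"]["rules"][0]["http"]["paths"][0]["path"])
```

Your app would POST these to the cluster and then poll the Job’s pod status before handing the address back to the user.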

    This should scale well, and if your different environments use the same base container image, they should start really fast.

Plug: if you want an easy CoreOS + Kubernetes cluster, plus a UI, try https://coreos.com/tectonic

I plan on having a pool of CoreOS servers to run the containers on and I’m imagining the scheduler to have an API that I can just call to create the container

Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.

The command-line utility kubectl interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help communicate with the cluster.

If you later want a higher-level abstraction to create pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).
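To illustrate the difference, here is a sketch of a Deployment wrapping the same kind of pod template: instead of creating a pod directly, you declare how many replicas you want and the controller keeps that many running, replacing any that die. Names and the image are placeholders.

```python
# A Deployment manifest: the controller reconciles actual pods against `replicas`.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "dev-env"},
    "spec": {
        "replicas": 1,
        # The selector tells the controller which pods it owns; it must
        # match the labels on the pod template below.
        "selector": {"matchLabels": {"app": "dev-env"}},
        "template": {
            "metadata": {"labels": {"app": "dev-env"}},
            "spec": {
                "containers": [{
                    "name": "dev",
                    "image": "example/dev-env:latest",
                }],
            },
        },
    },
}
print(deployment["kind"], deployment["spec"]["replicas"])
```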
