Container Orchestration for provisioning single containers based on user action

I’m pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives a user a container of their own when they run a command. What’s the best tool and the best way to accomplish this?

I plan on having a pool of CoreOS servers to run the containers on and I’m imagining the scheduler to have an API that I can just call to create the container.

Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. covers provisioning clusters of containers that all do the same thing. I want to create a single container in response to a user’s command and then be able to communicate with an API on that container. Does anyone have experience with this?

2 Answers

    I’d look at Kubernetes with the Jobs API (for short-lived containers) or Deployments (for long-lived ones).

    I’m not sure exactly what you mean by command, but I’ll assume it’s some sort of dev environment triggered by a CLI command, say make-dev.

    1. The user runs make-dev, which sends a webhook to your app sitting in front of the Jobs API; that app should ideally do rate-limiting and/or auth.
    2. Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service.
    3. Kubernetes schedules the pod across your fleet of machines.
    4. Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the Ingress rule), e.g. devclusters.com/foobar123.
    5. The user now accesses their service at that address. Internally, Kubernetes uses the Ingress and Service to route requests to their pod.

    This should scale well, and if your different environments use the same base container image, they should start really fast.
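The steps above can be sketched as the three manifest payloads your app would submit to the Kubernetes API per user request. This is a sketch, not a definitive implementation: the names (make-dev environments, dev-env:latest, devclusters.com, foobar123) are illustrative placeholders from the example, and the API versions shown are the current ones.

```python
# Sketch: build the Job + Service + Ingress payloads for one user's
# environment. All names and the image are illustrative placeholders.
import json


def manifests_for(user_id):
    """Return (job, service, ingress) manifest dicts for a single user."""
    name = "dev-" + user_id
    labels = {"app": "dev-env", "user": user_id}

    job = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "env",
                        "image": "dev-env:latest",  # shared base image => fast starts
                        "ports": [{"containerPort": 8080}],
                    }],
                },
            },
        },
    }

    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": labels,  # routes traffic to the Job's pod
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }

    ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {
            "rules": [{
                "host": "devclusters.com",
                "http": {"paths": [{
                    "path": "/" + user_id,  # e.g. devclusters.com/foobar123
                    "pathType": "Prefix",
                    "backend": {"service": {"name": name,
                                            "port": {"number": 80}}},
                }]},
            }],
        },
    }
    return job, service, ingress


job, svc, ing = manifests_for("foobar123")
print(json.dumps(ing["spec"]["rules"][0]["http"]["paths"][0]["path"]))
```

Labeling every object with the user's identifier is what ties the pieces together: the Service selector matches the Job's pod labels, and the Ingress path carries the unique identifier your app hands back to the user.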

    Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try https://coreos.com/tectonic

    I plan on having a pool of CoreOS servers to run the containers on and I’m imagining the scheduler to have an API that I can just call to create the container

    Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.
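As a minimal sketch of that REST API, creating a pod is an HTTP POST of a JSON manifest to the API server. The server address, bearer token, and image below are all placeholders; the request is built but deliberately not sent.

```python
# Sketch: create a pod by POSTing its manifest to the Kubernetes REST
# API. API server address, token, and image are placeholders.
import json
import urllib.request

API_SERVER = "https://kube-apiserver:6443"  # placeholder address
TOKEN = "REDACTED"                          # placeholder service-account token

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "user-pod-123"},
    "spec": {
        "containers": [{"name": "env", "image": "dev-env:latest"}],
    },
}

req = urllib.request.Request(
    API_SERVER + "/api/v1/namespaces/default/pods",
    data=json.dumps(pod).encode(),
    headers={"Authorization": "Bearer " + TOKEN,
             "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually submit it; omitted here
# since there is no cluster behind the placeholder address.
print(req.get_method(), req.full_url)
```

In practice you would use one of the client libraries rather than raw HTTP, but this is all they do under the hood.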

    The command-line utility kubectl interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help you communicate with the cluster.

    If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).
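For instance, wrapping the container in a Deployment gives you a controller that keeps the pod running and handles rolling updates for you. A sketch with assumed names:

```python
# Sketch of a Deployment manifest: the Deployment controller keeps the
# requested number of replicas running and performs rolling updates.
# All names and the image are illustrative placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "dev-foobar123"},
    "spec": {
        "replicas": 1,  # one environment per user
        "selector": {"matchLabels": {"user": "foobar123"}},
        "template": {
            "metadata": {"labels": {"user": "foobar123"}},
            "spec": {"containers": [{"name": "env",
                                     "image": "dev-env:latest"}]},
        },
    },
}
print(deployment["kind"], deployment["spec"]["replicas"])
```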
