Container Orchestration for provisioning single containers based on user action

I’m pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives a user a container when they run a command. What is the best tool, and the best way, to accomplish this?

I plan on having a pool of CoreOS servers to run the containers on, and I’m imagining a scheduler with an API that I can call to create the container.

Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision multiple clusters of containers all doing the same thing. I want to be able to create a single container based on a user’s command and then communicate with an API on that container. Does anyone have experience with this?

2 Solutions

I’d look at Kubernetes, using the Jobs API for short-lived containers or Deployments for long-lived ones.

I’m not sure exactly what you mean by a command, but I’ll assume it’s some sort of dev environment triggered by a CLI, e.g. make-dev.

1. The user triggers make-dev, which sends a webhook to your app sitting in front of the Jobs API; that app should ideally do rate-limiting and/or auth.
2. Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service.
3. Kubernetes schedules it across your fleet of machines.
4. Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the Ingress rule), like devclusters.com/foobar123.
5. The user now accesses their service at that address. Internally, Kubernetes uses the Ingress and Service to route requests to your pod.
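A rough sketch of step 2 in Python: the function below builds the Job, Service, and Ingress bodies for one user environment as plain dicts, which you would then POST to the API server or hand to a client library. The function name, the labels, the devclusters.com host, and the image are illustrative assumptions, not anything from the question.

```python
def make_env_objects(env_id: str, image: str = "example.com/dev-env:latest"):
    """Build the Job, Service, and Ingress bodies for one user environment."""
    labels = {"app": "dev-env", "env-id": env_id}
    job = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"dev-{env_id}", "labels": labels},
        "spec": {
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "dev-env",
                        "image": image,
                        "ports": [{"containerPort": 8080}],
                    }],
                },
            },
        },
    }
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": f"dev-{env_id}"},
        "spec": {
            # The Service finds the Job's pod through its labels.
            "selector": labels,
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }
    ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": f"dev-{env_id}"},
        "spec": {
            "rules": [{
                "host": "devclusters.com",
                "http": {"paths": [{
                    # Unique per-user path, e.g. devclusters.com/foobar123
                    "path": f"/{env_id}",
                    "pathType": "Prefix",
                    "backend": {"service": {
                        "name": f"dev-{env_id}",
                        "port": {"number": 80},
                    }},
                }]},
            }],
        },
    }
    return job, service, ingress
```

The shared labels are what tie the three objects together: the Job stamps them on its pod, and the Service selects on them, so the Ingress path ends up routed to whichever node the scheduler placed the pod on.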

    This should scale well, and if your different environments use the same base container image, they should start really fast.

    Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try https://coreos.com/tectonic

I plan on having a pool of CoreOS servers to run the containers on and I’m imagining the scheduler to have an API that I can just call to create the container

Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, containing one or more containers) within your cluster.
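As a rough illustration, creating a pod is a single POST of a JSON manifest to the API server. The server address, token, and image below are placeholders, so the actual request is left commented out; only the Python standard library is used.

```python
import json
import urllib.request

# Placeholders: in a real cluster, read these from your kubeconfig
# or a service-account token.
API_SERVER = "https://kubernetes.example.com:6443"
TOKEN = "REPLACE_ME"

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "user-env-foobar123"},
    "spec": {
        "containers": [{
            "name": "dev-env",
            "image": "nginx:alpine",  # stand-in image
        }],
    },
}

req = urllib.request.Request(
    f"{API_SERVER}/api/v1/namespaces/default/pods",
    data=json.dumps(pod).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a real cluster
```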

The command-line utility kubectl interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help communicate with the cluster.

If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).
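For example, wrapping the same container in a Deployment body gets you automatic restarts and rolling updates. This is a sketch with illustrative names and a stand-in image:

```python
def deployment_for(env_id: str, image: str) -> dict:
    """Build a Deployment body for one long-lived user environment."""
    labels = {"app": "dev-env", "env-id": env_id}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"dev-{env_id}"},
        "spec": {
            "replicas": 1,  # one pod per user environment
            # The controller manages any pod matching these labels...
            "selector": {"matchLabels": labels},
            # ...and this template is what it creates and replaces.
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "dev-env", "image": image}]},
            },
        },
    }
```

The difference from a bare pod is that if the node or container dies, the Deployment’s controller notices the replica count dropped below 1 and recreates the pod somewhere else in the fleet.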
