Container Orchestration for provisioning single containers based on user action

I’m pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives the user a container when they run a command. What is the best tool, and the best way, to accomplish this?

I plan on having a pool of CoreOS servers to run the containers on, and I’m imagining the scheduler will have an API that I can just call to create the container.

Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision multiple clusters of containers that all do the same thing. I want to be able to create a single container based on a user’s command and then be able to communicate with an API on that container. Does anyone have experience with this?

  • 2 Solutions for “Container Orchestration for provisioning single containers based on user action”

    I’d look at Kubernetes with the Jobs API (for short-lived containers) or Deployments (for long-lived ones).

    I’m not sure exactly what you mean by “command”, but I’ll assume it’s some sort of dev environment triggered by a CLI, e.g. make-dev.

    1. The user triggers make-dev, which sends a webhook to your app sitting in front of the Jobs API; the app should ideally do rate-limiting and/or auth.
    2. Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service.
    3. Kubernetes schedules it across your fleet of machines.
    4. Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the Ingress rule).
    5. The user now accesses their service at that address. Internally, Kubernetes uses the Ingress and Service to route requests to the pod.
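    Step 2 above can be sketched as a small helper in your app that builds a Job manifest per user. Everything here (the build_dev_job function, the image name, and the label scheme) is hypothetical; it only shows the shape of the JSON your app would POST to the batch/v1 Jobs endpoint.

    ```python
    import json

    # Hypothetical helper: build a Job manifest for one user's dev environment.
    # The image name and labels are placeholders, not a real registry or scheme.
    def build_dev_job(user_id: str) -> dict:
        name = f"dev-env-{user_id}"
        labels = {"app": "dev-env", "user": user_id}
        return {
            "apiVersion": "batch/v1",
            "kind": "Job",
            "metadata": {"name": name, "labels": labels},
            "spec": {
                "template": {
                    "metadata": {"labels": labels},
                    "spec": {
                        "containers": [{
                            "name": "dev-env",
                            "image": "example.com/dev-env:latest",  # placeholder image
                        }],
                        # Jobs are for run-to-completion workloads, so pods
                        # are not restarted by the kubelet on exit.
                        "restartPolicy": "Never",
                    },
                },
            },
        }

    # Your app would serialize this and POST it to the API server, e.g.:
    #   POST /apis/batch/v1/namespaces/default/jobs
    job = build_dev_job("alice")
    print(json.dumps(job["metadata"], indent=2))
    ```

    The unique identifier from step 4 would come from the same labels, so the Ingress rule and Service selector can target exactly this user’s pod.
    
    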

    This should scale well, and if your different environments use the same base container image, they should start really fast.

    Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try

    “I plan on having a pool of CoreOS servers to run the containers on and I’m imagining the scheduler to have an API that I can just call to create the container”

    Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.
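    A minimal sketch of creating a pod through that REST API, using only the standard library. The API server address, token, and image are placeholders; the endpoint path is the core v1 pod-creation route. The request is built but deliberately not sent, since there is no real cluster here.

    ```python
    import json
    import urllib.request

    # Minimal pod manifest: a single container in the pod.
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "user-pod-alice"},
        "spec": {
            "containers": [{
                "name": "main",
                "image": "nginx:alpine",  # placeholder image
            }],
        },
    }

    # Pods are created by POSTing the manifest to the core v1 endpoint.
    # Server address and bearer token below are placeholders.
    req = urllib.request.Request(
        "https://kubernetes.example.com/api/v1/namespaces/default/pods",
        data=json.dumps(pod).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <token>",  # placeholder credential
        },
        method="POST",
    )
    # urllib.request.urlopen(req) would actually send it.
    print(req.get_method(), req.full_url)
    ```
    
    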

    The command-line utility kubectl interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help communicate with the cluster.

    If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).
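    For a long-lived per-user container, a Deployment is the usual controller: it wraps a pod template and keeps the desired replica count running, replacing pods that die. A sketch of such a manifest, with placeholder names and image:

    ```python
    # Hypothetical Deployment manifest for one user's long-lived environment.
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "dev-env-alice"},
        "spec": {
            "replicas": 1,  # one long-lived container per user
            # The selector must match the pod template's labels so the
            # controller knows which pods it owns.
            "selector": {"matchLabels": {"user": "alice"}},
            "template": {
                "metadata": {"labels": {"user": "alice"}},
                "spec": {
                    "containers": [{
                        "name": "dev-env",
                        "image": "example.com/dev-env:latest",  # placeholder
                    }],
                },
            },
        },
    }
    # Created via: POST /apis/apps/v1/namespaces/default/deployments
    ```

    If the user’s pod crashes, the Deployment recreates it, which a bare pod created directly through the API would not get for free.
    
    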
