What does Kubernetes actually do? [closed]

Kubernetes is billed as a container cluster “scheduler/orchestrator”, but I have no idea what that means. After reading the Kubernetes site and the (vague) GitHub wiki, the best I can tell is that it somehow figures out which VMs are available and capable of running your Docker container, and then deploys them there. But that’s just my guess, and I haven’t seen anything concrete in the documentation to support it.

So what is Kubernetes, exactly, and what are some specific problems that it solves?

3 Answers

    The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster.

    Basically, it allows you to not worry about what specific machine in your datacenter each application runs on. Additionally it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services so that each layer in your application is decoupled from other layers so that you can scale/update/maintain them independently.

    While it is possible to do many of these things at the application layer, such solutions tend to be one-off and brittle. It’s much better to have a separation of concerns: an orchestration system worries about how to run your application, and you worry about the code that makes up your application.

    As its GitHub page puts it:

    Kubernetes is an open source system for managing containerized
    applications across multiple hosts, providing basic mechanisms for
    deployment, maintenance, and scaling of applications.

    Kubernetes is:

    lean: lightweight, simple, accessible
    portable: public, private, hybrid, multi cloud
    extensible: modular, pluggable, hookable, composable
    self-healing: auto-placement, auto-restart, auto-replication
    

    Kubernetes builds upon a decade and a half of experience at Google
    running production workloads at scale, combined with best-of-breed
    ideas and practices from the community.

    For me, Kubernetes is a container orchestration tool from Google. Its design leaves room for compatibility with any container engine, although for now it is limited to Docker. Its architecture is built around a few key concepts, which the documentation describes as follows:

    Clusters are the compute resources on top of which your containers are
    built. Kubernetes can run anywhere! See the Getting Started Guides for
    instructions for a variety of services.

    Pods are a colocated group of Docker containers with shared volumes.
    They’re the smallest deployable units that can be created, scheduled,
    and managed with Kubernetes. Pods can be created individually, but
    it’s recommended that you use a replication controller even if
    creating a single pod. More about pods.

    Replication controllers manage the lifecycle of pods. They ensure that
    a specified number of pods are running at any given time, by creating
    or killing pods as required. More about replication controllers.
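    A replication controller wraps a pod template and a replica count; a sketch, reusing the hypothetical `app: web` label from above, might look like this:

    ```yaml
    # ReplicationController: keeps 3 copies of the pod template alive,
    # creating or killing pods as needed to match `replicas`.
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc
    spec:
      replicas: 3
      selector:
        app: web        # pods counted toward the replica total
      template:          # template used to create replacement pods
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: nginx
            image: nginx:1.25
    ```
    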

    Services provide a single, stable name and address for a set of pods.
    They act as basic load balancers. More about services.
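    A service selects pods by label and gives them one stable address; a minimal sketch (again assuming the illustrative `app: web` label):

    ```yaml
    # Service: one stable name/IP that load-balances across all pods
    # currently carrying the label app=web.
    apiVersion: v1
    kind: Service
    metadata:
      name: web-svc
    spec:
      selector:
        app: web
      ports:
      - port: 80         # port the service exposes
        targetPort: 80   # port on the selected pods
    ```

    Because the service routes by label rather than by pod name, pods can come and go (e.g. be replaced by a replication controller) without clients noticing.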

    Labels are used to organize and select groups of objects based on
    key:value pairs. More about labels.

    So, you have a group of machines forming a cluster where your containers run. You can also define a group of containers that provide a service, much as you would with other tools like fig (e.g. a webapp pod could be a Rails server plus a Postgres database). There are also tools to keep a given number of containers/pods of a service running at the same time, a key-value store, a kind of built-in load balancer, and so on.

    If you know something about CoreOS, it’s a very similar solution, but from Google. Kubernetes also has good integration with Google Compute Engine.

    Kubernetes provides much of the same functionality as Infrastructure as a Service APIs, but aimed at dynamically scheduled containers rather than virtual machines, and as Platform as a Service systems, but with greater flexibility, including:

    • mounting storage systems,
    • distributing secrets,
    • application health checking,
    • replicating application instances,
    • horizontal auto-scaling,
    • naming and discovery,
    • load balancing,
    • rolling updates,
    • resource monitoring,
    • log access and ingestion,
    • support for introspection and debugging, and
    • identity and authorization.

    If you already use other mechanisms for service discovery, secret distribution, load balancing, monitoring, etc., of course you can continue to use them, but we aim to make it easy to transition to Kubernetes from existing IaaS and PaaS systems by providing this functionality.

    https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#why-do-i-need-kubernetes-and-what-can-it-do
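    To make one of the capabilities above concrete, application health checking is expressed as a probe in the pod spec; a sketch, with illustrative names and an assumed `/healthz` endpoint:

    ```yaml
    # Pod whose container is restarted automatically if its HTTP
    # liveness probe starts failing.
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-checked
    spec:
      containers:
      - name: app
        image: nginx:1.25
        livenessProbe:
          httpGet:
            path: /healthz   # assumed health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
    ```
    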

    Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.