Kubernetes Liveness Probe Logging

We’re using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging.

We also use LivenessProbes on our containers to make sure they operate as expected.
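For context, a liveness probe is declared per container in the pod spec. A minimal sketch (pod name, image, and health-check script are hypothetical) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                  # hypothetical pod name
spec:
  containers:
  - name: my-app
    image: my-app:latest        # hypothetical image
    livenessProbe:
      exec:
        command:                # anything this command prints is the probe's output
        - /bin/sh
        - -c
        - /opt/healthcheck.sh   # hypothetical script that echoes status lines
      initialDelaySeconds: 15
      periodSeconds: 10
```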

Our problem is that the lines we send to STDOUT from the LivenessProbe do not appear to reach Elasticsearch.

Is there a way to make fluentd ship LivenessProbe output the way it does for the regular containers in a pod?

One Solution

    The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (source code, if you’re interested). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API.
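For example (assuming kubectl is configured against your cluster, and a hypothetical pod name), failed-probe events can be inspected with:

```
# Events for a single pod; "Unhealthy" events carry the failed probe's output
kubectl describe pod my-app

# Or list recent events across the namespace
kubectl get events
```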

    The output of successful probes isn’t recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it’ll be in the Kubelet’s logs.
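How you read those logs depends on how the Kubelet is run on the node; two common cases (both assumptions about your setup) are:

```
# If the Kubelet runs as a systemd unit named "kubelet"
journalctl -u kubelet

# Non-systemd 1.1.x setups often write directly to a log file instead
tail -f /var/log/kubelet.log
```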

    Feel free to file a feature request as a GitHub issue if you have ideas for what you’d like to be done with the output 🙂
