Kubernetes can't start due to too many open files in system

I am trying to create a number of pods, services, and deployments with Kubernetes, but I keep hitting the following error when I run the kubectl describe command.

for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container bbdb58770a848733bf7130b1b230d809fcec3062b2b16748c5e4a8b12cc0533a: [8] System error: too many open files in system\n"

I have already terminated all pods and tried restarting the machine, but it doesn't solve the issue. I am not a Linux expert, so I am just wondering how I should find all the open files and close them?

Answer

    You can confirm which process is hogging file descriptors by running:

    lsof | awk '{print $2}' | sort | uniq -c | sort -n
    

    That will give you a sorted list of open file descriptor counts, one line per process, with the PID in the second column. You can then look up each process with:

    ps -p <pid>
    
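    Note that plain lsof output also lists memory-mapped files and duplicates entries per thread, so the counts above are approximate. On Linux you can get a more direct per-process count from /proc, where /proc/<pid>/fd holds exactly one entry per open descriptor. A rough sketch (run as root to see descriptors owned by other users):

    ```shell
    # Count open descriptors per process straight from /proc (Linux only).
    # /proc/<pid>/fd contains one symlink per open file descriptor.
    for d in /proc/[0-9]*; do
      pid=${d#/proc/}
      n=$(ls "$d/fd" 2>/dev/null | wc -l)
      [ "$n" -gt 0 ] && printf '%7d %s\n' "$n" "$pid"
    done | sort -rn | head -5
    ```

    The output is the five processes with the most open descriptors, one "count pid" pair per line, which you can then feed to ps -p as above.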

    If the main hogs are docker/kubernetes, then I would recommend following along on the issue that caesarxuchao referenced.
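    As a stopgap while you track down the leak, you can also check whether the kernel-wide limit itself is exhausted and raise it. "Too many open files in system" refers to the global fs.file-max cap, not a per-process ulimit. A sketch (the value 200000 is only an example; pick one suited to your workload):

    ```shell
    # Inspect the system-wide file handle situation.
    cat /proc/sys/fs/file-nr    # allocated, unused, maximum
    sysctl fs.file-max          # current kernel-wide cap

    # Raise the cap for the running kernel (requires root):
    sudo sysctl -w fs.file-max=200000

    # Persist the change across reboots:
    echo 'fs.file-max = 200000' | sudo tee -a /etc/sysctl.conf
    ```

    Raising the limit only buys time if a container is leaking descriptors, so it is worth fixing the underlying hog as well.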
