Kubernetes can't start due to too many open files in system

I am trying to create a bunch of pods, services, and deployments using Kubernetes, but I keep hitting the following error when I run the kubectl describe command.

for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container bbdb58770a848733bf7130b1b230d809fcec3062b2b16748c5e4a8b12cc0533a: [8] System error: too many open files in system\n"

I have already terminated all the pods and tried restarting the machine, but it doesn't solve the issue. I am not a Linux expert, so I am just wondering: how do I find all the open files and close them?

One solution collected from the web for "Kubernetes can't start due to too many open files in system":

    You can confirm which process is hogging file descriptors by running:

    lsof | awk '{print $2}' | sort | uniq -c | sort -n

    That will give you a sorted list of open-FD counts along with the PID of each process. You can then look up each process with:

    ps -p <pid>
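    The two steps above can also be combined into one loop that reads /proc directly, which avoids depending on lsof being installed. This is a minimal sketch for Linux; the `head -20` cutoff is an arbitrary choice:

    ```shell
    # List the top 20 processes by open file descriptor count.
    # Each /proc/<pid>/fd directory holds one entry per open FD,
    # and /proc/<pid>/comm holds the process name.
    for pid in /proc/[0-9]*; do
      n=$(ls "$pid/fd" 2>/dev/null | wc -l)
      [ "$n" -gt 0 ] && printf '%s %s %s\n' \
        "$n" "${pid#/proc/}" "$(cat "$pid/comm" 2>/dev/null)"
    done | sort -rn | head -20
    ```

    Entries for processes you don't own may be skipped (reading their fd directory needs root), so run with sudo for a complete picture.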

    If the main hogs are docker/kubernetes, then I would recommend following along on the issue that caesarxuchao referenced.
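    It may also help to check the kernel-wide limit itself: "too many open files in system" refers to the system-wide fs.file-max ceiling, not the per-process ulimit. A quick way to inspect it (the value in the comments is an illustrative example, not a recommendation):

    ```shell
    # /proc/sys/fs/file-nr prints three numbers:
    # allocated file handles, unused handles, and the system-wide maximum.
    cat /proc/sys/fs/file-nr

    # If allocated is near the maximum, the limit can be raised, e.g.:
    #   sudo sysctl -w fs.file-max=2097152
    # and persisted across reboots by adding to /etc/sysctl.conf:
    #   fs.file-max = 2097152
    ```

    Raising the limit only buys headroom, though; if a container is leaking descriptors, the count will eventually climb back to whatever ceiling you set.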
