Pod doesn't run on another node after its node goes down

I created a mysql pod, which was running on Node3 (172.24.18.125). But after I stopped all Kubernetes services on Node3, the pod disappeared after a while instead of being rescheduled to Node1 or Node2. Why doesn't the Kubernetes master reschedule the pod onto another node? Below are the YAML files for the pod, the service, and the replication controller.

[root@localhost pods]# kubectl get nodes
NAME            LABELS                                                STATUS
127.0.0.1       kubernetes.io/hostname=127.0.0.1                      Ready
172.24.18.123   database=mysql,kubernetes.io/hostname=172.24.18.123   Ready
172.24.18.124   kubernetes.io/hostname=172.24.18.124                  Ready
172.24.18.125   kubernetes.io/hostname=172.24.18.125                  Ready

YAML file to create mysql pod:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: mysql
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: welcome
      ports:
        - containerPort: 3306
          name: mysql

mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  publicIPs:
    - 172.24.18.120
  ports:
    # the port that this service should serve on
    - port: 3306
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: mysql

replicationcontroller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-controller
spec:
  replicas: 2
  selector:
    name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306

Answer:

Pods aren't rescheduled to different nodes if they were created directly as standalone pods; a pod only ever runs on the single node it was scheduled to.
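
As a quick check, a standalone pod's node binding shows up in kubectl describe, and once that node is gone nothing brings the pod back; you have to recreate it yourself. A minimal sketch (mysql-pod.yaml is an assumed file name for the pod manifest above):

kubectl describe pod mysql | grep -i node   # the single node the pod is bound to
kubectl create -f mysql-pod.yaml            # manual re-creation after the node is lost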

Replication controllers take care of this detail for you by detecting when the number of running pods changes (e.g. due to a failed node) and creating new replicas of the pod when needed.
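
For example, with the files from the question you could delete the hand-created pod and let the replication controller own the mysql pods instead. A sketch, assuming nothing beyond the manifests shown above (note the RC gives its pods generated names such as mysql-controller-xxxxx rather than plain mysql):

kubectl delete pod mysql                        # remove the standalone pod
kubectl create -f replicationcontroller.yaml    # let the RC keep 2 replicas running
kubectl get rc mysql-controller                 # desired vs. current replica count
kubectl get pods -l name=mysql                  # pods created from the RC template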

By default, Kubernetes considers the pods on a failed node to be dead once the node hasn't reported to the master for 5 minutes. After that point, if your pod was part of a replication controller, the controller should create a new replica that will be scheduled on a different node.
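
Those timings come from the controller manager on the master. The invocation below is only a sketch; the flag names and defaults (roughly 40s to mark a silent node NotReady, then about 5m before its pods are evicted) may vary between Kubernetes releases, so verify them with kube-controller-manager --help on your setup:

# Assumed flags -- check `kube-controller-manager --help` before relying on them:
#   --node-monitor-grace-period : how long a node may go unreported before NotReady
#   --pod-eviction-timeout      : additional wait before its pods are deleted (and,
#                                 if RC-managed, replaced elsewhere)
kube-controller-manager \
  --master=127.0.0.1:8080 \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s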
