Docker microservice apps restart over and over again in Kubernetes

I am trying to run microservice applications on Kubernetes. I have RabbitMQ, Elasticsearch, and a Eureka discovery service running on Kubernetes. Besides those, I have three microservice applications. When I run two of them, everything is fine; however, when I run the third one, they all begin restarting over and over again for no apparent reason.

One of my config files:

    apiVersion: v1
    kind: Service
    metadata:
      name: hrm
      labels:
        app: suite
    spec:
      type: NodePort
      ports:
        - port: 8086
          nodePort: 30001
      selector:
        app: suite
        tier: hrm-core
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: hrm
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: suite
            tier: hrm-core
        spec:
          containers:
          - image: privaterepo/hrm-core
            name: hrm
            ports:
            - containerPort: 8086
          imagePullSecrets:
          - name: regsecret
    

    Result from kubectl describe pod hrm:

    State:          Running
      Started:      Mon, 12 Jun 2017 12:08:28 +0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 12 Jun 2017 12:07:05 +0300
    Ready:          True
    Restart Count:  5
      18m       18m     1   kubelet, minikube               Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hrm" with CrashLoopBackOff: "Back-off 10s restarting failed container=hrm pod=hrm-3288407936-cwvgz_default(915fb55c-4f4a-11e7-9240-080027ccf1c3)"
    

    kubectl get pods:

    NAME                        READY     STATUS    RESTARTS   AGE
    discserv-189146465-s599x    1/1       Running   0          2d
    esearch-3913228203-9sm72    1/1       Running   0          2d
    hrm-3288407936-cwvgz        1/1       Running   6          46m
    parabot-1262887100-6098j    1/1       Running   9          2d
    rabbitmq-279796448-9qls3    1/1       Running   0          2d
    suite-ui-1725964700-clvbd   1/1       Running   3          2d
    

    kubectl version:

    Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:43:50Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
    

    minikube version:

    minikube version: v0.18.0
    

    When I look at the pod logs, there is no error; each container seems to start without any problem. What could be the problem here?

    Edit: output of kubectl get events:

    19m        19m         1         discserv-189146465-lk3sm    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
    19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Pulling                   kubelet, minikube       pulling image "private repo"
    19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
    19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Created                   kubelet, minikube       Created container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
    19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Started                   kubelet, minikube       Started container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
    19m        19m         1         esearch-3913228203-6l3t7    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
    19m        19m         1         esearch-3913228203-6l3t7    Pod          spec.containers{esearch}    Normal    Pulled                    kubelet, minikube       Container image "elasticsearch:2.4" already present on machine
    19m        19m         1         esearch-3913228203-6l3t7    Pod          spec.containers{esearch}    Normal    Created                   kubelet, minikube       Created container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
    19m        19m         1         esearch-3913228203-6l3t7    Pod          spec.containers{esearch}    Normal    Started                   kubelet, minikube       Started container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
    18m        18m         1         hrm-3288407936-d2vhh        Pod                                      Normal    Scheduled                 default-scheduler       Successfully assigned hrm-3288407936-d2vhh to minikube
    18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Pulling                   kubelet, minikube       pulling image "private repo"
    18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
    18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Created                   kubelet, minikube       Created container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
    18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Started                   kubelet, minikube       Started container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
    18m        18m         1         hrm-3288407936              ReplicaSet                               Normal    SuccessfulCreate          replicaset-controller   Created pod: hrm-3288407936-d2vhh
    18m        18m         1         hrm                         Deployment                               Normal    ScalingReplicaSet         deployment-controller   Scaled up replica set hrm-3288407936 to 1
    19m        19m         1         minikube                    Node                                     Normal    RegisteredNode            controllermanager       Node minikube event: Registered Node minikube in NodeController
    19m        19m         1         minikube                    Node                                     Normal    Starting                  kubelet, minikube       Starting kubelet.
    19m        19m         1         minikube                    Node                                     Warning   ImageGCFailed             kubelet, minikube       unable to find data for container /
    19m        19m         1         minikube                    Node                                     Normal    NodeAllocatableEnforced   kubelet, minikube       Updated Node Allocatable limit across pods
    19m        19m         1         minikube                    Node                                     Normal    NodeHasSufficientDisk     kubelet, minikube       Node minikube status is now: NodeHasSufficientDisk
    19m        19m         1         minikube                    Node                                     Normal    NodeHasSufficientMemory   kubelet, minikube       Node minikube status is now: NodeHasSufficientMemory
    19m        19m         1         minikube                    Node                                     Normal    NodeHasNoDiskPressure     kubelet, minikube       Node minikube status is now: NodeHasNoDiskPressure
    19m        19m         1         minikube                    Node                                     Warning   Rebooted                  kubelet, minikube       Node minikube has been rebooted, boot id: f66e28f9-62b3-4066-9e18-33b152fa1300
    19m        19m         1         minikube                    Node                                     Normal    NodeNotReady              kubelet, minikube       Node minikube status is now: NodeNotReady
    19m        19m         1         minikube                    Node                                     Normal    Starting                  kube-proxy, minikube    Starting kube-proxy.
    19m        19m         1         minikube                    Node                                     Normal    NodeReady                 kubelet, minikube       Node minikube status is now: NodeReady
    8m         8m          1         minikube                    Node                                     Warning   SystemOOM                 kubelet, minikube       System OOM encountered
    18m        18m         1         parabot-1262887100-r84kf    Pod                                      Normal    Scheduled                 default-scheduler       Successfully assigned parabot-1262887100-r84kf to minikube
    8m         18m         2         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Pulling                   kubelet, minikube       pulling image "private repo"
    8m         18m         2         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
    18m        18m         1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Created                   kubelet, minikube       Created container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
    18m        18m         1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Started                   kubelet, minikube       Started container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
    8m         8m          1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Created                   kubelet, minikube       Created container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
    8m         8m          1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Started                   kubelet, minikube       Started container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
    18m        18m         1         parabot-1262887100          ReplicaSet                               Normal    SuccessfulCreate          replicaset-controller   Created pod: parabot-1262887100-r84kf
    18m        18m         1         parabot                     Deployment                               Normal    ScalingReplicaSet         deployment-controller   Scaled up replica set parabot-1262887100 to 1
    19m        19m         1         rabbitmq-279796448-pcqqh    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
    19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Pulling                   kubelet, minikube       pulling image "rabbitmq"
    19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Pulled                    kubelet, minikube       Successfully pulled image "rabbitmq"
    19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Created                   kubelet, minikube       Created container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
    19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Started                   kubelet, minikube       Started container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
    19m        19m         1         suite-ui-1725964700-ssshn   Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
    19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Pulling                   kubelet, minikube       pulling image "private repo"
    19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
    19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Created                   kubelet, minikube       Created container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a
    19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Started                   kubelet, minikube       Started container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a
    

One solution:

    Check the pod logs with kubectl logs for any obvious errors. In this case, as suspected, it looks like an insufficient-resources problem (or a service with a resource leak): the container exited with code 137 (128 + 9, i.e. SIGKILL, typically the OOM killer), and the node reported a SystemOOM event shortly before the restarts.
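    For example, these commands may surface the reason for the restarts (the pod name is taken from the kubectl get pods output above):

        # logs of the previous, crashed container instance
        kubectl logs hrm-3288407936-cwvgz --previous

        # last state, exit code, restart count, and events for the pod
        kubectl describe pod hrm-3288407936-cwvgz

        # cluster events, including node-level warnings such as SystemOOM
        kubectl get events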
    If possible, try increasing resources to see if it helps.
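    As a sketch, assuming the containers are being OOM-killed, memory requests and limits could be added to the hrm container so the scheduler accounts for its real footprint; the values below are placeholders to tune per application:

        spec:
          containers:
          - image: privaterepo/hrm-core
            name: hrm
            ports:
            - containerPort: 8086
            resources:
              requests:
                memory: "256Mi"   # placeholder: amount the scheduler reserves for the pod
              limits:
                memory: "512Mi"   # placeholder: the container is killed if it exceeds this

    Since everything runs on a single minikube node, the VM itself may also simply be too small for six services; recreating it with more memory (for example, minikube start --memory 4096) is another option worth trying.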
