Why doesn't my pod respond to requests on the exposed port?

I’ve just launched a fairly basic cluster based on the CoreOS kube-aws scripts.

https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html

I’ve activated the registry add-on, and I have it correctly proxying to my local box so I can push images to the cluster on localhost:5000. I also have the proxy pod correctly loaded on each node so that localhost:5000 will also pull images from that registry.

    https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry
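
    Roughly, the push flow looks like this (the registry pod name and the image tag are placeholders for my actual values):

    # forward a local port to the registry pod in the cluster
    kubectl port-forward --namespace kube-system <kube-registry-pod> 5000:5000 &

    # tag and push the app image through the forwarded port
    docker tag app:v2 localhost:5000/app:v2
    docker push localhost:5000/app:v2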

    Then I dockerized a fairly simple Sinatra app to run on my cluster and pushed it to the registry. I also prepared a ReplicationController definition and a Service definition to run the app. The images pulled and started without a problem, and I can use kubectl to get the startup logs from each pod the controller manages.
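
    For reference, deploying it boils down to something like this (the file and pod names here are placeholders for mine):

    kubectl create -f web-controller.yaml --namespace=app
    kubectl create -f web-service.yaml --namespace=app

    # startup logs from one of the replicas (placeholder pod name)
    kubectl logs web-controller-abc12 --namespace=app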

    My problem is that when I curl the public ELB endpoint for my service, it just hangs.

    Things I’ve tried:

    • I got the public IP of one of the nodes running my pod and tried to curl it at the NodePort shown in the service description; same thing.
    • I SSH’d into that node and ran curl localhost:3000; same result.
    • While SSH’d into that node, I also tried curl <pod-ip>:3000; same result.
    • ps shows the Puma process running and listening on port 3000.
    • docker ps on the node shows that the app container is not forwarding any ports to the host. Could that be the problem?

    The requests must be routing correctly, because hitting those IPs on any other port results in a connection refused error rather than a hang.
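
    The next checks I’d run are along these lines (pod name is a placeholder):

    # are both replicas actually Running, and how many times have they restarted?
    kubectl get pods --namespace=app -l app=web

    # recent events for one pod (failed pulls, kills, etc.)
    kubectl describe pod web-controller-abc12 --namespace=app

    # does the service have endpoints behind it?
    kubectl get endpoints web-service --namespace=app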

    The Dockerfile for my app is fairly straightforward:

    FROM ruby:2.2.4-onbuild
    RUN apt-get update -qq && apt-get install -y \
      libpq-dev \
      postgresql-client
    
    RUN mkdir -p /app
    WORKDIR /app
    
    COPY . /app
    
    EXPOSE 3000
    
    ENTRYPOINT ["ruby", "/app/bin/entrypoint.rb"]
    

    Where entrypoint.rb will start a Puma server listening on port 3000.
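
    One sanity check that is easy to reproduce outside the cluster is running the image directly and hitting it from the host (image tag as above, env values copied from the controller definition):

    docker run --rm -p 3000:3000 \
      -e DATABASE_URL=postgresql://some.postgres.aws.com:5432 \
      -e REDIS_URL=redis://some.redis.aws.com:6379 \
      localhost:5000/app:v2 web

    # in another shell on the same machine
    curl -v localhost:3000

    If this fails too, that would point at the app or the image rather than Kubernetes.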

    My ReplicationController is defined like so:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-controller
      namespace: app
    spec:
      replicas: 2
      selector:
        app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          volumes:
          - name: secrets
            secret:
              secretName: secrets
          containers:
          - name: app
            image: localhost:5000/app:v2
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
            env:
            - name: DATABASE_NAME
              value: app_production
            - name: DATABASE_URL
              value: postgresql://some.postgres.aws.com:5432
            - name: ENV
              value: production
            - name: REDIS_URL
              value: redis://some.redis.aws.com:6379
            volumeMounts:
            - name: secrets
              mountPath: "/etc/secrets"
              readOnly: true
            command: ['/app/bin/entrypoint.rb', 'web']
            ports:
              - containerPort: 3000
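
    To double-check the controller on its own, I can look at what it created (rc is the kubectl shorthand for replicationcontroller):

    # desired vs. current replica counts and recent events for the controller
    kubectl describe rc web-controller --namespace=app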
    

    And here is my service:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-service
    spec:
      ports:
      - port: 80
        targetPort: 3000
        protocol: TCP
      selector:
        app: web
      type: LoadBalancer
    

    Output of kubectl describe service web-service:

    Name:           web-service
    Namespace:      app
    Labels:         <none>
    Selector:       app=web
    Type:           LoadBalancer
    IP:         10.3.0.204
    LoadBalancer Ingress:   some.elb.aws.com
    Port:           <unnamed>   80/TCP
    NodePort:       <unnamed>   32062/TCP
    Endpoints:      10.2.47.3:3000,10.2.73.3:3000
    Session Affinity:   None
    No events.
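
    Since Endpoints is populated, one more test that cuts out the ELB, kube-proxy and the node network entirely is to curl the app from inside one of its own pods (placeholder pod name; assuming curl is present in the image). The lack of port forwarding in docker ps is expected with Kubernetes networking, since pods are reached by their pod IPs rather than host port mappings.

    kubectl exec web-controller-abc12 --namespace=app -- curl -sv --max-time 5 localhost:3000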
    

    Edit to add entrypoint.rb and Procfile

    entrypoint.rb:

    #!/usr/bin/env ruby
    
    db_user_file = '/etc/secrets/database_user'
    db_password_file = '/etc/secrets/database_password'
    
    ENV['DATABASE_USER'] = File.read(db_user_file) if File.exist?(db_user_file)
    ENV['DATABASE_PASSWORD'] = File.read(db_password_file) if File.exist?(db_password_file)
    
    exec("bundle exec foreman start #{ARGV[0]}")
    

    Procfile:

    web: PORT=3000 bundle exec puma
    message_worker: bundle exec sidekiq -q messages -c 1 -r ./config/environment.rb
    email_worker: bundle exec sidekiq -q emails -c 1 -r 
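
    One more thing worth checking from inside a pod is whether the external services (Postgres, Redis) are reachable at all, since a hung DB connection at boot could keep Puma from ever listening. A rough check (placeholder pod name; the one-liner just attempts a TCP connect with a timeout):

    kubectl exec web-controller-abc12 --namespace=app -- \
      ruby -e 'require "socket"; Socket.tcp("some.postgres.aws.com", 5432, connect_timeout: 5) { puts "db reachable" }'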
    

One solution:

    There was nothing wrong with my Kubernetes setup. It turns out the app was failing to start because its connection to the DB was timing out, due to a separate networking issue.

    For anyone curious: don’t launch anything external to Kubernetes (e.g. RDS, ElastiCache) in the 10.x.x.x IP range. Long story short, Kubernetes currently hardcodes an iptables masquerade rule that breaks communication with anything in that range that isn’t part of the cluster. See the details here.
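
    If you want to see this for yourself, the rule is visible in the nat table on any node (run over SSH; this is just for inspection, not a fix):

    # the POSTROUTING chain holds the hardcoded masquerade rule mentioned above;
    # it excludes 10.0.0.0/8, so traffic to that range is treated as in-cluster
    sudo iptables -t nat -L POSTROUTING -n -v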

    What I ended up doing was creating a separate VPC for my data stores on a different IP range and peering it with my Kubernetes VPC.
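
    The peering itself is only a few CLI calls plus a route in each VPC’s route table (all IDs and CIDRs below are placeholders; the two VPCs’ CIDRs must not overlap):

    # request and accept a peering connection between the two VPCs
    aws ec2 create-vpc-peering-connection --vpc-id vpc-k8s11111 --peer-vpc-id vpc-data22222
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333

    # route each VPC's CIDR through the peering connection from the other side
    aws ec2 create-route --route-table-id rtb-k8s44444 \
      --destination-cidr-block 172.16.0.0/16 --vpc-peering-connection-id pcx-33333333
    aws ec2 create-route --route-table-id rtb-data55555 \
      --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-33333333

    On top of that, the data stores’ security groups need to allow traffic from the Kubernetes nodes.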
