Why doesn't my pod respond to requests on the exposed port?

I’ve just launched a fairly basic cluster based on the CoreOS kube-aws scripts.


I’ve activated the registry add-on, and I have it correctly proxying to my local box so I can push images to the cluster on localhost:5000. I also have the proxy pod correctly loaded on each node so that localhost:5000 will also pull images from that registry.
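
For anyone reproducing this: the proxying is just a kubectl port-forward against the registry pod. A minimal sketch, assuming the stock kube-registry add-on (the pod name below is hypothetical; look it up first):

    # Find the registry pod in kube-system (name below is hypothetical)
    kubectl get pods --namespace=kube-system | grep kube-registry

    # Forward localhost:5000 to it so docker push/pull work locally
    kubectl port-forward --namespace=kube-system kube-registry-v0-abcd1 5000:5000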


    Then I dockerized a fairly simple Sinatra app to run on my cluster and pushed it to the registry. I also prepared a ReplicationController definition and a Service definition to run the app. The images pulled and started with no problem, and I can use kubectl to get the startup logs from each pod that belongs to the replication group.

    My problem is that when I curl the public ELB endpoint for my service, it just hangs.

    Things I’ve tried:

    • I got the public IP for one of the nodes running my pod and attempted to curl it at the NodePort described in the service description, same thing.
    • I SSH’d into that node and attempted to curl localhost:3000; same result.
    • While SSH’d into that node, I also attempted to curl <pod-ip>:3000; same result.
    • ps shows the Puma process running and listening on port 3000.
    • docker ps on the node shows that the app container is not forwarding any ports to the host. Is that maybe the problem?

    The requests must be routing correctly because hitting those IPs at any other port results in a connection refused rather than hanging.
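
    One more check that separates “the app never came up” from “the network can’t reach the app” is to curl from inside the pod’s own network namespace. A sketch, assuming curl exists in the image and <web-pod-name> comes from the first command:

    # Find a pod backing the service
    kubectl get pods --namespace=app -l app=web -o wide

    # Curl the app from inside the pod; if this hangs too, the app is not serving
    kubectl exec --namespace=app <web-pod-name> -- curl -sv localhost:3000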

    The Dockerfile for my app is fairly straightforward:

    FROM ruby:2.2.4-onbuild
    RUN apt-get update -qq && apt-get install -y \
      libpq-dev
    RUN mkdir -p /app
    WORKDIR /app
    COPY . /app
    EXPOSE 3000
    ENTRYPOINT ["ruby", "/app/bin/entrypoint.rb"]

    Where entrypoint.rb will start a Puma server listening on port 3000.
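
    A quick way to take Kubernetes out of the picture is to run the same image straight under Docker and curl it; if it hangs here too, the problem is the app’s boot rather than cluster networking. A sketch (env vars and secrets will be missing, but a boot failure shows up right away in the output):

    # Run the image standalone with the port published to the host
    docker run --rm -p 3000:3000 localhost:5000/app:v2 web

    # From another shell on the same host
    curl -v localhost:3000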

    My replication group is defined like so:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-controller
      namespace: app
    spec:
      replicas: 2
      selector:
        app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          volumes:
            - name: secrets
              secret:
                secretName: secrets
          containers:
            - name: app
              image: localhost:5000/app:v2
              resources:
                limits:
                  cpu: 100m
                  memory: 50Mi
              env:
                - name: DATABASE_NAME
                  value: app_production
                - name: DATABASE_URL
                  value: postgresql://some.postgres.aws.com:5432
                - name: ENV
                  value: production
                - name: REDIS_URL
                  value: redis://some.redis.aws.com:6379
              volumeMounts:
                - name: secrets
                  mountPath: "/etc/secrets"
                  readOnly: true
              command: ['/app/bin/entrypoint.rb', 'web']
              ports:
                - containerPort: 3000
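
    With no readiness probe on the container, Kubernetes marks the pod Ready as soon as the process starts, even if the app is still hung connecting to its database, so it’s worth checking pod status and events directly. A quick sketch with standard kubectl (<web-pod-name> is a placeholder):

    # READY and RESTARTS are the first things to check
    kubectl get pods --namespace=app -l app=web

    # Events at the bottom surface crash loops and probe failures
    kubectl describe pod --namespace=app <web-pod-name>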

    And here is my service:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-service
      namespace: app
    spec:
      ports:
        - port: 80
          targetPort: 3000
          protocol: TCP
      selector:
        app: web
      type: LoadBalancer

    Output of kubectl describe service web-service:

    Name:           web-service
    Namespace:      app
    Labels:         <none>
    Selector:       app=web
    Type:           LoadBalancer
    LoadBalancer Ingress:   some.elb.aws.com
    Port:           <unnamed>   80/TCP
    NodePort:       <unnamed>   32062/TCP
    Session Affinity:   None
    No events.
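
    The describe output doesn’t show whether the service actually has endpoints; if the selector matches no ready pods, the endpoint list is empty and the ELB has nothing to forward to. A quick check with standard kubectl:

    # Should list one <pod-ip>:3000 entry per running web pod
    kubectl get endpoints web-service --namespace=app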


    Edit to add entrypoint.rb and Procfile


    entrypoint.rb:

    #!/usr/bin/env ruby

    # Load DB credentials from the mounted secrets volume, if present
    db_user_file = '/etc/secrets/database_user'
    db_password_file = '/etc/secrets/database_password'
    ENV['DATABASE_USER'] = File.read(db_user_file) if File.exists?(db_user_file)
    ENV['DATABASE_PASSWORD'] = File.read(db_password_file) if File.exists?(db_password_file)

    # Hand off to foreman, running the process type passed as the first argument
    exec("bundle exec foreman start #{ARGV[0]}")


    Procfile:

    web: PORT=3000 bundle exec puma
    message_worker: bundle exec sidekiq -q messages -c 1 -r ./config/environment.rb
    email_worker: bundle exec sidekiq -q emails -c 1 -r 

One solution:

    There was nothing wrong with my Kubernetes setup. It turns out that the app was failing to start because its connection to the DB was timing out, due to an unrelated networking issue.

    For anyone curious: don’t launch anything external to Kubernetes in the 10.x.x.x IP range (e.g. RDS, ElastiCache, etc.). Long story short, Kubernetes currently has a hardcoded iptables masquerade rule that messes up communication with anything in that range that isn’t part of the cluster. See the details here.
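
    For anyone who wants to see the rule on their own nodes, it lives in the nat table; a sketch (run on a worker node; the exact rule text varies by version):

    # Look for the MASQUERADE rule scoped to 10.0.0.0/8
    sudo iptables -t nat -L POSTROUTING -n -v | grep MASQUERADE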

    What I ended up doing was creating a separate VPC for my data stores on a different IP range and peering it with my Kubernetes VPC.
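
    The peering itself comes down to two AWS CLI calls plus route table entries; a sketch with hypothetical VPC and route-table IDs, assuming 172.16.0.0/16 as the data-store VPC’s CIDR:

    # Request and accept the peering connection (all IDs are hypothetical)
    aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cccc3333

    # Each side's route table needs a route to the other VPC's CIDR
    aws ec2 create-route --route-table-id rtb-dddd4444 --destination-cidr-block 172.16.0.0/16 --vpc-peering-connection-id pcx-cccc3333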
