Access Docker container in Kubernetes

I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:

$ docker run -p 33333:8080 foo
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit) 

I can test it with:

$ nc -v localhost 33333
connection succeeded!
    

However, when I deploy it to Kubernetes, I cannot connect to it.
Here are the manifest files:

apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    name: foo-pod
spec:
  containers:
  - name: foo
    image: bar/foo:latest
    ports:
    - containerPort: 8080
    

and

apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 33333
  selector:
    name: foo-pod
    

Deployed with:

$ kubectl apply -f foo.yaml

and tested against the node's public IP:

$ nc -v <publicIP> 33333
Connection refused
    

I don’t understand why I cannot access it…

Solution:

The problem was that the application was listening on 127.0.0.1.
Inside the pod it needs to listen on 0.0.0.0: traffic forwarded by kube-proxy arrives on the pod's network interface, and a socket bound only to the loopback address refuses it. A change in the application code did the trick.
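For reference, here is what the fix might look like. The "* Running on http://127.0.0.1:8080/" banner in the question suggests Flask's development server, but that is an assumption; the module name and route below are hypothetical, and the same principle applies to any framework: bind to 0.0.0.0, not 127.0.0.1.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # Binding to 127.0.0.1 only accepts connections from inside the
    # container's own network namespace. Binding to 0.0.0.0 accepts
    # connections arriving on any interface, including those forwarded
    # by `docker run -p` and by kube-proxy on the NodePort.
    app.run(host="0.0.0.0", port=8080)

After the change the banner should report the server running on 0.0.0.0, and the nc test against the node's public IP should succeed. As a side note, the earlier local test can be misleading: when Docker's userland proxy is enabled (the default), a TCP connection to the published host port is accepted by the proxy even if the application inside the container is unreachable, so "connection succeeded" does not by itself prove the app answered.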
