access docker container in kubernetes

I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:

$ docker run -p 33333:8080 foo
* Running on (Press CTRL+C to quit) 

I can test it with:

$ nc -v localhost 33333
connection succeeded!

However, when I deploy it in Kubernetes, it doesn't work.
Here is the manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    name: foo-pod
spec:
  containers:
  - name: foo
    image: bar/foo:latest
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 33333
  selector:
    name: foo-pod

Deployed with:

$ kubectl apply -f foo.yaml
$ nc -v <publicIP> 33333
Connection refused
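When a NodePort refuses connections, a few standard kubectl checks narrow down where the traffic is being dropped. This is a generic debugging sketch using the names from the manifest above, not steps from the original question:

```shell
# Check that the Pod is Running and the Service found endpoints; an empty
# Endpoints list means the Service selector does not match the pod labels.
kubectl -n foo get pods,svc -o wide
kubectl -n foo describe service foo-service

# Bypass the NodePort entirely and tunnel straight to the pod. If this works
# but the NodePort does not, the problem is in the Service; if this also
# fails, the application inside the pod is not reachable at all.
kubectl -n foo port-forward pod/foo-pod 8080:8080 &
nc -v localhost 8080

# Note: the default NodePort range is 30000-32767, so 33333 may be rejected
# unless the API server runs with a wider --service-node-port-range.
```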

I don’t understand why I cannot access it…
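The `nc` reachability checks above can also be scripted. A minimal Python sketch (the `port_open` helper is a name made up here, not something from the question):

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, like `nc -v`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both "connection refused" and timeouts.
        return False
```

For example, `port_open("<publicIP>", 33333)` mirrors the failing check above.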

One solution:

The problem was that the application was listening on 127.0.0.1.
It needs to listen on 0.0.0.0 to be reachable in Kubernetes. A change in the application code did the trick.
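The difference can be sketched with plain stdlib sockets (the original application code is not shown in the question, so this only illustrates the bind address): binding to 127.0.0.1 accepts connections solely from inside the container's own network namespace, while 0.0.0.0 accepts the traffic kube-proxy forwards to the pod IP.

```python
import socket


def make_listener(host: str, port: int = 0) -> socket.socket:
    """Open a TCP listening socket bound to the given interface address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))  # port 0 lets the OS pick a free port
    s.listen(1)
    return s


# Reachable only via loopback -- invisible to traffic hitting the pod IP:
loopback_only = make_listener("127.0.0.1")
# Reachable on every interface, which is what a containerized server needs:
all_interfaces = make_listener("0.0.0.0")
```

For a Flask/Werkzeug app (which the `* Running on` log line resembles), the equivalent change would be `app.run(host="0.0.0.0", port=8080)`.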
