access docker container in kubernetes

I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:

$ docker run -p 33333:8080 foo
* Running on (Press CTRL+C to quit) 

I can test it with:

$ nc -v localhost 33333
connection succeeded!
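The same reachability check that `nc -v` performs can be scripted. A minimal sketch in Python (the host and port here are the ones from the post; `port_open` is a hypothetical helper, not part of any library):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection resolves the host and attempts a TCP connect
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers connection refused, timeouts, and resolution failures
        return False

if __name__ == "__main__":
    print(port_open("localhost", 33333))
```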

However, when I deploy it in Kubernetes it doesn't work.
Here is the manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    name: foo-pod
spec:
  containers:
  - name: foo
    image: bar/foo:latest
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 33333
  selector:
    name: foo-pod

Deployed with:

$ kubectl apply -f foo.yaml
$ nc -v <publicIP> 33333
Connection refused

I don't understand why I cannot access it…

One solution:

The problem was that the application was listening only on the loopback address, 127.0.0.1. It needs to listen on 0.0.0.0 (all interfaces) to be reachable in Kubernetes. A change in the application code did the trick.
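The original application code isn't shown, so as a sketch here is a minimal Python HTTP server (standard library only, standing in for whatever framework the app actually uses) that binds to all interfaces on port 8080:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Binding to 0.0.0.0 accepts traffic arriving on any interface.
    # Binding to 127.0.0.1 would only accept connections originating
    # inside the container itself, so forwarded NodePort traffic
    # could never reach the server.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

In Flask the equivalent change is passing `host="0.0.0.0"` to `app.run()`; Flask's default is the loopback-only 127.0.0.1.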
