Google Kubernetes storage in EC2

I started to use Docker and I’m trying out Google’s Kubernetes project for my container orchestration. It looks really good!

The only thing I’m curious about is how I would handle volume storage.

I’m using EC2 instances, and the containers mount volumes from the EC2 filesystem.

The only thing left is how to deploy my application code onto all of those EC2 instances, right? How can I handle that?

One solution, collected from the web, for “Google Kubernetes storage in EC2”:

    It’s somewhat unclear what you’re asking, but a good place to start would be reading about your options for volumes in Kubernetes.

    The options include local EC2 disk with a lifetime tied to the lifetime of your pod (emptyDir), local EC2 disk with a lifetime tied to the lifetime of the node VM (hostDir, renamed hostPath in later Kubernetes versions), and an Elastic Block Store volume (awsElasticBlockStore), which persists independently of both the pod and the node.
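    As a rough sketch, the three volume types could appear in a pod spec like the one below. The pod and volume names, mount paths, and the EBS volume ID are all placeholders you would replace with your own values.

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo            # placeholder name
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: scratch
              mountPath: /tmp/scratch    # lives as long as the pod
            - name: node-disk
              mountPath: /var/node-data  # lives as long as the node VM
            - name: data
              mountPath: /data           # persists independently in EBS
      volumes:
        - name: scratch
          emptyDir: {}
        - name: node-disk
          hostPath:                      # called hostDir in early Kubernetes
            path: /mnt/node-data
        - name: data
          awsElasticBlockStore:
            volumeID: vol-0123456789abcdef0  # placeholder EBS volume ID
            fsType: ext4
    ```

    Note that an awsElasticBlockStore volume can only be attached to an EC2 instance in the same availability zone as the EBS volume itself.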
