Google Kubernetes storage in EC2

I started to use Docker and I’m trying out Google’s Kubernetes project for my container orchestration. It looks really good!

The only thing I’m curious about is how I would handle volume storage.

I’m using EC2 instances, and the containers mount volumes from the EC2 filesystem.

The only thing left is how to deploy my application code onto all those EC2 instances, right? How can I handle this?

One solution for “Google Kubernetes storage in EC2”:

    It’s somewhat unclear what you’re asking, but a good place to start would be reading about your options for volumes in Kubernetes.

    The options include using local EC2 disk with a lifetime tied to the lifetime of your pod (emptyDir), local EC2 disk with lifetime tied to the lifetime of the node VM (hostDir), and an Elastic Block Store volume (awsElasticBlockStore).
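    As a sketch, those three options correspond to entries in a pod spec’s `volumes` list. The manifest below is illustrative (names, paths, and the EBS volume ID are placeholders, and `hostDir` was later renamed `hostPath` in Kubernetes 1.0, which is the spelling shown here):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: storage-demo
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: scratch
          mountPath: /scratch        # data lives only as long as the pod
        - name: node-disk
          mountPath: /var/data       # data lives as long as the node VM
        - name: ebs
          mountPath: /mnt/ebs        # data persists independently on AWS EBS
      volumes:
      - name: scratch
        emptyDir: {}                 # local EC2 disk, tied to the pod's lifetime
      - name: node-disk
        hostPath:
          path: /var/lib/app-data    # a directory on the EC2 host itself
      - name: ebs
        awsElasticBlockStore:
          volumeID: vol-0123456789abcdef0   # placeholder: an existing EBS volume ID
          fsType: ext4
    ```

    For anything that must survive pod rescheduling, the `awsElasticBlockStore` option is the one whose lifetime is decoupled from both the pod and the node.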
