Google Kubernetes storage in EC2

I’ve started using Docker and I’m trying out Google’s Kubernetes project for container orchestration. It looks really good!

The only thing I’m not sure about is how to handle volume storage.

I’m using EC2 instances, and the containers mount volumes from the EC2 filesystem.

The only thing left is figuring out how to deploy my application code onto all of those EC2 instances, right? How can I handle this?

One solution:

It’s somewhat unclear what you’re asking, but a good place to start would be reading about your options for volumes in Kubernetes.

The options include local disk on the EC2 instance with a lifetime tied to the pod (emptyDir), local disk with a lifetime tied to the node VM (hostDir, later renamed hostPath), and an Amazon Elastic Block Store volume (awsElasticBlockStore), which outlives both the pod and the node.
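For concreteness, here is a minimal pod manifest sketching all three options. The image name, mount paths, and EBS volume ID are hypothetical placeholders; the EBS volume must already exist in the same availability zone as the node, and on newer clusters the in-tree awsElasticBlockStore plugin has since been superseded by the EBS CSI driver.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: mongo               # placeholder image; any container works
    volumeMounts:
    - name: scratch            # emptyDir: wiped when the pod goes away
      mountPath: /tmp/scratch
    - name: node-disk          # hostPath: tied to this node's local disk
      mountPath: /var/node-data
    - name: mongo-data         # EBS: survives both the pod and the node
      mountPath: /data/db
  volumes:
  - name: scratch
    emptyDir: {}
  - name: node-disk
    hostPath:
      path: /mnt/node-data     # a directory on the EC2 instance itself
  - name: mongo-data
    awsElasticBlockStore:
      volumeID: vol-0abc123    # hypothetical, pre-provisioned EBS volume
      fsType: ext4
```

For durable data such as a MongoDB database, the EBS-backed volume is the one to reach for, since emptyDir and hostPath data is lost when the pod or the instance disappears.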
