How to limit Docker filesystem space available to container(s)

The general scenario is that we have a cluster of servers and we want to set up virtual clusters on top of that using Docker.

For that we have created Dockerfiles for different services (Hadoop, Spark etc.).

Regarding the Hadoop HDFS service, however, the disk space available to the Docker containers equals the disk space available to the server. We want to limit the available disk space on a per-container basis so that we can dynamically spawn an additional datanode with a given storage size to contribute to the HDFS filesystem.

We had the idea to use loopback files formatted with ext4 and mount them on directories that we use as volumes in Docker containers. However, this implies a significant performance penalty.
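For reference, the loopback-file idea can be sketched as follows. The paths, the 10 GiB size, and the datanode image name are illustrative, and mounting requires root:

```shell
# Create a sparse 10 GiB backing file (takes no real space until written):
truncate -s 10G /var/lib/hdfs-vol1.img

# Format it with ext4 (-F skips the "not a block device" prompt):
mkfs.ext4 -F /var/lib/hdfs-vol1.img

# Mount it via a loop device; the mount point is now capped at 10 GiB:
mkdir -p /mnt/hdfs-vol1
mount -o loop /var/lib/hdfs-vol1.img /mnt/hdfs-vol1

# Hand the capped directory to a datanode container as a volume
# (my-hdfs-datanode and the data path are hypothetical):
docker run -d -v /mnt/hdfs-vol1:/hadoop/dfs/data my-hdfs-datanode
```

The write path then goes through the loop device and a second filesystem layer, which is where the performance loss mentioned above comes from.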

I found another question on SO (Limit disk size and bandwidth of a Docker container), but the answers are almost 1.5 years old, which, given the pace of Docker development, is ancient.

Which approach or storage backend would allow us to:

• limit storage on a per-container basis,
• achieve near bare-metal performance, and
• avoid repartitioning the server drives?

2 Answers

    You can specify runtime constraints on memory and CPU, but not disk space.

    The ability to set constraints on disk space has been requested (issue 12462, issue 3804), but isn’t yet implemented, as it depends on the underlying filesystem driver.

    This feature is going to be added at some point, but not right away. It’s a bit more difficult to add this functionality right now because a lot of chunks of code are moving from one place to another. After this work is done, it should be much easier to implement this functionality.

Please keep in mind that quota support can’t be added as a hack to devicemapper; it has to cover as many storage backends as possible, so it must be implemented in a way that makes it easy to add quota support for other storage backends.


Update August 2016: as shown below, and noted in an issue 3804 comment, PR 24771 and PR 24807 have since been merged. docker run now allows setting storage driver options per container:

    $ docker run -it --storage-opt size=120G fedora /bin/bash
    

The size option sets the container rootfs size to 120G at creation time.
This option is only available for the devicemapper, btrfs, overlay2, windowsfilter, and zfs graph drivers.
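Assuming the daemon is running one of the supported graph drivers, a quick way to verify the cap is to check df inside such a container:

```shell
# Run a throwaway container with a 120G rootfs cap and inspect its
# root filesystem; the reported size should reflect the cap:
docker run --rm --storage-opt size=120G fedora df -h /
```

If the daemon's driver does not support the option, docker run fails with an error instead of silently ignoring it.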


    Further info:

    https://github.com/docker/docker/blob/master/docs/reference/commandline/run.md#set-storage-driver-options-per-container
