How to limit Docker filesystem space available to container(s)

The general scenario is that we have a cluster of servers and we want to set up virtual clusters on top of that using Docker.

For that we have created Dockerfiles for different services (Hadoop, Spark etc.).

Regarding the Hadoop HDFS service, however, we have the situation that the disk space available to the Docker containers equals the disk space available to the server. We want to limit the available disk space on a per-container basis, so that we can dynamically spawn an additional datanode with a given storage size to contribute to the HDFS filesystem.

We had the idea to use loopback files formatted with ext4 and to mount these on directories which we use as volumes in Docker containers. However, this approach comes with a large performance penalty.

I found another question on SO (Limit disk size and bandwidth of a Docker container), but the answers are almost 1.5 years old, which – given the pace of Docker development – is ancient.

Which approach or storage backend would allow us to

• limit storage on a per-container basis,
• get near bare-metal performance, and
• avoid repartitioning the server drives?
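The loopback approach mentioned above can be sketched as follows; the file name, size, mount point, and image name are all placeholders:

```shell
# Create a sparse 10 GB backing file (path and size are illustrative)
dd if=/dev/zero of=dn1.img bs=1 count=0 seek=10G

# Format it as ext4; -F skips the "not a block device" confirmation
mkfs.ext4 -q -F dn1.img

# The remaining steps need root: loop-mount the file and hand the
# mount point to the container as a volume
# sudo mount -o loop dn1.img /mnt/dn1
# docker run -v /mnt/dn1:/hadoop/dfs/data my-datanode-image
```

The container can then write at most 10 GB into that volume, but every write goes through the loop device's extra indirection, which is where the performance loss comes from.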

2 Solutions for “How to limit Docker filesystem space available to container(s)”

    You can specify runtime constraints on memory and CPU, but not disk space.
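For example, memory and CPU can be constrained at `docker run` time (the values below are illustrative), but there is no analogous flag for disk space:

```shell
# Cap the container's memory at 512 MB and halve its CPU-share weight
# relative to the default of 1024; no equivalent exists for the rootfs size
docker run -it -m 512m --cpu-shares 512 fedora /bin/bash
```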

    The ability to set constraints on disk space has been requested (issue 12462, issue 3804), but isn’t yet implemented, as it depends on the underlying filesystem driver.

    This feature is going to be added at some point, but not right away. It’s difficult to add right now because large chunks of code are moving from one place to another; once that work is done, it should be much easier to implement.

    Please keep in mind that quota support can’t be added as a hack to devicemapper; it has to cover as many storage backends as possible, so it needs to be implemented in a way that makes adding quota support for further backends easy.


    Update August 2016: as mentioned in an issue 3804 comment, PR 24771 and PR 24807 have since been merged. docker run now allows setting storage driver options per container:

    $ docker run -it --storage-opt size=120G fedora /bin/bash
    

    This option (size) sets the container rootfs size to 120G at creation time.
    It is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers.
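A quick way to check whether a given host qualifies (commands are illustrative; note that for overlay2 the size option additionally requires the Docker data root to sit on an xfs filesystem mounted with project quotas):

```shell
# Which storage driver is the daemon using?
docker info --format '{{.Driver}}'

# For overlay2, the backing filesystem must be xfs mounted with pquota;
# xfs reports this as "prjquota" in its mount options
grep prjquota /proc/mounts

# Start a container with a 20G rootfs and verify the limit from inside
docker run --rm --storage-opt size=20G fedora df -h /
```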


    Further info:

    https://github.com/docker/docker/blob/master/docs/reference/commandline/run.md#set-storage-driver-options-per-container
