Using AWS EFS with Docker

I am using the new Elastic File System (EFS) provided by Amazon on my single-container Elastic Beanstalk deployment. I can’t figure out why the mounted EFS cannot be mapped into the container.

The EFS mount is successfully performed on the host at /efs-mount-point.

The Dockerrun.aws.json provided is:

    {
      "AWSEBDockerrunVersion": "1"
      "Volumes": [
        {
          "HostDirectory": "/efs-mount-point",
          "ContainerDirectory": "/efs-mount-point"
        }
      ]
    }
    

The volume is then created in the container once it starts running. However, it has mapped the host’s directory /efs-mount-point as a plain directory, not the actual EFS mount. I can’t figure out how to get Docker to map in the EFS filesystem mounted at /efs-mount-point instead of the host’s underlying directory.

Do NFS volumes play nice with Docker?
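
A quick way to catch syntax problems in Dockerrun.aws.json (a missing comma after the version key, for example, makes it invalid JSON, and Beanstalk will reject it) is to validate the file locally before deploying. A minimal sketch, assuming `python3` is on the PATH:

```shell
# Write the volume mapping config and validate it locally.
cat > Dockerrun.aws.json <<'EOF'
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "HostDirectory": "/efs-mount-point",
      "ContainerDirectory": "/efs-mount-point"
    }
  ]
}
EOF

# json.tool exits non-zero on invalid JSON, so the echo only runs on success.
python3 -m json.tool Dockerrun.aws.json > /dev/null && echo "Dockerrun.aws.json is valid JSON"
```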

3 Solutions to “Using AWS EFS with Docker”

    You need to restart Docker after mounting the EFS volume on the host EC2 instance.

    Here’s an example, .ebextensions/efs.config:

    commands:
       01mkdir:
          command: "mkdir -p /efs-mount-point"
       02mount:
          command: "mountpoint -q /efs-mount-point || mount -t nfs4 -o nfsvers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-fa35c253.efs.us-west-2.amazonaws.com:/ /efs-mount-point"
       03restart:
          command: "service docker restart"
    

    EFS with AWS Beanstalk – Multicontainer Docker will work, but a number of things will stop working because you have to restart Docker after you mount the EFS.

    The instance commands

    Searching around you might find that you need to do a “docker restart” after mounting EFS. It’s not that simple: you will run into trouble when autoscaling happens and/or when deploying a new version of the app.

    Below is a script I use for mounting an EFS volume on the Docker instance; the following steps are needed:

    1. Stop the ECS manager. Takes 60 seconds.
    2. Stop the Docker service.
    3. Kill any remaining Docker processes.
    4. Remove the previous network bindings. See the issue https://github.com/docker/docker/issues/7856#issuecomment-239100381
    5. Mount the EFS volume.
    6. Start the Docker service.
    7. Start the ECS service.
    8. Wait for 120 seconds so ECS reaches the correct start/* state; otherwise, for example, the 00enact script will fail. Note this delay is mandatory and is really hard to find any documentation on.

    Here is my script:

    .ebextensions/commands.config:

    commands:
      01stopdocker:
        command: "sudo stop ecs  > /dev/null 2>&1 || /bin/true && sudo service docker stop"
      02killallnetworkbindings:
        command: 'sudo killall docker  > /dev/null 2>&1 || /bin/true'
      03removenetworkinterface:
        command: "rm -f /var/lib/docker/network/files/local-kv.db"
        test: test -f /var/lib/docker/network/files/local-kv.db
      # Mount the EFS created in .ebextensions/media.config
      04mount:
        command: "/tmp/mount-efs.sh"
      # On new instances, a delay needs to be added because of the 00task enact script. It tests for start/ but ECS can be in various start states...
      # Basically, "start ecs" takes some time to run, and it runs async - so we sleep for some time,
      # letting the ECS manager take its time to boot before going on to enact scripts and post-deploy scripts.
      09restart:
        command: "service docker start && sudo start ecs && sleep 120s"
    

    The mount script and environment variables

    .ebextensions/mount-config.config

    # mount-config.config
    # Copy this file to the .ebextensions folder in the root of your app source folder
    option_settings:
      aws:elasticbeanstalk:application:environment:
        EFS_REGION: '`{"Ref": "AWS::Region"}`'
        # Replace with the required mount directory
        EFS_MOUNT_DIR: '/efs_volume'
        # Use in conjunction with efs_volume.config or replace with EFS volume ID of an existing EFS volume
        EFS_VOLUME_ID: '`{"Ref" : "FileSystem"}`'
    
    packages:
      yum:
        nfs-utils: []
    files:
      "/tmp/mount-efs.sh":
          mode: "000755"
          content : |
            #!/bin/bash
    
            EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
            EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
            EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')
    
            echo "Mounting EFS filesystem ${EFS_DNS_NAME} to directory ${EFS_MOUNT_DIR} ..."
    
            echo 'Stopping NFS ID Mapper...'
            service rpcidmapd status &> /dev/null
            if [ $? -ne 0 ] ; then
                echo 'rpc.idmapd is already stopped!'
            else
                service rpcidmapd stop
                if [ $? -ne 0 ] ; then
                    echo 'ERROR: Failed to stop NFS ID Mapper!'
                    exit 1
                fi
            fi
    
            echo 'Checking if EFS mount directory exists...'
            if [ ! -d ${EFS_MOUNT_DIR} ]; then
                echo "Creating directory ${EFS_MOUNT_DIR} ..."
                mkdir -p ${EFS_MOUNT_DIR}
                if [ $? -ne 0 ]; then
                    echo 'ERROR: Directory creation failed!'
                    exit 1
                fi
                chmod 777 ${EFS_MOUNT_DIR}
                if [ $? -ne 0 ]; then
                    echo 'ERROR: Permission update failed!'
                    exit 1
                fi
            else
                echo "Directory ${EFS_MOUNT_DIR} already exists!"
            fi
    
            mountpoint -q ${EFS_MOUNT_DIR}
            if [ $? -ne 0 ]; then
                AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
                echo "mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}"
                mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}
                if [ $? -ne 0 ] ; then
                    echo 'ERROR: Mount command failed!'
                    exit 1
                fi
            else
                echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
            fi
    
            echo 'EFS mount complete.'
    
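
    The mount target hostname that the script assembles from the availability zone, volume ID, and region can be factored into a small helper. A sketch, assuming the classic per-AZ DNS scheme the mount command above uses (the volume ID is the example value from the first answer):

```shell
# Build the per-AZ EFS mount target hostname used in the mount command above.
# Arguments: availability zone, EFS volume ID, AWS region.
efs_dns_name() {
    az="$1"; volume_id="$2"; region="$3"
    echo "${az}.${volume_id}.efs.${region}.amazonaws.com"
}

efs_dns_name "us-west-2a" "fs-fa35c253" "us-west-2"
# -> us-west-2a.fs-fa35c253.efs.us-west-2.amazonaws.com
```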

    The resource and configuration

    You will have to change the option_settings below. To find the VPC and subnet IDs to define there, look in the AWS web console under VPC, where you will find the default VPC ID and the three default subnet IDs. If your Beanstalk environment uses a custom VPC, use those values instead.

    .ebextensions/efs-volume.config:

    # efs-volume.config
    # Copy this file to the .ebextensions folder in the root of your app source folder
    option_settings:
      aws:elasticbeanstalk:customoption: 
        EFSVolumeName: "EB-EFS-Volume"
        VPCId: "vpc-xxxxxxxx"
        SubnetUSWest2a: "subnet-xxxxxxxx"
        SubnetUSWest2b: "subnet-xxxxxxxx"
        SubnetUSWest2c: "subnet-xxxxxxxx"
    
    Resources:
      FileSystem:
        Type: AWS::EFS::FileSystem
        Properties:
          FileSystemTags:
          - Key: Name
            Value:
              Fn::GetOptionSetting: {OptionName: EFSVolumeName, DefaultValue: "EB_EFS_Volume"}
      MountTargetSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Security group for mount target
          SecurityGroupIngress:
          - FromPort: '2049'
            IpProtocol: tcp
            SourceSecurityGroupId:
              Fn::GetAtt: [AWSEBSecurityGroup, GroupId]
            ToPort: '2049'
          VpcId:
            Fn::GetOptionSetting: {OptionName: VPCId}
      MountTargetUSWest2a:
        Type: AWS::EFS::MountTarget
        Properties:
          FileSystemId: {Ref: FileSystem}
          SecurityGroups:
          - {Ref: MountTargetSecurityGroup}
          SubnetId:
            Fn::GetOptionSetting: {OptionName: SubnetUSWest2a}
      MountTargetUSWest2b:
        Type: AWS::EFS::MountTarget
        Properties:
          FileSystemId: {Ref: FileSystem}
          SecurityGroups:
          - {Ref: MountTargetSecurityGroup}
          SubnetId:
            Fn::GetOptionSetting: {OptionName: SubnetUSWest2b}
      MountTargetUSWest2c:
        Type: AWS::EFS::MountTarget
        Properties:
          FileSystemId: {Ref: FileSystem}
          SecurityGroups:
          - {Ref: MountTargetSecurityGroup}
          SubnetId:
            Fn::GetOptionSetting: {OptionName: SubnetUSWest2c}
    


    AWS has instructions for automatically creating and mounting an EFS filesystem on Elastic Beanstalk. They can be found here

    These instructions link to two config files to be customized and placed in .ebextensions folder of your deployment package.

    1. storage-efs-createfilesystem.config
    2. storage-efs-mountfilesystem.config

    The file storage-efs-mountfilesystem.config needs to be further modified to work with Docker containers. Add the following command:

    02_restart:
      command: "service docker restart"
    

    And for multi-container environments Elastic Container Service has to be restarted as well (it was killed when docker was restarted above):

    03_start_eb:
      command: |
          start ecs
          start eb-docker-events
          sleep 120
      test: sh -c "[ -f /etc/init/ecs.conf ]"
    

    so the complete commands section of storage-efs-mountfilesystem.config is:

    commands:
      01_mount:
        command: "/tmp/mount-efs.sh"
      02_restart:
        command: "service docker restart"
      03_start_eb:
        command: |
            start ecs
            start eb-docker-events
            sleep 120
        test: sh -c "[ -f /etc/init/ecs.conf ]"
    

    The reason this does not work “out-of-the-box” is that the Docker daemon is started by the EC2 instance before the commands in .ebextensions are run. The startup order is:

    1. Start the Docker daemon
    2. In multi-container Docker environments, start the Elastic Container Service agent
    3. Run the commands in .ebextensions
    4. Run the container app(s)

    At step 1, the filesystem view that the Docker daemon provides to containers is fixed. Therefore, changes to the host filesystem made during step 3 are not reflected in the container’s view.

    One strange effect is that the container’s view of the mount directory is the one from before the filesystem was mounted on the host, while the host sees the mounted filesystem. A file written by the container is therefore written to the host directory underneath the mount, not to the mounted filesystem. Unmounting the filesystem on the EC2 host exposes the files the container wrote into the mount directory.
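
    One way to detect this shadowing is to compare device IDs: a directory that really is a mount point lives on a different filesystem (device number) than its parent, which is essentially what the `mountpoint` utility checks. A minimal sketch using GNU `stat`, to be run on the host and inside the container and compared:

```shell
# Report whether a directory sits on a different device than its parent,
# i.e. whether it is a real mount point from this process's point of view.
is_mountpoint() {
    dir="$1"
    [ "$(stat -c %d "$dir")" != "$(stat -c %d "$dir/..")" ]
}

# Example on a typical Linux system: /proc is its own (procfs) mount.
is_mountpoint /proc && echo "/proc is a mount point"
```

    If the check succeeds for /efs-mount-point on the host but fails inside the container, the container is writing to the plain host directory underneath the mount.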
