Direct-LVM stops working after reboot

In order to run Docker in production, I am following the steps in

Everything seems to work fine, but after a reboot everything breaks. Even if I remove everything from /var/lib/docker and run lvremove, vgremove and pvremove, the daemon still fails with Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (docker-thinpool) that already has used data blocks, and refuses to start.

    I know there has to be documentation somewhere that shows how to make direct-lvm settings persistent across reboots, something that automatically restores the settings after a reboot, but I could not find any.

    So how do I achieve persistence for my direct-lvm settings?

2 solutions collected from the web for “Direct-LVM stops working after reboot”

    Fortunately, someone in the Docker community understood the problem and shared a solution. The solution is:

    # install the userspace tools needed for thin provisioning
    apt install -y thin-provisioning-tools
    # create the directories docker-storage-setup expects
    mkdir /usr/lib/docker-storage-setup
    mkdir /etc/sysconfig
    # fetch docker-storage-setup and install its script, service unit and library
    git clone     /opt/docker-storage-setup
    cp /opt/docker-storage-setup/ /usr/bin/docker-storage-setup
    cp /opt/docker-storage-setup/docker-storage-setup.service /lib/systemd/system/docker-storage-setup.service
    cp /opt/docker-storage-setup/ /usr/lib/docker-storage-setup
    # run the setup once with the desired volume group and storage driver
    VG=docker DATA_SIZE=95%FREE STORAGE_DRIVER=devicemapper /opt/docker-storage-setup/
    # make the service run on every boot, and rename the pool to the name it expects
    systemctl enable docker-storage-setup
    lvrename docker/thinpool docker/docker-pool
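    To keep these values across reboots without passing them on the command line every time, docker-storage-setup reads its configuration from /etc/sysconfig/docker-storage-setup. A minimal file matching the values used above might look like this (a sketch; the exact keys supported depend on your docker-storage-setup version):

        VG=docker
        DATA_SIZE=95%FREE
        STORAGE_DRIVER=devicemapper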

    The storage option in the systemd unit file /lib/systemd/system/docker.service then needs to be updated to --storage-opt=dm.thinpooldev=/dev/mapper/docker-docker--pool.
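    For reference, the updated ExecStart line might then look like the following (a sketch assuming a dockerd-based unit; keep whatever other flags your unit already has, and run systemctl daemon-reload after editing it):

        ExecStart=/usr/bin/dockerd --storage-driver=devicemapper \
            --storage-opt=dm.thinpooldev=/dev/mapper/docker-docker--pool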

    In a bug report Eric Paris says:

    If you are using device mapper (instead of loopback), /var/lib/docker contains metadata informing Docker about the contents of the device mapper storage area. If you delete /var/lib/docker, that metadata is lost. Docker is then able to detect that the thin pool has data, but it is unable to make use of that information. The only solution is to delete the thin pool and recreate it so that both the thin pool and the metadata in /var/lib/docker will be empty.
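    Following that advice, deleting and recreating the thin pool looks roughly like this (a sketch based on the official devicemapper setup steps; the volume group name and the sizes are assumptions to adapt to your system, and these commands destroy the pool's contents):

        # stop Docker and remove the stale metadata
        systemctl stop docker
        rm -rf /var/lib/docker

        # remove the old thin pool
        lvremove -f docker/thinpool

        # recreate the data and metadata volumes and convert them into a thin pool
        lvcreate --wipesignatures y -n thinpool docker -l 95%VG
        lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
        lvconvert -y --zero n -c 512K \
            --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta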

    I ran into the same problem because of the documentation’s wording you mentioned. The instructions include the step rm -rf /var/lib/docker.bk, and it was only at that point that I removed the original files, which caused the failure.

    Running pvremove -ff /dev/sda2 (my LVM drive), wiping the signatures on the LVM partitions, and recreating everything worked for me.

    I think with your settings it should already be persistent.

    For me another error occurred: after a reboot, lsblk did not show my LVM volumes, and neither did ls /dev/mapper. I am using Ubuntu, and a commit message says that its default setup doesn’t (fully?) support thin provisioning. After sudo apt-get install thin-provisioning-tools, the command sudo vgchange -ay docker worked, and so did reboots.
