Direct-LVM stops working after reboot

In order to run Docker in production, I am following the steps in

Everything seems to work fine, but after a reboot everything breaks. Even if I remove everything from /var/lib/docker and run lvremove, vgremove, and pvremove, the daemon still refuses to start with: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (docker-thinpool) that already has used data blocks.

    I know there must be documentation that shows how to make direct-lvm settings persistent across reboots; something should restore the settings automatically after a reboot, but I could not find it.

    So how do I achieve persistence for my direct-lvm settings?

  2 Solutions for “Direct-LVM stops working after reboot”

    Fortunately, someone in the Docker community understood the problem and shared a solution:

    apt install -y thin-provisioning-tools
    mkdir /usr/lib/docker-storage-setup
    mkdir /etc/sysconfig
    git clone     /opt/docker-storage-setup
    cp /opt/docker-storage-setup/ /usr/bin/docker-storage-setup
    cp /opt/docker-storage-setup/docker-storage-setup.service /lib/systemd/system/docker-storage-setup.service
    cp /opt/docker-storage-setup/ /usr/lib/docker-storage-setup
    VG=docker DATA_SIZE=95%FREE STORAGE_DRIVER=devicemapper /opt/docker-storage-setup/
    systemctl enable docker-storage-setup
    lvrename docker/thinpool docker/docker-pool

    The storage option in the systemd unit file /lib/systemd/system/docker.service then needs to be updated to --storage-opt=dm.thinpooldev=/dev/mapper/docker-docker--pool.
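    As a sketch, assuming a dockerd-style ExecStart line (flag spelling per the devicemapper storage driver's dm.thinpooldev option), the relevant part of the unit file might then look like this; the doubled dash in docker--pool is device-mapper's escaping of the dash in the LV name docker-pool:

    [Service]
    ExecStart=/usr/bin/dockerd --storage-driver=devicemapper \
      --storage-opt=dm.thinpooldev=/dev/mapper/docker-docker--pool

    After editing, run systemctl daemon-reload and then systemctl restart docker so the change takes effect.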

    In a bug report Eric Paris says:

    If you are using device mapper (instead of loopback) /var/lib/docker contains metadata informing docker about the contents of the device mapper storage area. If you delete /var/lib/docker that metadata is lost. Docker is then able to detect that the thin pool has data but docker is unable to make use of that information. The only solution is to delete the thin pool and recreate it so that both the thin pool and the metadata in /var/lib/docker will be empty.
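    The recovery described in that quote can be sketched as follows, assuming the volume group and pool names from the script above (docker/docker-pool; use docker/thinpool if you have not renamed it). These commands are destructive:

    systemctl stop docker              # stop the daemon so nothing holds the pool open
    rm -rf /var/lib/docker             # discard Docker's stale devicemapper metadata
    lvremove -y docker/docker-pool     # delete the thin pool itself
    # recreate the thin pool per the direct-lvm guide, then start Docker again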

    I ran into the same problem because of the wording in the documentation you mentioned. It lists the step rm -rf /var/lib/docker.bk, and only at that point did I remove the original files, which caused the failure.

    What worked for me was pvremove -ff /dev/sda2 (my LVM drive), wiping the signatures on the LVM partitions, and recreating everything.
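    A sketch of that cleanup, assuming /dev/sda2 is the LVM physical volume as above (destructive; double-check the device name first):

    pvremove -ff /dev/sda2    # force-remove the LVM physical volume label
    wipefs -a /dev/sda2       # wipe any remaining filesystem/LVM signatures
    pvcreate /dev/sda2        # recreate the physical volume before rebuilding the VG and pool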

    I think with your settings it should already be persistent.

    For me another error occurred: after a reboot, lsblk did not show my LVM volumes, and neither did ls /dev/mapper. I am using Ubuntu, and a commit message says that its default setup doesn't (fully?) support thin provisioning. After sudo apt-get install thin-provisioning-tools, the command sudo vgchange -ay docker worked, and the volumes also survived reboots.
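    As commands, assuming the volume group is named docker as in the question:

    sudo apt-get install -y thin-provisioning-tools   # provides thin_check, which LVM needs to activate thin pools
    sudo vgchange -ay docker                          # activate all logical volumes in the "docker" VG
    ls /dev/mapper                                    # the thin pool devices should now appear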
