docker container size much greater than actual size

I am trying to build an image from debian:latest. After the build, the virtual size of the image reported by the docker images command is 1.917 GB. I logged in to check the size (du -sh /) and it's 573 MB. I am pretty sure this huge size is not normally possible. What is going on here? How do I get the correct size of the image? More importantly, when I push this repository the size is 1.9 GB, not 573 MB.

Output of du -sh /*:

    8.9M    /bin
    4.0K    /boot
    0   /dev
    1.1M    /etc
    4.0K    /home
    30M /lib
    4.0K    /lib64
    4.0K    /media
    4.0K    /mnt
    4.0K    /opt
    du: cannot access '/proc/11/task/11/fd/4': No such file or directory
    du: cannot access '/proc/11/task/11/fdinfo/4': No such file or directory
    du: cannot access '/proc/11/fd/4': No such file or directory
    du: cannot access '/proc/11/fdinfo/4': No such file or directory
    0   /proc
    427M    /root
    8.0K    /run
    3.9M    /sbin
    4.0K    /srv
    0   /sys
    8.0K    /tmp
    88M /usr
    15M /var
    

2 Solutions

    The 1.9 GB size is not the image alone, it's the image and its history. Use docker history <image> to check what takes up so much space.
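
    For example, to see how much each layer contributes (my-image:latest is a hypothetical tag, substitute your own image name):

    docker history my-image:latest   # size added by each individual layer
    docker images my-image:latest    # total (virtual) size Docker reports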

    See also Why are Docker container images so large?

    To reduce the size, you can change the way you build the image (it will depend on what you do, see the answers from the link above), use docker export (see How to flatten a Docker image?) or use other tools.
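
    If you go the export route, a minimal sketch looks like this (the container and tag names are hypothetical; note that docker import drops image metadata such as CMD and ENV, so you may need to set those again):

    docker create --name temp my-image:latest
    docker export temp | docker import - my-image:flat
    docker rm temp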

    Do you build that image via a Dockerfile? If so, take care with your RUN statements: each RUN statement creates a new image layer, which remains in the image's history and counts toward the image's total size.

    So, for instance, if one RUN statement downloads a huge archive, the next one unpacks it, and a later one cleans it up again, the archive and its extracted files still remain in the image's history:

    RUN curl <options> http://example.com/my/big/archive.tar.gz
    RUN tar xvzf <options>
    RUN <do whatever you need to do with the unpacked files>
    RUN rm archive.tar.gz
    

    In terms of image size it is more efficient to combine those steps into a single RUN statement using the && operator, like:

    RUN curl <options> http://example.com/my/big/archive.tar.gz \
        && tar xvzf <options> \
        && <do whatever you need to do with the unpacked files> \
        && rm archive.tar.gz
    

    That way you can clean up files and folders that you need during the build but not in the resulting image, and keep them out of the image's history as well. This is a quite common pattern for keeping image sizes small.

    The downside is, of course, that you no longer have a fine-grained image history whose layers you could reuse.

    Update:

    Like RUN statements, ADD statements also create new image layers. Whatever you add to an image that way stays in its history and counts toward the total image size. You cannot temporarily ADD things and remove them later so that they do not count toward the total size.

    Try to ADD as little as possible to the image, especially when you work with large files. Is there another way to fetch those files temporarily within a RUN statement, so that you can clean them up during the same RUN execution? E.g. RUN git clone <your repo> && <do stuff> && rm -rf <clone dir>?

    A good practice is to ADD only those things that are meant to stay in the image. Temporary things should be fetched and cleaned up within a single RUN statement instead, where possible.
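
    A minimal Dockerfile sketch of that practice (the paths, URL and file names are hypothetical, and it assumes curl is available in the base image):

    FROM debian:latest
    # ADD only what is meant to stay in the image
    ADD app/ /opt/app/
    # Fetch, use and delete temporary files within a single RUN,
    # so they never end up in a layer of their own
    RUN curl -o /tmp/data.tar.gz http://example.com/data.tar.gz \
        && tar -xzf /tmp/data.tar.gz -C /opt/app/ \
        && rm /tmp/data.tar.gz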
