Add pip requirements to a Docker image at runtime

I want to be able to add some extra requirements to a Docker image I have created myself. My strategy is to build the image from a Dockerfile with a CMD that executes "pip install -r" against a volume mounted at runtime.

This is my dockerfile:

    FROM ubuntu:14.04
    RUN apt-get update
    RUN apt-get install -y python-pip python-dev build-essential
    RUN pip install --upgrade pip
    WORKDIR /root
    CMD ["pip install -r /root/sourceCode/requirements.txt"]

    Having that Dockerfile, I build the image:

    sudo docker build -t test .

    And finally I try to attach my new requirements using this command:

    sudo docker run -v $(pwd)/sourceCode:/root/sourceCode -it test /bin/bash

    My local folder sourceCode contains a valid requirements.txt file (it has only one line, with the value "gunicorn").
    When I get the prompt I can see that the requirements file is there, but if I run pip freeze the gunicorn package is not listed.

    Why is the requirements.txt file being mounted correctly while the pip command is not working?

    Thank you.

  3 Answers


    The pip command isn’t running because you are telling Docker to run /bin/bash instead:

    docker run -v $(pwd)/sourceCode:/root/sourceCode -it test /bin/bash

    Longer explanation

    Your Dockerfile sets no ENTRYPOINT, so what runs at startup is decided entirely by CMD (or by a command passed to docker run, which overrides CMD). You do set CMD in your Dockerfile. When you run (ignoring the volume for brevity)

    docker run -it test

    what Docker tries to execute inside the container is

    pip install -r /root/sourceCode/requirements.txt

    so at first glance it looks like it will run pip when the container starts. (Note: because your CMD is a JSON array with a single element, Docker would actually look for one program whose name is that entire string; the shell form, CMD pip install -r /root/sourceCode/requirements.txt, avoids this by wrapping the command in /bin/sh -c.)
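The shell-form/exec-form difference can be checked locally without Docker at all (a minimal sketch; the echoed strings are arbitrary):

```shell
# Shell form is wrapped as /bin/sh -c "<command>", so a full command line works:
/bin/sh -c 'echo shell form ran'

# Exec form passes each JSON array element as one argv entry; a single element
# containing spaces is looked up as one (nonexistent) program name:
'echo exec form' 2>/dev/null || echo "no program named 'echo exec form'"
```

The second line fails to find a program literally named `echo exec form`, which is exactly what happens to a one-element exec-form CMD containing spaces.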

    Now let’s take a look at the command you actually used to start the container (again, ignoring volumes)

    docker run -it test /bin/bash

    what actually executes inside the container is

    /bin/bash

    The CMD arguments you specified in your Dockerfile are overridden by the COMMAND you specify on the command line. Recall that the docker run command takes this form:

    docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

    Further reading

    1. This answer has a to-the-point explanation of what the CMD and ENTRYPOINT instructions do:

      The ENTRYPOINT specifies a command that will always be executed when the container starts.

      The CMD specifies arguments that will be fed to the ENTRYPOINT.

    2. This blog post on the difference between the ENTRYPOINT and CMD instructions is also worth reading.
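The relationship in point 1 can be sketched in a Dockerfile (illustrative only, not from the original question):

```dockerfile
FROM ubuntu:14.04
# ENTRYPOINT always runs when the container starts;
# CMD supplies its default arguments, which
# `docker run <image> <args>` can override.
ENTRYPOINT ["pip"]
CMD ["install", "-r", "/root/sourceCode/requirements.txt"]
```

With this image, a plain `docker run <image>` runs the install, while `docker run <image> freeze` would run `pip freeze` instead, because the command-line arguments replace CMD but are still fed to the ENTRYPOINT.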

    You may change the last statement, i.e. the CMD, to the following, specifying the absolute path of pip:

    CMD ["/usr/bin/pip", "install", "-r", "/root/sourceCode/requirements.txt"]

    UPDATE: adding an additional answer based on the comments.

    One thing must be noted: if a customized image with additional requirements is needed, those requirements should be part of the image rather than installed at run time.

    Using the base image below to test:

    docker pull colstrom/python:legacy

    So, installing packages should be done with the RUN instruction in the Dockerfile,
    and CMD should be whatever app you actually want to run as the process inside the container.
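That division of labour looks roughly like this (a sketch; gunicorn as the app process follows from the question's requirements.txt, and `app:application` is a placeholder module path):

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python-pip
# Build time: bake the dependencies into the image
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Run time: CMD is the actual application process
CMD ["gunicorn", "app:application"]
```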

    Just to check whether the base image has any pip packages, run the command below; it returns nothing.

    docker run --rm --name=testpy colstrom/python:legacy /usr/bin/pip freeze

    Here is a simple example to demonstrate the same:


    FROM colstrom/python:legacy
    COPY requirements.txt /requirements.txt
    RUN ["/usr/bin/pip", "install", "-r", "/requirements.txt"]
    CMD ["/usr/bin/pip", "freeze"]



    Build the image with the pip packages baked in. Place the Dockerfile and requirements.txt file in a fresh directory.

    D:\dockers\py1>docker build -t pypiptest .
    Sending build context to Docker daemon 3.072 kB
    Step 1 : FROM colstrom/python:legacy
     ---> 640409fadf3d
    Step 2 : COPY requirements.txt /requirements.txt
     ---> abbe03846376
    Removing intermediate container c883642f06fb
    Step 3 : RUN /usr/bin/pip install -r /requirements.txt
     ---> Running in 1987b5d47171
    Collecting selenium (from -r /requirements.txt (line 1))
      Downloading selenium-3.0.1-py2.py3-none-any.whl (913kB)
    Installing collected packages: selenium
    Successfully installed selenium-3.0.1
     ---> f0bc90e6ac94
    Removing intermediate container 1987b5d47171
    Step 4 : CMD /usr/bin/pip freeze
     ---> Running in 6c3435177a37
     ---> dc1925a4f36d
    Removing intermediate container 6c3435177a37
    Successfully built dc1925a4f36d
    SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

    Now run the image.
    If you don't pass any external command, the container takes its command from CMD, which just shows the list of installed pip packages; in this case, selenium.

    D:\dockers\py1>docker run -itd --name testreq pypiptest
    D:\dockers\py1>docker logs testreq

    So, the above shows that the package was installed successfully.

    Hope this is helpful.

    Using the concepts that @Rao and @ROMANARMY explained in their answers, I finally found a way of doing what I wanted: adding extra Python requirements to a self-created Docker image.

    My new Dockerfile is as follows:

    FROM ubuntu:14.04
    RUN apt-get update
    RUN apt-get install -y python-pip python-dev build-essential
    RUN pip install --upgrade pip
    WORKDIR /root
    # the helper script name below is illustrative
    COPY install_requirements.sh .
    CMD ["/bin/bash", "install_requirements.sh"]

    The CMD now executes a shell script that has the following content:

    pip install -r /root/sourceCode/requirements.txt
    pip freeze > /root/sourceCode/freeze.txt
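As an aside, the same effect is possible without a helper script by using the shell form of CMD (a sketch, not from the original post; shell form is wrapped in /bin/sh -c, so the && chain works):

```dockerfile
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
WORKDIR /root
# Shell-form CMD runs via /bin/sh -c, so no separate script file is needed
CMD pip install -r /root/sourceCode/requirements.txt && pip freeze > /root/sourceCode/freeze.txt
```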

    And finally I build and run the image using these commands:

    docker build --tag test .
    docker run -itd --name container_test -v $(pwd)/sourceCode:/root/sourceCode test

    Note that there is no command at the end of the docker run line, so the CMD from the Dockerfile is what runs.

    As I explained at the beginning of the post, I have a local folder named sourceCode that contains a valid requirements.txt file with only one line, "gunicorn".

    So I finally have the ability to add some extra requirements (the gunicorn package in this example) to a given Docker image.

    After building and running my experiment, if I check the logs (docker logs container_test) I see something like this:

    Downloading gunicorn-19.6.0-py2.py3-none-any.whl (114kB)
        100% |################################| 122kB 1.1MB/s 
    Installing collected packages: gunicorn

    Furthermore, the container has created a freeze.txt file inside the mounted volume that contains all the installed pip packages, including the desired gunicorn.


    Now I have some problems with the permissions of the newly created file, but that will probably be a new post.

    Thank you!
