Does virtualenv serve a purpose (in production) when using docker?

For development we use virtualenv to have an isolated environment when it comes to dependencies. From this question it seems deploying Python applications in a virtualenv is recommended.

Now we’re starting to use docker for deployment. This provides a more isolated environment so I’m questioning the use of virtualenv inside a docker container. In the case of a single application I do not think virtualenv has a purpose as docker already provides isolation. In the case where multiple applications are deployed on a single docker container, I do think virtualenv has a purpose as the applications can have conflicting dependencies.

  • Should virtualenv be used when a single application is deployed in a docker container?
  • Should docker contain multiple applications or only one application per container?
  • If so, should virtualenv be used when deploying a container with multiple applications?

3 Solutions collected from the web for “Does virtualenv serve a purpose (in production) when using docker?”

    Virtualenv was created long before docker. Today, I lean towards docker instead of virtualenv for these reasons:

    • Virtualenv still means people consuming your product need to download eggs. With docker, they get something which is “known to work”. No strings attached.
    • Docker can do much more than virtualenv (like create a clean environment when you have products that need different Python versions).
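As a sketch of the second point, pinning a Python version is just a matter of choosing the base image; each product gets its own interpreter with no shared install on the host (the image tag and app.py here are illustrative, not from the answer):

```dockerfile
# One product can run on Python 3.8 while another runs on 3.12,
# each in its own image, with nothing shared on the host.
FROM python:3.8-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```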

    The main drawback for Docker today is poor Windows support.

    As for “how many apps per container”, the usual policy is 1.

Yes. You should still use virtualenv. Also, you should be building wheels instead of eggs now. Finally, keep your Docker image lean and efficient: build your wheels in a container that has the full build toolchain, and install only the finished wheels, not the build tools, into your application container.
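That build/runtime split can be sketched as a multi-stage Dockerfile (the image tags and file layout are assumptions, not from the answer):

```dockerfile
# Build stage: has compilers and headers, builds a wheel for every dependency.
FROM python:3.12 AS build
WORKDIR /src
COPY requirements.txt .
RUN pip wheel --wheel-dir=/wheels -r requirements.txt

# Runtime stage: slim base, installs only the prebuilt wheels,
# so no build tools ever land in the application image.
FROM python:3.12-slim
COPY --from=build /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links=/wheels -r requirements.txt \
    && rm -rf /wheels
COPY . /app
CMD ["python", "/app/main.py"]
```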

    You should read this excellent article.

The key takeaway is:

    It’s true that in many cases, perhaps even most, simply installing
    stuff into the system Python with Pip works fine; however, for more
    elaborate applications, you may end up wanting to invoke a tool
    provided by your base container that is implemented in Python, but
    which requires dependencies managed by the host. By putting things
    into a virtualenv regardless, we keep the things set up by the base
    image’s package system tidily separated from the things our
application is building, which means that there should be no unforeseen
    interactions, regardless of how complex the application’s usage of
    Python might be.
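The pattern the article describes, a virtualenv inside the image to keep application packages separate from the base image's system Python, can be sketched like this (the /opt/venv path and base image are assumptions):

```dockerfile
FROM python:3.12-slim
# Create a virtualenv so application dependencies stay cleanly
# separated from the base image's system Python packages.
RUN python -m venv /opt/venv
# Putting the venv first on PATH makes "python" and "pip" resolve to it
# for every later RUN instruction and for the running container.
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt
```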

Introducing virtualenv is very easy, so I’d say start without it in your docker container.

If the need arises, you can introduce it later. Running “pip freeze > requirements.txt” will list all your installed Python packages, so the environment can be reproduced.
However, I doubt you’ll ever need virtualenv inside a docker container, as creating another container is usually the preferable alternative.
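For example (run in whatever environment you want to capture; the resulting package list will vary):

```shell
# Record the exact version of every installed package.
python3 -m pip freeze > requirements.txt
# The same set can later be recreated elsewhere with:
#   pip install -r requirements.txt
cat requirements.txt
```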

    I would not recommend having more than one application in a single container. When you get to this point, your container is doing too much.
