Does virtualenv serve a purpose (in production) when using docker?

For development we use virtualenv to have an isolated environment for dependencies. From this question it seems deploying Python applications in a virtualenv is recommended.

Now we’re starting to use docker for deployment. This provides a more isolated environment so I’m questioning the use of virtualenv inside a docker container. In the case of a single application I do not think virtualenv has a purpose as docker already provides isolation. In the case where multiple applications are deployed on a single docker container, I do think virtualenv has a purpose as the applications can have conflicting dependencies.

  • Should virtualenv be used when a single application is deployed in a docker container?
  • Should docker containers hold multiple applications, or only one application per container?
  • If multiple, should virtualenv be used when deploying a container with multiple applications?

3 Answers

    Virtualenv was created long before docker. Today, I lean towards docker instead of virtualenv for these reasons:

    • Virtualenv still means people consuming your product need to download eggs. With docker, they get something which is “known to work”. No strings attached.
    • Docker can do much more than virtualenv (like create a clean environment when you have products that need different Python versions).

    The main drawback for Docker today is poor Windows support.

    As for “how many apps per container”, the usual policy is 1.

    Yes. You should still use virtualenv. Also, you should be building wheels instead of eggs now. Finally, you should make sure that you keep your Docker image lean and efficient by building your wheels in a container with the full build tools and installing no build tools into your application container.
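
    That wheel-building pattern can be sketched as a multi-stage Dockerfile; the base-image tag, the `requirements.txt` layout, and the entry point here are illustrative assumptions, not part of the answer:

    ```dockerfile
    # Builder stage: has compilers and headers, used only to build wheels
    FROM python:3.11-slim AS builder
    RUN apt-get update && apt-get install -y --no-install-recommends build-essential
    COPY requirements.txt .
    RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

    # Application stage: no build tools, installs only the prebuilt wheels
    FROM python:3.11-slim
    COPY --from=builder /wheels /wheels
    COPY requirements.txt .
    RUN pip install --no-cache-dir --no-index --find-links /wheels -r requirements.txt
    COPY . /app
    WORKDIR /app
    CMD ["python", "main.py"]
    ```

    Because the final stage never installs `build-essential`, the application image stays lean while still containing compiled dependencies.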

    You should read this excellent article: https://glyph.twistedmatrix.com/2015/03/docker-deploy-double-dutch.html

    The key takeaway is:

    It’s true that in many cases, perhaps even most, simply installing
    stuff into the system Python with Pip works fine; however, for more
    elaborate applications, you may end up wanting to invoke a tool
    provided by your base container that is implemented in Python, but
    which requires dependencies managed by the host. By putting things
    into a virtualenv regardless, we keep the things set up by the base
    image’s package system tidily separated from the things our
    application is building, which means that there should be no unforeseen
    interactions, regardless of how complex the application’s usage of
    Python might be.
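
    The quoted approach, a virtualenv inside the image regardless, can be sketched in a Dockerfile; the paths and base image here are assumptions, not taken from the article:

    ```dockerfile
    FROM python:3.11-slim

    # Create a virtualenv so the application's packages stay separate
    # from whatever the base image installed into the system Python
    RUN python -m venv /opt/venv

    # Putting the venv's bin directory first on PATH makes its python
    # and pip the defaults for all later RUN and CMD instructions
    ENV PATH="/opt/venv/bin:$PATH"

    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    COPY . /app
    WORKDIR /app
    CMD ["python", "main.py"]
    ```

    With this layout, tools shipped by the base image keep using the system Python untouched, while `pip install` only ever modifies `/opt/venv`.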

    Introducing virtualenv is very easy, so I’d say start without it in your docker container.

    If the need arises, you can install it later. Running `pip freeze > requirements.txt` will capture all your Python packages with pinned versions.
    However, I doubt you’ll ever need virtualenv inside a docker container, as creating another container is usually the preferable alternative.

    I would not recommend having more than one application in a single container. When you get to this point, your container is doing too much.
