De-allocating memory after python tensorflow workbook execution

To limit memory usage I read How to prevent tensorflow from allocating the totality of a GPU memory? and tried this code:

# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

These commands did limit memory usage, but memory is not de-allocated after the code completes. This issue describes the problem; a suggested fix is to update the driver:

"After upgrading the GPU driver from 352.79 to 367.35 (the newest one), the problem disappeared."

Unfortunately I'm not in a position to update to the latest version of the driver. Has this issue been resolved?

I also considered limiting the memory available to the docker container. The Docker documentation states "Containers can be constrained to a limited set of resources on a system (e.g. one CPU core and 1GB of memory)", but my kernel does not currently support this. Here I try to limit a new docker instance to 1GB of memory:

    nvidia-docker run -m 1024m -d -it -p 8889:8889 -v /users/user1234/jupyter:/notebooks --name tensorflow-gpu-1GB tensorflow/tensorflow:latest-cpu

But this does not appear possible, as I receive the warning:

WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.

Is there a command to free memory after a tensorflow python workbook completes?


After killing / restarting the notebook the memory is de-allocated, but how can memory be freed after completion from within the notebook?

One solution collected from the web for "De-allocating memory after python tensorflow workbook execution"

IPython and Jupyter notebooks will not free memory until you use del or the %xdel magic on your objects. The IPython documentation describes %xdel as follows:

    Delete a variable, trying to clear it from anywhere that IPython’s machinery has references to it. By default, this uses the identity of the named object in the user namespace to remove references held under other names. The object is also removed from the output history.
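    As a minimal sketch of the approach above (the variable name big_array is hypothetical, standing in for whatever large result your notebook produced):

    ```python
    import gc

    # Hypothetical large object, e.g. an array returned by a session run.
    big_array = [0.0] * 10_000_000

    # ... use big_array ...

    # Drop the reference, then prompt the garbage collector to reclaim it.
    del big_array
    gc.collect()
    ```

    In an IPython/Jupyter cell, `%xdel big_array` goes further by also removing references held in the output history, and `%reset -f` clears the entire user namespace. Note, however, that GPU memory grabbed by a TensorFlow session is generally only returned to the driver when the process exits, so within a notebook the closest equivalent is closing the session and restarting the kernel.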
