De-allocating memory after Python TensorFlow notebook execution

To limit memory usage I read "How to prevent tensorflow from allocating the totality of a GPU memory?" and tried this code:

# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

These commands did limit memory usage, but memory is not de-allocated after code completion. This issue is described in https://github.com/tensorflow/tensorflow/issues/3701, where a suggested fix is to update the driver:

"After upgrading the GPU driver from 352.79 to 367.35 (the newest one), the problem disappeared."

Unfortunately I'm not in a position to update to the latest version of the driver. Has this issue been resolved?
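For completeness, the linked question also discusses the allow_growth option, which makes TensorFlow allocate GPU memory on demand rather than grabbing a fixed fraction up front. A minimal sketch of that alternative:

    import tensorflow as tf

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True  # grow the allocation as needed
    sess = tf.Session(config=config)

Note that, like the fraction option, this only limits what TensorFlow takes; the process does not hand memory back to the GPU while it is alive.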

I also considered limiting the memory available to the Docker container. The NVIDIA post https://devblogs.nvidia.com/parallelforall/nvidia-docker-gpu-server-application-deployment-made-easy/ states that "Containers can be constrained to a limited set of resources on a system (e.g one CPU core and 1GB of memory)". Here I try to give a new Docker instance a 1GB memory limit:

    nvidia-docker run -m 1024m -d -it -p 8889:8889 -v /users/user1234/jupyter:/notebooks --name tensorflow-gpu-1GB tensorflow/tensorflow:latest-gpu

But this does not appear to work fully, as I receive the warning:

    WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.

(The warning means the kernel lacks swap accounting: the -m limit still caps memory, but swap usage is not limited.)

Is there a command to free memory after TensorFlow Python notebook completion?

Update

After killing/restarting the notebook kernel, the memory is de-allocated. But how can the memory be freed after completion, from within the notebook?
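One workaround that follows from this observation, sketched here under the assumption of a Linux host (where multiprocessing forks by default) and a notebook kernel that has not yet imported TensorFlow, is to run the TensorFlow job in a short-lived child process: its GPU memory is released when it exits, while the notebook keeps running.

    import multiprocessing as mp

    def run_model(queue):
        # Import TensorFlow inside the child only, so the notebook
        # kernel itself never initialises CUDA or holds GPU memory.
        import tensorflow as tf
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
        with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
            queue.put(sess.run(tf.constant(42.0)))  # stand-in for the real job

    queue = mp.Queue()
    p = mp.Process(target=run_model, args=(queue,))
    p.start()
    print(queue.get())
    p.join()  # everything the child allocated on the GPU is freed here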

One solution collected from the web for "De-allocating memory after Python TensorFlow notebook execution"

IPython and Jupyter notebooks will not free memory unless you use del or %xdel on your objects:

    https://ipython.org/ipython-doc/3/interactive/magics.html

    %xdel:
    Delete a variable, trying to clear it from anywhere that IPython’s machinery has references to it. By default, this uses the identity of the named object in the user namespace to remove references held under other names. The object is also removed from the output history.
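For example, in a notebook cell (a minimal sketch; sess and tensor are placeholder names):

    import tensorflow as tf

    sess = tf.Session()
    tensor = tf.constant([1.0, 2.0])
    print(sess.run(tensor))

    sess.close()  # release the session's resources first
    del tensor    # plain Python deletion of the reference
    %xdel sess    # also clears IPython's hidden references, e.g. the Out[] cache

Note that %xdel is an IPython line magic, so it works in a notebook cell but not in a plain Python script.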
