How to evaluate the CPU and memory usage for a specific command/container in Linux?

I’m composing a YAML file for scripts that run in Docker and are orchestrated by Kubernetes. Is there a way to measure the resource utilization of a specific command or container, and what’s the best practice for setting CPU and memory limits for pods?


Most of these scripts run for only a short time, so it’s hard to capture their resource usage. I’m trying to find a tool that reports the maximum CPU and memory usage of a command, the way time prints the execution time.

2 Solutions

    There are some good answers in the question: Peak memory usage of a linux/unix process

    TL;DR: /usr/bin/time -v <command> or use valgrind.
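For example (this assumes GNU time is installed at /usr/bin/time; the shell built-in time does not support -v, and sleep 1 here is just a stand-in for your script):

    # Run the command under GNU time; the verbose report includes
    # peak memory (maximum resident set size), CPU time, and CPU %.
    /usr/bin/time -v sleep 1

Look for the "Maximum resident set size (kbytes)", "User time (seconds)", and "Percent of CPU this job got" lines in the output.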

That should help you get an idea of how much memory to assign as the limit for your app, but CPU is a different beast. If your app is CPU-bound, it will use all the CPU you give it no matter what limit you set. Also, in Kubernetes you assign cores (or millicores) to apps, so knowing what percentage of the CPU was used on any particular machine isn’t always terribly useful, as it won’t readily translate to cores.

You should give your app as many CPU cores as you’re comfortable with and as it needs to finish in an acceptable amount of time. That depends on cost and on how many cores are available in your cluster. It also depends a bit on the architecture of your app: if the application can’t take advantage of multiple cores, there isn’t much point in giving it more than one.
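In the pod spec, those choices end up as requests and limits on each container; a minimal sketch (the image name and the values here are illustrative assumptions, not recommendations):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app
    spec:
      containers:
      - name: app
        image: my-app:latest      # hypothetical image
        resources:
          requests:
            cpu: 250m             # 0.25 core reserved for scheduling
            memory: 128Mi
          limits:
            cpu: "1"              # throttled above 1 core
            memory: 256Mi         # OOM-killed above this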

In case you have any longer-running apps, you could try installing Kubedash. If you have Heapster installed, Kubedash uses Kubernetes’ built-in metrics to show you average and max CPU/memory utilization. It helps a lot when trying to figure out what requests and limits to assign to a particular application.

[Kubedash screenshot]

    Hope that helps!

    You can view statistics for container(s) using the docker stats command.

    For example;

    docker stats containera containerb
    CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O
    containera          0.00%               24.15 MB / 1.041 GB   2.32%               1.8 MB / 79.37 kB     0 B / 81.92 kB
    containerb          0.00%               24.95 MB / 1.041 GB   2.40%               1.798 MB / 80.72 kB   0 B / 81.92 kB

    Or, see processes running in a container using docker top <container>

    docker top containera
    UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
    root                4558                2850                0                   21:13               ?                   00:00:00            sh -c npm install http-server -g && mkdir -p /public && echo "welcome to containera" > /public/index.html && http-server -a -p 4200
    root                4647                4558                0                   21:13               ?                   00:00:00            node /usr/local/bin/http-server -a -p 4200

    Limiting resources

Docker Compose (like Docker itself) allows you to set resource limits for a container, for example limiting the maximum amount of memory used, CPU shares, etc.

    Read this section in the docker-compose yaml reference, and the docker run reference on “Runtime constraints on resources”
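As a sketch, using the Compose version 2 file format (the service name, image, and values are illustrative assumptions):

    # docker-compose.yml (version 2 format), illustrative limits
    version: "2"
    services:
      app:
        image: my-app:latest   # hypothetical image
        mem_limit: 256m        # hard cap on memory
        memswap_limit: 256m    # cap on memory + swap
        cpu_shares: 512        # relative CPU weight (default 1024)
        cpuset: "0,1"          # pin to CPUs 0 and 1

These map onto the same cgroup controls as the corresponding docker run flags (--memory, --cpu-shares, --cpuset-cpus).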
