How to evaluate the CPU and memory usage for a specific command/Docker container in Linux?

I’m composing a YAML file for scripts that run in Docker containers orchestrated by Kubernetes. Is there a way to evaluate the resource utilization of a specific command or container, or what’s the best practice for setting the CPU and memory limits of pods?

Edit

  • Most of these scripts finish quickly, so it’s hard to capture resource info while they run. I’m looking for a tool that reports the maximum CPU and memory usage, the way `time` prints the execution time.

  2 Solutions

    There are some good answers in the question: Peak memory usage of a linux/unix process

    TL;DR: /usr/bin/time -v <command> or use valgrind.
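    A sketch of that in practice: GNU time’s verbose report includes a “Maximum resident set size” line, which you can pull out after the command exits. This assumes GNU time is installed at `/usr/bin/time` (the shell builtin `time` does not accept `-v`), and `my_script.sh` is a placeholder for your own script:

```shell
#!/bin/sh
# Run a short-lived command under GNU time and extract its peak memory.
# GNU time writes its report to stderr, hence the 2>&1 redirect.
# The line of interest looks like: "Maximum resident set size (kbytes): N"
/usr/bin/time -v ./my_script.sh 2>&1 \
  | awk -F': ' '/Maximum resident set size/ {print $2 " kB"}'
```

    The printed figure is the peak resident set size in kilobytes, which maps reasonably well onto a Kubernetes memory limit once you add some headroom.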

    That should give you an idea of how much memory to assign as the limit for your app, but CPU is a different beast. If your app is CPU-bound, it will use all the CPU you give it no matter what limit you set. Also, Kubernetes assigns cores (or millicores) to apps, so knowing what percentage of CPU was used on a particular machine isn’t always very useful, as that doesn’t readily translate into cores.

    You should give your app as many CPU cores as you feel comfortable with and that allows your app to succeed in an acceptable amount of time. That will depend on cost and how many cores you have available in your cluster. It also depends a bit on the architecture of your app. For instance, if the application can’t take advantage of multiple cores then there isn’t much use in giving it more than 1.
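    For reference, a minimal sketch of how those choices are expressed as requests and limits in a pod spec (the names, image, and numbers here are illustrative placeholders, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-script          # hypothetical name
spec:
  containers:
  - name: worker
    image: my-script:latest   # hypothetical image
    resources:
      requests:
        cpu: "500m"        # half a core, used by the scheduler for placement
        memory: "64Mi"
      limits:
        cpu: "1"           # throttled if it tries to use more than one core
        memory: "128Mi"    # the container is killed if it exceeds this
```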

    If you have any longer-running apps, you could try installing Kubedash. If Heapster is installed, Kubedash uses Kubernetes’ built-in metrics to show you average and maximum CPU/memory utilization, which helps a lot when figuring out what requests and limits to assign to a particular application.

    Kubedash screenshot

    Hope that helps!

    You can view statistics for container(s) using the docker stats command.

    For example;

    docker stats containera containerb
    
    CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O
    containera          0.00%               24.15 MB / 1.041 GB   2.32%               1.8 MB / 79.37 kB     0 B / 81.92 kB
    containerb          0.00%               24.95 MB / 1.041 GB   2.40%               1.798 MB / 80.72 kB   0 B / 81.92 kB
    

    Or, see processes running in a container using docker top <container>

    docker top containera
    UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
    root                4558                2850                0                   21:13               ?                   00:00:00            sh -c npm install http-server -g && mkdir -p /public && echo "welcome to containera" > /public/index.html && http-server -a 0.0.0.0 -p 4200
    root                4647                4558                0                   21:13               ?                   00:00:00            node /usr/local/bin/http-server -a 0.0.0.0 -p 4200
    

    Limiting resources

    Docker Compose (like Docker itself) lets you set resource limits on a container, for example capping the maximum amount of memory used, CPU shares, and so on.

    See the resource-limit options in the docker-compose YAML reference, and the “Runtime constraints on resources” section of the docker run reference.
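    As a sketch, in the version 2 Compose file format those constraints look like the following (service name and image are placeholders; in the version 3 format these keys move under `deploy.resources`):

```yaml
version: "2"
services:
  app:
    image: my-script:latest   # hypothetical image
    mem_limit: 128m           # hard memory cap for the container
    memswap_limit: 128m       # memory + swap; equal values disable swap
    cpu_shares: 512           # relative CPU weight (default is 1024)
```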
