Docker: How to get the exact time the Docker host receives a request & the exact time the Docker container receives the request?

I have a Django application running inside a Docker container. For some reason the application has become very slow, and I want to profile it.

I started by checking the Apache and nginx logs, but I want a fuller picture: how can I get the exact time the Docker host receives the request and the exact time the Docker container receives it?

Any help will be awesome!

One Solution

    The way I have seen this work in the past is to add a custom header containing the current timestamp, including milliseconds, to each request as you reverse proxy it. In your case this would be done in your nginx config. Something like this:

    # $msec is the current time in seconds with millisecond resolution
    proxy_set_header X-Request-Start "t=${msec}";
    

    Then on the Apache side, before it processes the request, you can do the same thing.

    For Apache:

    # %t expands to "t=<microseconds since the epoch>" at the time the request was received
    RequestHeader set X-Request-Start-2 "%t"
    

    You could even record when the response is finished and have three points in time to compare.

    Then, in your logs, in Django, or in your metrics-gathering system, you can compare the timestamps to find out how long it takes a request to go from nginx to Apache; that is the request queue time. It should be very short, but if Apache isn't tuned correctly, requests can sit in a queue while waiting for others to be processed.
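
    On the Django side, a minimal middleware sketch along these lines can read the two headers and log the gaps. The header names match the nginx and Apache directives above, but the logger name, middleware name, and the unit normalisation (nginx's $msec is in seconds, Apache's %t is in microseconds) are assumptions you may need to adjust:

    import logging
    import time

    logger = logging.getLogger("request_timing")  # assumed logger name


    def _parse_timestamp(header_value):
        """Parse a 't=<number>' header into epoch seconds, or return None."""
        if not header_value:
            return None
        raw = header_value.split("=", 1)[-1].strip()
        try:
            value = float(raw)
        except ValueError:
            return None
        # Apache's %t is microseconds since the epoch; nginx's $msec is
        # seconds with millisecond resolution. Normalise large values.
        if value > 1e12:
            value /= 1e6
        return value


    class RequestTimingMiddleware:
        """Logs the time spent between nginx, Apache, and Django per request."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            django_start = time.time()
            nginx_start = _parse_timestamp(request.META.get("HTTP_X_REQUEST_START"))
            apache_start = _parse_timestamp(request.META.get("HTTP_X_REQUEST_START_2"))

            response = self.get_response(request)

            if nginx_start and apache_start:
                logger.info(
                    "nginx->apache: %.1f ms, apache->django: %.1f ms, django: %.1f ms",
                    (apache_start - nginx_start) * 1000,
                    (django_start - apache_start) * 1000,
                    (time.time() - django_start) * 1000,
                )
            return response

    If you go this route, put the middleware near the top of MIDDLEWARE in settings.py so the Django timestamp is captured as early as possible in the request cycle.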

    You can also use something like New Relic, which can break down all the details of a request and show you the results in a nice graph. They even have a free tier that does what you are looking for, and they support Docker.
