Getting smaller elapsed milliseconds while generating more HttpWebRequests

I am stress testing a server by creating HttpWebRequest instances inside a task factory. When I check the response time for different numbers of threads, the response time is long for a single thread, but as the number of threads increases the response time decreases. What could be the reason?

The code looks like this:

// tsk is assumed to be a Task[] and request an HttpWebRequest created beforehand (not shown)
for (int i = 0; i < tsk.Length; i++)
{
    tsk[i] = Task.Factory.StartNew((object obj) =>
    {
        // time a single request/response round trip
        System.Diagnostics.Stopwatch watch = System.Diagnostics.Stopwatch.StartNew();
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        watch.Stop();
    }, i);
}

      



2 answers


The reason for this behavior is simple: the more tasks you run, the less CPU time each of them gets to do its work. Your computer has a limited number of CPU cores, so beyond a certain point you run into thread starvation, which results in higher response times.

You really need to move the measurement to the server itself: record the time when the request starts being handled and the time when the response is finished. All the other time says nothing about your server's performance, only about the speed of the surrounding infrastructure.
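
As a rough illustration only (the handler type and header name here are my assumptions, not part of the answer), server-side timing could look like this:

using System.Diagnostics;
using System.Web;

public class TimedHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // start timing when the server begins handling the request
        Stopwatch serverWatch = Stopwatch.StartNew();

        // ... the actual request processing goes here ...

        // stop just before the response is written and expose the measurement
        serverWatch.Stop();
        context.Response.AddHeader("X-Server-Time-Ms",
            serverWatch.ElapsedMilliseconds.ToString());
    }
}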

I also highly recommend using the Parallel class instead of raw TPL tasks for stress testing, as it is better suited for such concurrent operations.
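
For example, here is a minimal sketch of the same stress loop using Parallel.For (the URL and request count are assumptions for illustration):

using System;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;

class StressTest
{
    static void Main()
    {
        const string url = "http://localhost/test";   // hypothetical target URL
        const int requestCount = 100;                  // hypothetical number of requests

        Parallel.For(0, requestCount, i =>
        {
            Stopwatch watch = Stopwatch.StartNew();

            // create a fresh request per iteration; a single HttpWebRequest cannot be reused
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                watch.Stop();
                Console.WriteLine("Request {0}: {1} ms ({2})",
                    i, watch.ElapsedMilliseconds, response.StatusCode);
            }
        });
    }
}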



Another problem in your code is that you are using the loop counter variable i inside the closure, so you should copy it into a local variable:

for (int i = 0; i < tsk.Length; i++)
{
    // copy the loop counter so each task works with its own value
    int localI = i;
    tsk[localI] = Task.Factory.StartNew((object obj) =>
    {
        System.Diagnostics.Stopwatch watch = System.Diagnostics.Stopwatch.StartNew();
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        watch.Stop();
    }, localI);
}

      



This is because the measured response time includes both the time a request waits in the queue before it is processed and the time it takes to actually process it.



With more threads handling requests, the processing time for each request stays roughly the same (until you have so many threads that the CPU is at 100%), but the queue time becomes shorter. For example, if a request takes 50 ms to process but first waits 200 ms behind other queued requests, the measured response time is 250 ms; adding threads mainly shrinks that 200 ms wait.







