HTTP server benchmarking with ab vs wrk: why such a difference in results?

I am trying to see how many requests an HTTP server can handle on my machine, so I ran some benchmarks, but the difference between the two tools is so large that I am confused.

First I benchmark with ab, running this command:

$ ab -n 100000 -c 1000 http://127.0.0.1/


This runs 100,000 requests with a concurrency level of 1,000.

The result looks like this:

Concurrency Level:      1000
Time taken for tests:   12.055 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      12800000 bytes
HTML transferred:       1100000 bytes
Requests per second:    8295.15 [#/sec] (mean)
Time per request:       120.552 [ms] (mean)
Time per request:       0.121 [ms] (mean, across all concurrent requests)
Transfer rate:          1036.89 [Kbytes/sec] received


8295 requests per second, which seems reasonable.

But then I run the same test with wrk, using this command:

$ wrk -t1 -c1000 -d5s http://127.0.0.1:80/


And I am getting the following results:

Running 5s test @ http://127.0.0.1:80/
  1 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.92ms   13.38ms 234.65ms   94.89%
    Req/Sec    27.03k     1.43k   29.73k    63.27%
  136475 requests in 5.10s, 16.66MB read
Requests/sec:  26767.50
Transfer/sec:      3.27MB


26,767 requests per second? I don't understand why there is such a huge difference.

The code under test was the simplest possible Go server:

package main

import (
    "log"
    "net/http"
)

func main() {
    // Respond to every request with a fixed 11-byte body.
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("Hello World"))
    })

    // Serve on port 80; log and exit if the server cannot start.
    log.Fatal(http.ListenAndServe(":80", nil))
}


My goal is to see how many requests the Go server can handle as I increase the number of cores, but the difference between the tools is already this large before I add any CPU power. Does anyone know how a Go server scales when more cores are added? And why is there such a huge difference between ab and wrk?
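For reference, a minimal sketch of one way the core count could be capped for these runs: runtime.GOMAXPROCS limits how many OS threads execute Go code simultaneously. The value of 1 below is just an example baseline, not something from the runs above.

package main

import (
    "log"
    "net/http"
    "runtime"
)

func main() {
    // Cap the Go scheduler at one core for a baseline run; raise this
    // value between runs to measure how throughput scales with cores.
    runtime.GOMAXPROCS(1)

    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("Hello World"))
    })

    log.Fatal(http.ListenAndServe(":80", nil))
}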


1 answer


First: micro-benchmarks like this are fairly artificial. Sending back a few bytes will produce very different results from what you will see once you start adding database calls, template rendering, session parsing, etc. (expect an order-of-magnitude difference).

Then there are local concerns: open file/socket limits on your dev machine versus production, contention between your benchmarking tool (ab/wrk) and your Go server for those resources, the local loopback adapter versus a real network interface, OS TCP stack tuning, and so on. The list goes on!
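As a concrete example of that first point: with -c 1000, both the benchmark tool and the server each need well over the common default of 1024 open file descriptors. A quick sketch for checking the limit from inside the Go process, assuming a Unix-like dev machine (the syscall package is used here for brevity; golang.org/x/sys/unix is the recommended package nowadays):

package main

import (
    "fmt"
    "log"
    "syscall"
)

func main() {
    // RLIMIT_NOFILE is the per-process cap on open file descriptors,
    // which includes every socket the benchmark keeps open.
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("open file limit: soft=%d hard=%d\n", lim.Cur, lim.Max)
}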

Besides that:

  • ab is not particularly well regarded.
  • It is HTTP/1.0 only and therefore has no keep-alives.
  • Your other metrics vary a lot as well; for example, look at the average latency reported by each tool: ab shows a much higher latency.
  • Your ab test also ran for 12s, not the 5s of your wrk test.
  • Even 8k req/s is a huge amount of load: about 28 million requests per hour. Even if, after adding a DB call, marshalling a JSON struct, and so on, that dropped to 3k req/s, you could still handle a significant amount of traffic. Don't read too much into these benchmarks this early (see the sketch below).
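A rough sketch of that last point, purely illustrative: the fixed 5ms sleep below is a stand-in for a database call or template render, not anything from the original setup, but re-running the same wrk command against it shows how quickly the "Hello World" numbers stop applying once the handler does real work.

package main

import (
    "log"
    "net/http"
    "time"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        // Stand-in for real per-request work (DB query, JSON
        // marshalling, template render). The 5ms figure is an
        // arbitrary assumption for illustration.
        time.Sleep(5 * time.Millisecond)
        w.Write([]byte("Hello World"))
    })

    log.Fatal(http.ListenAndServe(":80", nil))
}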

I have no idea what machine you are running on, but my iMac with a 3.5GHz i7-4771 can push about 64k req/s per thread when responding with w.Write([]byte("Hello World\n")).

Short answer: use wrk, and keep in mind that benchmarking tools have a lot of variance.
