Stress test made easy with Open Source tools

Ahmad Al-Sajid
9 min read · Aug 15, 2024


Stress testing is an integral part of software development because it helps ensure that applications can handle extreme conditions and unexpected surges in load. By pushing software beyond its normal operational capacity, stress tests identify potential bottlenecks, vulnerabilities, and weaknesses that could lead to system failures under real-world pressure. This proactive approach not only enhances the stability and performance of the software but also builds confidence in its ability to deliver consistent user experiences, even in the most demanding scenarios.

Today, we will demonstrate three open-source tools that can help us get the job done. Let's have a look at them.

HEY

Hey is a tiny program that sends some load to a web application. It was originally written in Go, but it seems it is no longer actively maintained, so building and using it from the original git repository might be a bit outdated. Don't worry, we don't need to focus on installing Go, or updating/upgrading it, to get our work done. We are focusing on stress testing, not taking on the stress of fixing the tool. As a result, we have ahmadalsajid/hey-docker available on Docker Hub, an Alpine-based Docker image with the exact features of the original hey, built with the latest version of Go. It is built for both the linux/amd64 and linux/arm64 architectures, and the image size is still under 20MB. If you already have Docker installed on your machine, you are just a command away from using hey. The basic usage would be:

docker run --rm ahmadalsajid/hey-docker -n 200 -c 50 https://www.apache.org/

And voilà! You will get the result:


Summary:
Total: 31.8439 secs
Slowest: 20.0013 secs
Fastest: 0.1656 secs
Average: 5.3480 secs
Requests/sec: 6.2806


Response time histogram:
0.166 [1] |■
2.149 [64] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
4.133 [33] |■■■■■■■■■■■■■■■■■■■■■
6.116 [39] |■■■■■■■■■■■■■■■■■■■■■■■■
8.100 [14] |■■■■■■■■■
10.083 [13] |■■■■■■■■
12.067 [18] |■■■■■■■■■■■
14.051 [4] |■■■
16.034 [7] |■■■■
18.018 [1] |■
20.001 [6] |■■■■


Latency distribution:
10% in 0.6372 secs
25% in 1.4959 secs
50% in 4.1704 secs
75% in 7.7139 secs
90% in 11.2524 secs
95% in 14.5390 secs
99% in 20.0011 secs

Details (average, fastest, slowest):
DNS+dialup: 0.5441 secs, 0.0000 secs, 9.5668 secs
DNS-lookup: 0.5648 secs, 0.0000 secs, 2.2529 secs
req write: 0.0001 secs, 0.0000 secs, 0.0010 secs
resp wait: 0.8579 secs, 0.0642 secs, 6.8206 secs
resp read: 2.1870 secs, 0.0005 secs, 14.9386 secs

Status code distribution:
[200] 200 responses

The available options are listed below (a combined usage example follows the list):

Options:
-n Number of requests to run. Default is 200.
-c Number of workers to run concurrently. Total number of requests cannot
be smaller than the concurrency level. Default is 50.
-q Rate limit, in queries per second (QPS) per worker. Default is no rate limit.
-z Duration of application to send requests. When duration is reached,
application stops and exits. If duration is specified, n is ignored.
Examples: -z 10s -z 3m.
-o Output type. If none provided, a summary is printed.
"csv" is the only supported alternative. Dumps the response
metrics in comma-separated values format.

-m HTTP method, one of GET, POST, PUT, DELETE, HEAD, OPTIONS.
-H Custom HTTP header. You can specify as many as needed by repeating the flag.
For example, -H "Accept: text/html" -H "Content-Type: application/xml" .
-t Timeout for each request in seconds. Default is 20, use 0 for infinite.
-A HTTP Accept header.
-d HTTP request body.
-D HTTP request body from file. For example, /home/user/file.txt or ./file.txt.
-T Content-type, defaults to "text/html".
-a Basic authentication, username:password.
-x HTTP Proxy address as host:port.
-h2 Enable HTTP/2.

-host HTTP Host header.

-disable-compression Disable compression.
-disable-keepalive Disable keep-alive, prevents re-use of TCP
connections between different HTTP requests.
-disable-redirects Disable following of HTTP redirects
-cpus Number of used cpu cores.
(default for current machine is 8 cores)
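
As a quick sketch, assuming a hypothetical https://api.example.com/users endpoint and bearer token (replace them with your own service under test), the flags above can be combined to POST a JSON body for a fixed duration instead of a fixed number of requests:

# -z 30s  -> run for 30 seconds (-n is ignored when -z is set)
# -c 20   -> 20 concurrent workers
# -q 10   -> at most 10 requests per second per worker
# -m/-H/-T/-d -> method, header, content type and request body
docker run --rm ahmadalsajid/hey-docker \
  -z 30s -c 20 -q 10 \
  -m POST \
  -H "Authorization: Bearer <token>" \
  -T "application/json" \
  -d '{"name": "test"}' \
  https://api.example.com/users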

OHA

As we said earlier, hey is not being actively maintained. oha is inspired by hey but written in Rust, which raises a similar problem: if you are unfamiliar with Rust, building it yourself can make your life miserable again. We are focusing on stress testing, not stressing ourselves. That is why we have another multi-arch, lightweight Docker image of oha available at ahmadalsajid/oha-docker. You only need to pull the image and run it as a container.

docker pull ahmadalsajid/oha-docker
docker run --rm -it ahmadalsajid/oha-docker -n 200 -c 50 https://www.apache.org/

And you will see a beautiful terminal UI, something like this:

Image taken from https://github.com/hatoo/oha/blob/master/demo.gif

Once done, the result is displayed in the console:

Summary:
Success rate: 100.00%
Total: 7.2522 secs
Slowest: 7.2513 secs
Fastest: 0.1388 secs
Average: 1.2849 secs
Requests/sec: 27.5779

Total data: 3.03 MiB
Size/request: 15.49 KiB
Size/sec: 427.16 KiB

Response time histogram:
0.139 [1] |
0.850 [92] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.561 [58] |■■■■■■■■■■■■■■■■■■■■
2.273 [17] |■■■■■
2.984 [17] |■■■■■
3.695 [6] |■■
4.406 [4] |■
5.118 [1] |
5.829 [0] |
6.540 [1] |
7.251 [3] |■

Response time distribution:
10.00% in 0.3168 secs
25.00% in 0.5857 secs
50.00% in 0.9099 secs
75.00% in 1.5029 secs
90.00% in 2.6671 secs
95.00% in 3.5923 secs
99.00% in 7.0764 secs
99.90% in 7.2513 secs
99.99% in 7.2513 secs


Details (average, fastest, slowest):
DNS+dialup: 0.8507 secs, 0.1761 secs, 2.6988 secs
DNS-lookup: 0.0003 secs, 0.0000 secs, 0.0051 secs

Status code distribution:
[200] 200 responses

The input options are pretty much similar to hey's, with a slight difference in -q: the option works differently from hey and sets the overall queries per second instead of a per-worker rate. The other options are listed below, followed by a combined usage example:

Options:
-n <N_REQUESTS> Number of requests to run. [default: 200]
-c <N_CONNECTIONS> Number of connections to run concurrently. You may should increase limit to number of open files for larger `-c`. [default: 50]
-p <N_HTTP2_PARALLEL> Number of parallel requests to send on HTTP/2. `oha` will run c * p concurrent workers in total. [default: 1]
-z <DURATION> Duration of application to send requests. If duration is specified, n is ignored.
When the duration is reached, ongoing requests are aborted and counted as "aborted due to deadline"
Examples: -z 10s -z 3m.
-q <QUERY_PER_SECOND> Rate limit for all, in queries per second (QPS)
--burst-delay <BURST_DURATION> Introduce delay between a predefined number of requests.
Note: If qps is specified, burst will be ignored
--burst-rate <BURST_REQUESTS> Rates of requests for burst. Default is 1
Note: If qps is specified, burst will be ignored
--rand-regex-url Generate URL by rand_regex crate but dot is disabled for each query e.g. http://127.0.0.1/[a-z][a-z][0-9]. Currently dynamic scheme, host and port with keep-alive are not works well. See https://docs.rs/rand_regex/latest/rand_regex/struct.Regex.html for details of syntax.
--max-repeat <MAX_REPEAT> A parameter for the '--rand-regex-url'. The max_repeat parameter gives the maximum extra repeat counts the x*, x+ and x{n,} operators will become. [default: 4]
--latency-correction Correct latency to avoid coordinated omission problem. It's ignored if -q is not set.
--no-tui No realtime tui
-j, --json Print results as JSON
--fps <FPS> Frame per second for tui. [default: 16]
-m, --method <METHOD> HTTP method [default: GET]
-H <HEADERS> Custom HTTP header. Examples: -H "foo: bar"
-t <TIMEOUT> Timeout for each request. Default to infinite.
-A <ACCEPT_HEADER> HTTP Accept Header.
-d <BODY_STRING> HTTP request body.
-D <BODY_PATH> HTTP request body from file.
-T <CONTENT_TYPE> Content-Type.
-a <BASIC_AUTH> Basic authentication, username:password
--http-version <HTTP_VERSION> HTTP version. Available values 0.9, 1.0, 1.1.
--http2 Use HTTP/2. Shorthand for --http-version=2
--host <HOST> HTTP Host header
--disable-compression Disable compression.
-r, --redirect <REDIRECT> Limit for number of Redirect. Set 0 for no redirection. Redirection isn't supported for HTTP/2. [default: 10]
--disable-keepalive Disable keep-alive, prevents re-use of TCP connections between different HTTP requests. This isn't supported for HTTP/2.
--no-pre-lookup *Not* perform a DNS lookup at beginning to cache it
--ipv6 Lookup only ipv6.
--ipv4 Lookup only ipv4.
--insecure Accept invalid certs.
--connect-to <CONNECT_TO> Override DNS resolution and default port numbers with strings like 'example.org:443:localhost:8443'
--disable-color Disable the color scheme.
--unix-socket <UNIX_SOCKET> Connect to a unix socket instead of the domain in the URL. Only for non-HTTPS URLs.
--vsock-addr <VSOCK_ADDR> Connect to a VSOCK socket using 'cid:port' instead of the domain in the URL. Only for non-HTTPS URLs.
--stats-success-breakdown Include a response status code successful or not successful breakdown for the time histogram and distribution statistics
-h, --help Print help
-V, --version Print version
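
As another sketch, reusing the apache.org URL from the earlier run, the options can be combined into a rate-limited, fixed-duration test that skips the TUI and prints the summary as JSON, which is handy in scripts or CI pipelines:

# -z 30s   -> run for 30 seconds instead of a fixed request count
# -q 20    -> cap the whole run at roughly 20 requests per second (overall, not per worker)
# --no-tui -> skip the realtime terminal UI
# --json   -> print the results as JSON for easy parsing
docker run --rm ahmadalsajid/oha-docker \
  -z 30s -c 50 -q 20 --no-tui --json \
  https://www.apache.org/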

Apache Benchmark (ab)

Finally, we will use the last one on our list, Apache Benchmark (ab). It is also a CLI tool. Let's install it first.

On Ubuntu

sudo apt install apache2-utils -y

On Mac

As of macOS Big Sur and later, ab is installed by default.
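
You can verify that it is available by printing the version (the -V flag, listed with the other options further below):

ab -V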

On Windows

Download the Apache binaries from here, unzip them, and then run ab from where you unzipped/installed it. Example:

~\Apache24\bin>ab -n 100 -c 10 xxx

The easiest way to test is as follows

ab -n 200 -c 50 -r https://www.apache.org/

And we can see the result as follows:

This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.apache.org (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests


Server Software: Apache
Server Hostname: www.apache.org
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-CHACHA20-POLY1305,2048,256
Server Temp Key: X25519 253 bits
TLS Server Name: www.apache.org

Document Path: /
Document Length: 64380 bytes

Concurrency Level: 50
Time taken for tests: 27.800 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 13086332 bytes
HTML transferred: 12876000 bytes
Requests per second: 7.19 [#/sec] (mean)
Time per request: 6950.095 [ms] (mean)
Time per request: 139.002 [ms] (mean, across all concurrent requests)
Transfer rate: 459.69 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 229 1340 1402.3 888 8962
Processing: 197 3003 2978.5 2142 22173
Waiting: 67 247 308.1 101 1883
Total: 437 4343 3605.0 3539 24386

Percentage of the requests served within a certain time (ms)
50% 3539
66% 5194
75% 5940
80% 6362
90% 7851
95% 9867
98% 18300
99% 21180
100% 24386 (longest request)

If you want to explore the options of ab, just do

ab -help

You will get the usage and options (a combined usage example follows the list):

Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make at a time
-t timelimit Seconds to max. to spend on benchmarking
This implies -n 50000
-s timeout Seconds to max. wait for each response
Default is 30 seconds
-b windowsize Size of TCP send/receive buffer, in bytes
-B address Address to bind to when making outgoing connections
-p postfile File containing data to POST. Remember also to set -T
-u putfile File containing data to PUT. Remember also to set -T
-T content-type Content-type header to use for POST/PUT data, eg.
'application/x-www-form-urlencoded'
Default is 'text/plain'
-v verbosity How much troubleshooting info to print
-w Print out results in HTML tables
-i Use HEAD instead of GET
-x attributes String to insert as table attributes
-y attributes String to insert as tr attributes
-z attributes String to insert as td or th attributes
-C attribute Add cookie, eg. 'Apache=1234'. (repeatable)
-H attribute Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
Inserted after all normal header lines. (repeatable)
-A attribute Add Basic WWW Authentication, the attributes
are a colon separated username and password.
-P attribute Add Basic Proxy Authentication, the attributes
are a colon separated username and password.
-X proxy:port Proxyserver and port number to use
-V Print version number and exit
-k Use HTTP KeepAlive feature
-d Do not show percentiles served table.
-S Do not show confidence estimators and warnings.
-q Do not show progress when doing more than 150 requests
-l Accept variable document length (use this for dynamic pages)
-g filename Output collected data to gnuplot format file.
-e filename Output CSV file with percentages served
-r Don't exit on socket receive errors.
-m method Method name
-h Display usage information (this message)
-I Disable TLS Server Name Indication (SNI) extension
-Z ciphersuite Specify SSL/TLS cipher suite (See openssl ciphers)
-f protocol Specify SSL/TLS protocol
(SSL2, TLS1, TLS1.1, TLS1.2 or ALL)
-E certfile Specify optional client certificate chain and private key
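
As a final sketch, assuming a hypothetical https://api.example.com/users endpoint and a local payload.json file, several of these flags can be combined to POST a JSON body over a kept-alive connection with a custom header:

# -n 500 -c 25 -> 500 requests, 25 at a time
# -k           -> reuse TCP connections (HTTP keep-alive)
# -p / -T      -> POST the contents of payload.json as application/json
# -H           -> extra request header
ab -n 500 -c 25 -k \
   -p payload.json -T 'application/json' \
   -H 'Authorization: Bearer <token>' \
   https://api.example.com/users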

Conclusion

While the tools discussed above represent just a fraction of the vast array of open-source options available for stress testing, they stand out for their ease of use and effectiveness. Exploring the full spectrum of tools can be valuable, but these three offer a solid foundation for any development team looking to improve their software's resilience. Their accessibility and robust features make them an excellent starting point for integrating stress testing into your development process, ensuring your applications are prepared to meet the demands of real-world use.

Written by Ahmad Al-Sajid

Software Engineer, DevOps, Foodie, Biker
