I made use of an ssdnodes 4 CPU KVM VPS running Ubuntu 18.04 LTS with LXD containers to compare several CentOS and Ubuntu based LEMP stacks: Centmin Mod, EasyEngine, Webinoly, VestaCP and OneInStack. The benchmarks compare Nginx non-HTTPS and HTTP/2 HTTPS performance.
You can read the full benchmark comparison results below:
- Nginx HTTP/2 HTTPS static HTML benchmarks
- Nginx non-HTTPS static HTML benchmarks
- PHP (php-fpm) HTTP/2 HTTPS benchmarks
Preview of some of the benchmark comparison results
For Nginx HTTP/2 HTTPS static HTML
Combining the results of the last 2 runs of h2load -t1 vs h2load -t2 for the 1,000 concurrent user tests. You can see which Nginx versions are better optimised for multi-threaded Nginx workloads by comparing their respective h2load -t1 vs -t2 results and seeing which -t2 results continue to scale in terms of performance.
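The -t1 vs -t2 comparison corresponds to h2load invocations along these lines. This is only a sketch: the URL, request count and stream count are assumptions, not the exact test parameters used (-t threads, -c clients, -n requests and -m max concurrent streams are standard nghttp2 h2load options).

```shell
# Hypothetical h2load HTTP/2 HTTPS runs: 1,000 concurrent clients,
# single thread vs dual thread. https://example.com/ stands in for
# the actual static HTML test URL.
h2load -t1 -c1000 -n10000 -m100 https://example.com/
h2load -t2 -c1000 -n10000 -m100 https://example.com/
```

Comparing the finished-in times and requests/s between the two runs shows how well a given Nginx build uses the second thread.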
The LEMP stack installers were installed in LXD containers on an ssdnodes 4 CPU, 16GB RAM, 80GB disk KVM VPS running Ubuntu 18.04 LTS, and below is their respective performance scaling from -t1 to -t2 (1 thread to 2 threads):
- Centmin Mod 123.09beta01 beta Nginx 1.15.0 on CentOS 7.5 64bit (default gzip compression = 5) = 39.1% increase in average requests/s and 29.39% increase in min requests/s and 7.03% increase in max requests/s
- Easyengine 3.8.1 using Nginx 1.14.0 on Ubuntu 16.04 LTS (default gzip compression = 6) = 19.5% increase in average requests/s and 22.2% increase in min requests/s and 28.7% increase in max requests/s
- OneInStack Nginx 1.14.0 on Ubuntu 16.04 LTS (default gzip compression = 6) = 45.55% increase in average requests/s and 61.19% increase in min requests/s and 3.5% increase in max requests/s
- OneInStack OpenResty Nginx 1.13.6 on Ubuntu 16.04 LTS (default gzip compression = 6) = 48.8% increase in average requests/s and 92.55% increase in min requests/s and 4.25% decrease in max requests/s
- VestaCP 0.9.8-21 using Nginx 1.15.0 on Ubuntu 16.04 LTS (default gzip compression = 9) = 10.4% increase in average requests/s and 14.96% increase in min requests/s and 12.13% decrease in max requests/s
- Webinoly 1.4.3 using Nginx 1.14.0 on Ubuntu 18.04 LTS (default gzip compression = 6) = 19.6% increase in average requests/s and 6.36% increase in min requests/s and 24.2% increase in max requests/s
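The percentage scaling figures above are simple relative differences between the -t1 and -t2 requests/s numbers. A minimal sketch of the calculation, using hypothetical values rather than the actual raw numbers:

```shell
# Percent increase from 1-thread to 2-thread requests/s:
# (t2 - t1) / t1 * 100. The 1000 and 1391 values are hypothetical.
t1=1000
t2=1391
awk -v a="$t1" -v b="$t2" 'BEGIN { printf "%.2f%% increase\n", (b - a) / a * 100 }'
# prints: 39.10% increase
```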
Observations
- For average requests/s, Centmin Mod Nginx’s 1 thread results are actually faster than EasyEngine’s (+10.33%), VestaCP’s (+15.6%) and Webinoly’s (+53.11%) 2 thread results, and within 80-87% of OneInStack’s 2 thread results!
- For minimum requests/s, Centmin Mod Nginx’s 2 thread results are actually faster than the average requests/s for EasyEngine (+0.37%), VestaCP (+5.15%) and Webinoly (+39.28%)!
For Nginx non-HTTPS static HTML
Nginx static HTML benchmarks were done using my forked version of wrk, wrk-cmm. Each test configuration was run 2 times. Raw numbers are further below, while the summary chart is directly below:
wrk-cmm load tests were done at 4 user concurrency levels – 10 users, 100 users, 500 users and 1,000 users – for a 10 second duration using the following test parameters:
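These runs correspond to wrk-style invocations like the sketch below; the URL and thread count are assumptions, since wrk-cmm shares wrk’s basic flags (-t threads, -c connections, -d duration):

```shell
# Hypothetical wrk/wrk-cmm runs at the four tested concurrency levels.
# http://example.com/ stands in for the actual static HTML test URL.
for c in 10 100 500 1000; do
  wrk -t4 -c"$c" -d10s http://example.com/
done
```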
- at 10 user concurrency, Centmin Mod Nginx is 16.7% faster than Easyengine Nginx and 8.76% faster than Webinoly Nginx and 18.39% faster than VestaCP Nginx and Webinoly is 7.3% faster than Easyengine Nginx and 8.86% faster than VestaCP Nginx
- at 100 user concurrency, Centmin Mod Nginx is 55.77% faster than Easyengine Nginx and 32.04% faster than Webinoly Nginx and 56.9% faster than VestaCP Nginx and Webinoly is 17.8% faster than Easyengine Nginx and 18.86% faster than VestaCP Nginx
- at 500 user concurrency, Centmin Mod Nginx is 39.73% faster than Easyengine Nginx and 33.45% faster than Webinoly Nginx and 41.3% faster than VestaCP Nginx and Webinoly is 4.7% faster than Easyengine Nginx and 5.9% faster than VestaCP Nginx
- at 1000 user concurrency, Centmin Mod Nginx is 43.70% faster than Easyengine Nginx and 33.08% faster than Webinoly Nginx and 39.06% faster than VestaCP Nginx and Webinoly is 7.97% faster than Easyengine Nginx and 4.49% faster than VestaCP Nginx
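For reference, "X% faster" throughout these results means the relative difference in requests/s, measured against the slower stack. Note that the relation is not symmetric: "43.70% faster" does not mean the other stack is 43.70% slower. A minimal sketch with hypothetical requests/s values:

```shell
# "A is X% faster than B" = (a - b) / b * 100.
# The 1437 and 1000 requests/s values are hypothetical.
a=1437
b=1000
awk -v a="$a" -v b="$b" 'BEGIN {
  printf "%.2f%% faster\n", (a - b) / b * 100   # A relative to B
  printf "%.2f%% slower\n", (a - b) / a * 100   # B relative to A
}'
# prints: 43.70% faster
#         30.41% slower
```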
For PHP (php-fpm) HTTP/2 HTTPS benchmarks
Next up is doing h2load HTTP/2 HTTPS PHP-FPM tests against a hello.php file at a much higher concurrency workload of 500 users and 5,000 requests. As previously mentioned, PHP-FPM Unix sockets (the default config for OneInStack LEMP stacks) can be faster, but only up to a certain point: beyond it they hit a concurrent workload limit and requests start to fail. On the other hand, PHP-FPM TCP listeners are slower but scale much better in handling highly concurrent workloads. This can be clearly seen in the test results below.
- OneInStack LEMP stacks default to PHP-FPM Unix sockets, unlike the other LEMP stacks tested, which default to TCP listeners. So at 500 user concurrency, OneInStack’s PHP-FPM configs start to fail under the h2load load testing tool. Between 35-38% of all requests failed, which in turn inflates and skews the requests/s and TTFB 99th percentile latency values. Requests per second and latency are based on the time to complete a request, so failed requests resulted in h2load reporting higher requests/s and lower TTFB 99th percentile latency values. You do not want to be using PHP-FPM Unix sockets under high concurrent user loads when almost 2 in 5 requests fail!
- h2load requests/s numbers alone won’t show the complete picture until you factor in request latency. In this case I added to the chart the 99th percentile value for Time To First Byte (TTFB), meaning 99% of requests had a latency at or below that value. Here Webinoly had decent requests/s but much higher TTFB due to one of the 9x test runs stalling, which dropped its minimum requests/s to just 265.33. EasyEngine also had one of its 9x test runs stall, which dropped requests/s to 240.3.
- Only Centmin Mod no-pgo/pgo and VestaCP managed to complete 100% of the requests, but VestaCP’s TTFB 99th percentile value was double that of Centmin Mod’s PHP-FPM performance.
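The Unix socket vs TCP listener difference above comes down to the PHP-FPM pool’s `listen` directive (with a matching `fastcgi_pass` on the Nginx side). A sketch of the two setups – the file paths and addresses here are typical defaults and assumptions, not the exact configs of the stacks tested:

```ini
; PHP-FPM pool config (e.g. www.conf; path varies per distro/stack)

; Unix socket: lower per-request overhead, but can hit connection/backlog
; limits under very high concurrency, causing failed requests:
listen = /run/php/php-fpm.sock

; TCP listener: slightly slower per request, but scales more predictably
; under high concurrent user loads:
; listen = 127.0.0.1:9000
```

On the Nginx side, `fastcgi_pass unix:/run/php/php-fpm.sock;` or `fastcgi_pass 127.0.0.1:9000;` must match whichever listener the pool uses.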