Previously, Science and wayneseguin published a study looking at the performance of the nginx fair proxy. To take that a little further, Science examined how Thin and Mongrel compare head-to-head on performance. For kicks, we also took a look at Rails' page template caching facility to see whether it significantly impacts performance (it does). Full details follow.
For an idea of what the hardware testing setup looked like, read the previous study (cited above). For these tests, we used 3 instances of Thin or Mongrel (with lots of free RAM). The nginx fair proxy was turned on, and Rails caching was fully enabled (including template caching). Each test ran for 300 seconds. We issued HEAD requests only, to minimize over-the-wire throughput variance (since the tests were initiated remotely from the data center). The Thin instances were wired up to Unix sockets, while the Mongrels communicated over IP.
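For readers unfamiliar with the fair proxy setup, the wiring described above looks roughly like the following nginx config. This is an illustrative sketch, not the actual test configuration; the socket paths, ports, and instance counts are assumptions:

```nginx
# Thin setup: nginx fair proxy balancing across 3 Unix sockets
upstream thin_cluster {
  fair;                           # provided by the upstream fair module
  server unix:/tmp/thin.0.sock;
  server unix:/tmp/thin.1.sock;
  server unix:/tmp/thin.2.sock;
}

# Mongrel setup: same idea, but over TCP/IP
upstream mongrel_cluster {
  fair;
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

server {
  listen 80;
  location / {
    proxy_pass http://thin_cluster;   # or http://mongrel_cluster
  }
}
```

The only structural difference between the two setups is the `server` lines in the upstream block: filesystem sockets for Thin, loopback ports for Mongrel.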
Overall, I’d say that under these testing conditions Thin is 4% faster than Mongrel. That’s not much, and it’s within the standard deviation of each test result, but it was pretty consistent throughout the testing so I’m inclined to believe it. Your results may vary. 
Here is the summary of results:
| Server type | Avg response (s) | Total pages (#) | 90% max response (s) |
| --- | --- | --- | --- |
| Thins, 10 threads | 1.72 | 1734 | 3.92 |
| Mongrels, 10 threads | 1.78 | 1677 | 3.31 |
| Thins, 30 threads | 5.08 | 1738 | 10.14 |
| Mongrels, 30 threads | 5.20 | 1709 | 10.86 |
| Thins, 40 threads | 6.69 | 1753 | 10.97 |
| Mongrels, 40 threads | 6.98 | 1685 | 13.39 |
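Since each run lasted 300 seconds, the "Total pages" column also implies throughput in requests per second. A quick sketch (numbers copied from the table above):

```ruby
# Convert total pages served in a 300-second run into requests/sec,
# using the totals from the results table above.
DURATION = 300.0 # seconds per test run

results = {
  "Thins, 10 threads"    => 1734,
  "Mongrels, 10 threads" => 1677,
  "Thins, 30 threads"    => 1738,
  "Mongrels, 30 threads" => 1709,
  "Thins, 40 threads"    => 1753,
  "Mongrels, 40 threads" => 1685,
}

results.each do |server, pages|
  puts format("%-22s %.2f req/s", server, pages / DURATION)
end

# Relative page-count advantage of Thin over Mongrel at each concurrency level
thin    = [1734, 1738, 1753]
mongrel = [1677, 1709, 1685]
thin.zip(mongrel).each_with_index do |(t, m), i|
  pct = (t - m) / m.to_f * 100
  puts format("%d-thread run: Thin served %.1f%% more pages", [10, 30, 40][i], pct)
end
```

For example, 1734 pages over 300 seconds works out to about 5.78 req/s, and at 40 threads Thin served roughly 4% more pages than Mongrel, which lines up with the overall impression stated above.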
The full test results (including standard deviations) can be found here. We hope these measurements are useful; post a comment here if you'd like more information or background.
- While I thought that much of the performance improvement could be attributed to the Unix sockets themselves, many knowledgeable folks, including Zed (author of Mongrel) and Marc (author of Thin), assert that the performance difference between IP and Unix sockets is marginal these days. Both Zed and Marc have indicated that any performance differences are probably due to code and architecture differences in the app servers themselves.
- You may also find this study of performance of various ruby app servers interesting: http://wiki.codemongers.com/Main
- Science also has a writeup on optimizing Nginx and Rails page caching which may be of interest to readers.
- Zed Shaw requested a methodology review. The following bullets outline how I conducted all the tests.
- I used Jakarta JMeter 2.3.1 to run the tests.
- I issued HEAD requests only, to minimize measurement errors that might be caused by over-the-wire variations in bandwidth.
- Rails code path:
- I ran against two fairly distinct code paths and had several hundred URL variations that hit all over the database within those two code paths (One code path searched for a set of records within a US state, the other searched for a specific record and displayed details about it).
- The Rails code used was of production scale and quality. By this I mean it is code that powers a full-fledged website and does a lot of work. It may not be the smartest or fastest code, but it provides a lot of user functionality that is probably typical for "read-mostly" Rails sites.
- All activity was read-only – no insert/updates were performed. Simulated traffic was typical for this website.
- I had a warmup period before each test, to make sure that all core code was cached before running the actual test. Warmup was generally around 100 seconds, though I was not precise about this.
- All tests ran for 300 seconds. Each test had a ramp-up period (going from 0 threads running to all threads running) equal in seconds to half the number of threads, so a 10-thread test had a 5-second ramp-up and a 30-thread test had a 15-second ramp-up.
- Testing was run from a single dev workstation on a 6 Mbps ADSL line (effective uplink is usually around 600 kbps).