
@cespare
Created September 27, 2012 11:39
A Simple Webserver Comparison

This is a very simple benchmark comparing the response times of a few different webservers on a trivial workload: replying with a snippet of static JSON. It came up in discussion of a real-life service: in the actual server, a long-running thread/process periodically updates its state from a database, but requests are served with the data directly from memory. It is imperative, though, that latencies for this service be extremely low.
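
To make that pattern concrete, here's a rough sketch in Go (one of the contenders below) of the kind of service being discussed, not the actual benchmarked code: a background goroutine periodically rebuilds an in-memory JSON blob (the database call is stubbed out), and the HTTP handler serves it straight from memory. The names, refresh interval, and port are illustrative.

```go
package main

import (
	"log"
	"net/http"
	"sync"
	"time"
)

var (
	mu     sync.RWMutex
	cached []byte // most recently rendered JSON response
)

// loadFromDatabase stands in for the periodic database query;
// here it just returns a static snippet.
func loadFromDatabase() []byte {
	return []byte(`{"status":"ok","items":[1,2,3]}`)
}

// refresh swaps the cached response for a freshly loaded one.
func refresh() {
	data := loadFromDatabase()
	mu.Lock()
	cached = data
	mu.Unlock()
}

func main() {
	refresh()

	// Long-running goroutine: periodically update the in-memory state.
	go func() {
		for {
			time.Sleep(10 * time.Second)
			refresh()
		}
	}()

	// Requests never touch the database; they just read the cached bytes.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		mu.RLock()
		data := cached
		mu.RUnlock()
		w.Header().Set("Content-Type", "application/json")
		w.Write(data)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Reads take only a read lock on the cached slice, so the per-request cost is essentially writing a few bytes that are already in memory, which is why latencies should stay very low.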

This comparison was partly inspired by this blog post.

Method

The code for the various servers may be found here. To conduct each test, I ran the server on a Linux desktop machine and then ran ab (ApacheBench) against it from another machine connected via gigabit ethernet on the local network.

Server specs:

  • Ubuntu 12.04
  • Intel Core i5-2500K CPU (3.30GHz, 4 cores)
  • 8GB RAM

For each test, I made 20,000 requests with 1, 10, 100, and 1000 concurrent connections. I did 3 warm-up runs before collecting data. Below are the mean, median, 90th percentile, 99th percentile, and max latencies, all in milliseconds.
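
The exact ab invocations aren't recorded in this writeup, but a command along these lines matches the setup above, varying -c over 1, 10, 100, and 1000 (host and port are placeholders):

```
ab -n 20000 -c 100 http://server-host:8080/
```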

Contenders

I looked at both using Sinatra and writing Rack applications directly. For each of these options, I tested with Racer, Thin, and Unicorn. (This comparison isn't quite fair because I ran Unicorn with 8 workers, whereas Racer and Thin only have 1 worker. But looking at the 1-request-at-a-time column is still useful for seeing the low-end latencies.) Also in Ruby-land, I tested Goliath.

I was also interested in trying some JRuby servers, so I ran the plain Rack app under Trinidad and mizuno as well.

Additionally, I made a Scala app (using Scalatra) and a Go app (using the net/http standard library package).

Software versions:

  • Ruby 1.9.3-p194
  • JRuby 1.7.0-rc1
  • Scala 2.9.2
  • OpenJDK 1.7.0
  • Go 1.0.3

Numbers!

The c value is the number of concurrent requests. 90% and 99% are the 90th and 99th percentiles. All values are in milliseconds.

c = 1

Server             mean   median   90%   99%   max
Sinatra + Racer       1        2      3     4     5

Implementation impressions

  • At this time, Racer seems much more like a proof of concept than a serious production-ready webserver.
  • Rack is great, because it's super easy to drop in various webservers to run your app.
  • JRuby is really easy to use, and plays well with rbenv and bundler. Deployment for JRuby apps may get complex, what with XML files and Tomcat configuration and who knows what. Projects like Warbler that turn your whole project into a war file may help a lot, though.
  • JRuby startup time is really annoying.
  • Scala, as usual, is a massive pain to set up and get running. The "minimal" example project for Scalatra required three different tools to set up and configure, and the sbt configuration makes the whole thing a real mess that's very newcomer-unfriendly. There are 13 files in this project, compared with around 2-4 for each other implementation.
  • I wanted to try Lift as well, but the setup was too daunting. sbt is awful.
  • Go is really great for this kind of thing. The server is dead simple, configuration is nonexistent, and the app is built to a single binary ready to be deployed to a server.

Conclusions

asdf
