@tswicegood
Forked from toastdriven/gevent_test.py
Created August 6, 2010 16:05
gevent_test.py

import datetime

import gevent
from gevent import wsgi, pywsgi

# Configuration bits.
PORT = 8081
LOG_PATH = '/Users/daniellindsley/Desktop/LoggingFramework/benchmarks/receiver/logs/gevent_test.log'

# ============

def write_to_log(env, status_code=200):
    log = open(LOG_PATH, 'a')
    now = datetime.datetime.now()
    message = "%s - [%s] %s\n" % (now, status_code, env['PATH_INFO'])
    log.write(message)
    log.close()

def error_404(env, start_response):
    gevent.spawn(write_to_log, env, 404)
    start_response('404 Not Found', [('Content-Type', 'text/html')])
    return ['Sorry, you must be thinking of something else.']

def process_request(env, start_response):
    gevent.spawn(write_to_log, env, 200)
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ['OK']

def handle_request(env, start_response):
    if env['PATH_INFO'] == '/':
        return process_request(env, start_response)
    else:
        return error_404(env, start_response)

# Fast but non-streaming?
# server = wsgi.WSGIServer(('0.0.0.0', PORT), handle_request)

# Slower but streams responses.
server = pywsgi.WSGIServer(('0.0.0.0', PORT), handle_request, log=open('/dev/null', 'w'))

print('gevent_test listening on http://localhost:%s' % PORT)
server.serve_forever()
node_test.js

// Configuration bits.
var PORT = 8080;
var LOG_PATH = '/Users/travis/node_test.log';

// =============

var fs = require('fs');
var http = require('http');
var sys = require('sys');
var puts = sys.puts;

function write_to_log_sync(request, status_code) {
    var log = fs.createWriteStream(LOG_PATH, {'flags': 'a'});
    var current_time = new Date();
    var message = "" + current_time + " - [" + status_code + "] " + request.url + "\n";
    log.write(message);
    log.end();
}

function write_to_log(request, status_code) {
    fs.open(LOG_PATH, "a", 0666, function(err, fd) {
        if (err) {
            sys.puts("Error opening " + LOG_PATH);
            return;
        }
        var current_time = new Date();
        var message = "" + current_time + " - [" + status_code + "] " + request.url + "\n";
        // Close only once the asynchronous write has finished; calling
        // fs.close() immediately after fs.write() races with the write itself.
        fs.write(fd, message, null, 'utf8', function() {
            fs.close(fd);
        });
    });
}

function error_404(request, response) {
    write_to_log(request, 404);
    response.writeHead(404, {'Content-Type': 'text/html'});
    response.write('Sorry, you must be thinking of something else.');
    response.end();
}

function process_request(request, response) {
    write_to_log(request, 200);
    response.writeHead(200, {'Content-Type': 'text/html'});
    response.write('OK');
    response.end();
}

http.createServer(function(request, response) {
    if (request.url == '/') {
        process_request(request, response);
    } else {
        error_404(request, response);
    }
}).listen(PORT);

sys.puts('node_test listening on http://localhost:' + PORT);

Node vs. Gevent

Specs

  • 2008 MacBook Pro (non-unibody)
  • 2.4 GHz Core 2 Duo
  • 2 GB of RAM

How

Fresh reboot, running only:

  • TextMate
  • Terminal.app w/ 2 tabs
  • Activity Monitor (to watch CPU use & memory)

Each test was run 3-4 times, and the middle numbers from those runs are reported below. This is not a proper average.
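Taking the middle number of a few runs is essentially an informal median; a small sketch of doing it properly with the stdlib (the req/s values here are made up for illustration, not taken from the tables below):

```python
import statistics

# Hypothetical requests-per-second readings from four runs of one ab invocation.
runs = [4298.28, 4432.60, 4187.11, 4301.95]

# statistics.median sorts the values and, for an even count, averages the
# two middle ones -- a more defensible "middle number" than eyeballing it.
middle = statistics.median(runs)
print(middle)
```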

Notes

  • The gevent_test uses gevent.pywsgi instead of gevent.wsgi. It's slower, but it streams responses, which makes it equivalent to Node (Node streams by default).
  • Node writes to the log file asynchronously, while gevent does the write in a spawned greenlet (plain file I/O isn't cooperative, so the write itself may still block).
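The gevent caveat above matters because gevent.spawn only yields on cooperative I/O; a plain open()/write() on a regular file inside a greenlet can still stall the event loop. A minimal stdlib sketch of offloading the append to a background thread instead (the function name and log path are illustrative, not from the gist):

```python
import concurrent.futures
import datetime

# One background thread handles all log appends, so request handlers
# never wait on disk I/O (an illustrative sketch, not the gist's code).
_log_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def write_to_log_async(path_info, status_code=200, log_path='app.log'):
    def _write():
        with open(log_path, 'a') as log:
            # Same line format both servers write: "<timestamp> - [<status>] <path>"
            log.write("%s - [%s] %s\n" % (datetime.datetime.now(), status_code, path_info))
    # Returns a Future the caller is free to ignore, much like gevent.spawn.
    return _log_pool.submit(_write)
```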

Node

Unloaded RAM: 5.91 MB

ab options      Failed reqs   Reqs per sec   Time per req (ms)   Time per req, concurrent (ms)   Max MB
-c 10 -n 1000   0             4298.28        2.327               0.233                           10.18
-c 25 -n 1000   0             4432.60        5.640               0.226                           11.38
-c 50 -n 1000   0             4582.74        10.910              0.218                           12.23
-c 100 -n 1000  0             4084.32        24.484              0.245                           12.86
-c 200 -n 1000  0             3819.53        52.362              0.262                           13.70
-c 300 -n 1000  0             3282.97        91.381              0.305                           14.20
-c 400 -n 1000  0             3367.26        118.791             0.297                           14.50
-c 500 -n 1000  0             3402.62        146.946             0.294                           14.89

Gevent

Unloaded RAM: 8.52 MB

ab options      Failed reqs   Reqs per sec   Time per req (ms)   Time per req, concurrent (ms)   Max MB
-c 10 -n 1000   0             1501.93        6.658               0.666                           8.60
-c 25 -n 1000   0             1300.63        19.221              0.769                           8.60?
-c 50 -n 1000   0             1418.63        35.245              0.705                           8.60?
-c 100 -n 1000  0             1329.82        75.198              0.752                           8.60?
-c 200 -n 1000  0             1223.89        163.414             0.817                           8.60?
-c 300 -n 1000  0             1145.65        275.339             0.918                           8.60?
-c 400 -n 1000  0[1]          1111.43        359.897             0.900                           8.60?
-c 500 -n 1000  N/A[2]        N/A            N/A                 N/A                             N/A