
@membphis
Last active May 20, 2025 06:26
Apache APISIX benchmark script: https://github.com/iresty/apisix/blob/master/benchmark/run.sh
Kong benchmark script:
# Create a service pointing at the local test backend
curl -i -X POST \
  --url http://localhost:8001/services/ \
  --data 'name=example-service' \
  --data 'host=127.0.0.1'

# Add a route matching /hello to that service
curl -i -X POST \
  --url http://localhost:8001/services/example-service/routes \
  --data 'paths[]=/hello'

# Enable rate-limiting on the route (use the route id returned by the call above)
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
  --data "name=rate-limiting" \
  --data "config.hour=999999999999" \
  --data "config.policy=local"

# Enable the prometheus plugin on the same route
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
  --data "name=prometheus"

# Smoke test, then benchmark with wrk
curl -i http://127.0.0.1:8000/hello/hello
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
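The route id in the plugin calls above is the one returned by the route-creation request, and it will differ on every install. A minimal sketch of pulling the id out of the Admin API's JSON response with plain `sed` (the response body below is a hypothetical, truncated example, not real Kong output):

```shell
# Hypothetical (truncated) JSON body returned by POST /services/example-service/routes
resp='{"id":"efd9d857-39bf-4154-85ec-edb7c1f53856","paths":["/hello"]}'

# Extract the "id" field without extra dependencies such as jq
route_id=$(printf '%s' "$resp" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')

echo "$route_id"
```

In a real run you would capture the body of the route-creation `curl` call (dropping `-i` so only the body is printed) into `resp` before extracting the id.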
@Dhruv-Garg79

@cboitel Have you used any of these after your benchmarks? What would you recommend for someone looking to adopt one of them?
I am interested in a simple nginx use case plus auth, with low latency and throughput up to 3,000 QPS in the long run.

@knysfh

knysfh commented Jan 11, 2025

@Dhruv-Garg79 Hi, may I ask which API gateway you ultimately chose?

@membphis
Author

You can choose whichever you like; both are open source, so you can learn everything about them if you want to research them.

For me: Apache APISIX has better performance and a richer ecosystem, and it belongs to the Apache Software Foundation, which makes it more open and safe.

@zhenguoli

| Configuration | Workers | Plugins | Requests/sec | Transfer/sec | Avg Latency | Max Latency | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| APISIX - 1 Worker, No Plugins | 1 | None | ~16,407.59 | ~65.53 MB/s | ~0.96 ms | ~5.47 ms | Baseline performance |
| APISIX - 1 Worker, With Plugins | 1 | limit-count, prometheus | ~16,129.60 | ~65.53 MB/s | ~0.99 ms | ~12.64 ms | Minor drop with plugins |
| APISIX - 2 Workers, No Plugins | 2 | None | ~35,386.00 | ~141.33 MB/s | ~0.45 ms | ~9.50 ms | Strong scaling with more workers |
| APISIX - 2 Workers, With Plugins | 2 | limit-count, prometheus | ~30,633.00 | ~124.46 MB/s | ~0.52 ms | ~10.34 ms | Still performs well with plugins |
| Fake APISIX Server - 1 Worker | 1 | N/A | ~22,504.00 | ~89.55 MB/s | ~0.71 ms | ~4.61 ms | Bare OpenResty baseline |
| Fake APISIX Server - 2 Workers | 2 | N/A | ~46,045.00 | ~183.25 MB/s | ~0.35 ms | ~10.34 ms | Max bare-metal performance |
| Kong - 1 Worker, No Plugins | 1 | None | ~13,728.00 | ~55.76 MB/s | ~1.19 ms | ~6.31 ms | Kong baseline |
| Kong - 2 Workers, No Plugins | 2 | None | ~26,903.00 | ~109.27 MB/s | ~0.62 ms | ~7.76 ms | Good multi-worker scaling |
| Kong - 1 Worker, With Plugins | 1 | rate-limiting, prometheus | ~2,027.00 | ~8.66 MB/s | ~7.85 ms | ~21.36 ms | Significant overhead with plugins |
| Kong - 2 Workers, With Plugins | 2 | rate-limiting, prometheus | ~3,768.00 | ~15.91 MB/s | ~4.27 ms | ~18.91 ms | Better than 1 worker but still slow |

Result table generated by AI from https://gist.github.com/membphis/137db97a4bf64d3653aa42f3e016bd01?permalink_comment_id=3351676#gistcomment-3351676
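Using these (AI-generated, so approximate) numbers, the cost of Kong's plugin chain can be estimated with a bit of shell arithmetic; the values below are the 1-worker requests/sec figures from the table:

```shell
# Kong, 1 worker: requests/sec without and with rate-limiting + prometheus (from the table)
baseline=13728
with_plugins=2027

# Integer percentage drop in throughput once the two plugins are enabled
drop=$(( (baseline - with_plugins) * 100 / baseline ))
echo "${drop}% throughput drop"   # prints "85% throughput drop"
```

By the same arithmetic, APISIX's drop with its two plugins enabled (16,407 to 16,129 req/s) is under 2%, which is the gap the comments in the table are pointing at.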

@Dhruv-Garg79

Dhruv-Garg79 commented May 20, 2025

@knysfh After doing a lot of tests, I decided to keep everything directly on the server side without a reverse proxy, using native K8s capabilities plus changes on the main server side. Resource utilisation and performance both improved in my testing.

Another reason is that I don't want too many components; they make everything harder to maintain.
