@samkeen
Created November 23, 2010 19:08
Orders-of-magnitude increases in latency as you leave the CPU and head up the stack to the network
L1 cache reference                    |         0.5 ns
Branch mispredict                     |           5 ns
L2 cache reference                    |           7 ns
Mutex lock/unlock                     |          25 ns
Main memory reference                 |         100 ns
Compress 1K bytes w/ cheap algorithm  |       3,000 ns
Send 2K bytes over 1 Gbps network     |      20,000 ns
Read 1 MB sequentially from memory    |     250,000 ns
Round trip within same datacenter     |     500,000 ns
Disk seek                             |  10,000,000 ns
Read 1 MB sequentially from disk      |  20,000,000 ns
Send packet CA->Netherlands->CA       | 150,000,000 ns
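To make the jumps easier to compare, here is a minimal Python sketch (an illustration, not part of the original gist) that takes the numbers from the table above and expresses each operation as a multiple of an L1 cache reference:

```python
# Latencies from the table above, in nanoseconds.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes w/ cheap algorithm": 3_000,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Round trip within same datacenter": 500_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from disk": 20_000_000,
    "Send packet CA->Netherlands->CA": 150_000_000,
}

baseline = LATENCIES_NS["L1 cache reference"]  # 0.5 ns

for name, ns in LATENCIES_NS.items():
    # How many L1 cache references fit into one of these operations.
    print(f"{name:<40} {ns:>13,.1f} ns   {ns / baseline:>13,.0f}x L1")
```

Running it makes the spread explicit: a round trip within a datacenter is about a million L1 references, and the CA->Netherlands->CA packet is roughly 3 x 10^8 of them.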