The test was performed on a $GOPATH workspace stored on an NFS share mounted at /mnt/nfs.

Bonnie++ results for the NFS share:
/mnt/nfs/ $ bonnie++ -d /mnt/nfs/bonnie -r 2048 -u rjeczalik
Using uid:1000, gid:1000.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
rjeczalik-ub64 4G 1108 98 103542 7 52298 5 1953 99 129330 5 +++++ +++
Latency 12531us 2199ms 14366ms 9227us 115ms 6620us
Version 1.97 ------Sequential Create------ --------Random Create--------
rjeczalik-ub64 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 320 5 21422 13 733 7 318 6 1534 6 728 7
Latency 35068us 41094us 58079us 19346us 1804us 15918us
1.97,1.97,rjeczalik-ub64,1,1403775262,4G,,1108,98,103542,7,52298,5,1953,99,129330,5,+++++,+++,16,,,,,320,5,21422,13,733,7,318,6,1534,6,728,7,12531us,2199ms,14366ms,9227us,115ms,6620us,35068us,41094us,58079us,19346us,1804us,15918us
The "killer" $GOPATH workspace was set up as follows:
/mnt/nfs $ export GOPATH=$(pwd)
/mnt/nfs $ go get -t github.com/dotcloud/docker/...
/mnt/nfs $ git clone [email protected]:bradfitz/camlistore.git src/github.com/bradfitz/camlistore && ln -s src/github.com/bradfitz/camlistore src/camlistore.org
/mnt/nfs $ go get -t github.com/bradfitz/camlistore/...
/mnt/nfs $ du -hs src/
98M src/
/mnt/nfs $ find src/ -type d | wc -l # directory count
1061
/mnt/nfs $ find src/ -type d | tr -dc / | wc -c
6673
/mnt/nfs $ echo $((6673/1061 + 1)) # poor man's average tree depth
7
/mnt/nfs $ ln -s src/ data # let's pretend we have lots of data to traverse
/mnt/nfs $ for i in {1..3}; do echo $i | sudo tee /proc/sys/vm/drop_caches; done
/mnt/nfs $ go-bindata
go-bindata: Traverse took 2.28536904s, items found len(cfgs)=493
/mnt/nfs $ go-bindata
go-bindata: Traverse took 1.138567143s, items found len(cfgs)=493
/mnt/nfs $ go-bindata
go-bindata: Traverse took 1.151180622s, items found len(cfgs)=493
(main.go was patched to perform only the bindata.Glob call:)
diff --git a/go-bindata/main.go b/go-bindata/main.go
index ec1f8cf..35df68e 100644
--- a/go-bindata/main.go
+++ b/go-bindata/main.go
@@ -47,10 +47,13 @@ func copycfg(dst, src *bindata.Config) {
func main() {
c, auto := parseArgs()
if auto {
+ t := time.Now()
cfgs, err := bindata.Glob(os.Getenv("GOPATH"))
if err != nil {
die(err)
}
+ fmt.Printf("go-bindata: Traverse took %v, items found len(cfgs)=%d\n", time.Now().Sub(t), len(cfgs))
+ return
for _, cfg := range cfgs {
copycfg(cfg, c)
}
More interestingly, the traverse times can be improved dramatically with this simple trick:
/mnt/nfs $ file data
data: symbolic link to `src'
/mnt/nfs $ rm -v data
removed ‘data’
/mnt/nfs $ time go-bindata
bindata: bindata: no matching $GOPATH/data directories found or no input ones provided
real 0m0.004s
user 0m0.002s
sys 0m0.003s