Gist: ctechols/ca1035271ad134841284
# On slow systems, checking the cached .zcompdump file to see if it must be
# regenerated adds a noticeable delay to zsh startup. This little hack restricts
# it to once a day. It should be pasted into your own completion file.
#
# The globbing is a little complicated here:
# - '#q' is an explicit glob qualifier that makes globbing work within zsh's [[ ]] construct.
# - 'N' makes the glob pattern evaluate to nothing when it doesn't match (rather than throw a globbing error)
# - '.' matches "regular files"
# - 'mh+24' matches files (or directories or whatever) that are older than 24 hours.
autoload -Uz compinit
if [[ -n ${ZDOTDIR:-$HOME}/.zcompdump(#qN.mh+24) ]]; then
  compinit
else
  compinit -C
fi
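For readers less used to zsh glob qualifiers, the same "older than 24 hours" test can be sketched portably with find(1); this is only an illustration of the check, not a drop-in replacement for the snippet above (the `dump` variable name is mine):

```shell
# Rough equivalent of the (#qN.mh+24) qualifier using find(1):
# -mtime +0 matches files modified more than 24 hours ago.
dump="${ZDOTDIR:-$HOME}/.zcompdump"
if [ -n "$(find "$dump" -mtime +0 2>/dev/null)" ]; then
  echo "stale: regenerate with compinit"
else
  echo "fresh or missing: load with compinit -C"
fi
```

Note that a missing file also lands in the `else` branch here, whereas zsh's `-n … (#qN…)` test does the same: an unmatched glob with `N` expands to nothing.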
This is my take on the problem; it's a tradeoff between efficiency and simplicity:
autoload -Uz compinit; compinit -C # Use cache to reduce startup time by ~0.1s
# Have another thread refresh the cache in the background (subshell to hide output)
(autoload -Uz compinit; compinit &)
Despite the obvious pitfall (having the shell spawn another process at startup), I wonder if it's overall a good solution 🤔
Here is an update that combines everyone's suggestions into one function! Add this to your .zshrc
for a fast, (relatively) thread-safe load of your zsh completions! (I left out the zinit suggestion shared above, to keep this more universal)
() {
setopt local_options
local zcompdump="${ZDOTDIR:-$HOME}/.zcompdump"
local zcomp_ttl=1 # how many days to let the zcompdump file live before it must be recompiled
local lock_timeout=1 # register an error if lock-timeout exceeded
local lockfile="${zcompdump}.lock"
autoload -Uz compinit
# check for lockfile — if the lockfile exists, we cannot run a compinit
# if no lockfile, then we will create one, and set a trap on EXIT to remove it;
# the trap will trigger after the rest of the function has run.
if [ -f "${lockfile}" ]
then
# error log if the lockfile outlived its timeout
  if [ -n "$( find "${lockfile}" -mmin "+${lock_timeout}" )" ]
then
(
echo "${lockfile} has been held by $(< ${lockfile}) for longer than ${lock_timeout} minute(s)."
echo "This may indicate a problem with compinit"
) >&2
fi
# since the zcompdump is still locked, run compinit without generating a new dump
compinit -D -d "$zcompdump"
# Exit if there's a lockfile; another process is handling things
return 1
else
# Create the lockfile with this shell's PID for debugging
echo $$ > "${lockfile}"
# Ensure the lockfile is removed on exit
trap "rm -f '${lockfile}'" EXIT
fi
# refresh the zcompdump file if needed
  if [ ! -f "$zcompdump" ] || [ -n "$( find "$zcompdump" -mtime "+${zcomp_ttl}" )" ]
then
# if the zcompdump is expired (past its ttl) or absent, we rebuild it
compinit -d "$zcompdump"
else
# load the zcompdump without updating
compinit -CD -d "$zcompdump"
# asynchronously rebuild the zcompdump file
(autoload -Uz compinit; compinit -d "$zcompdump" &);
fi
}
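One caveat with the lockfile logic above: the `[ -f "${lockfile}" ]` check and the `echo $$ > "${lockfile}"` creation are two separate steps, so two shells starting at the same instant can both pass the check. A hedged sketch of an atomic alternative using mkdir(1), whose directory creation succeeds for exactly one caller (the `lockdir` name is hypothetical, not from the function above):

```shell
# mkdir is atomic: the existence check and the acquisition happen as one
# operation, so concurrent shells cannot both "win" the lock.
lockdir="${ZDOTDIR:-$HOME}/.zcompdump.lockdir"
if mkdir "$lockdir" 2>/dev/null; then
  echo "lock acquired; safe to rebuild the dump"
  # ... compinit -d "$zcompdump" would go here ...
  rmdir "$lockdir"
else
  echo "another shell holds the lock; load the cached dump instead"
fi
```

The tradeoff is that a crashed shell leaves a stale lock directory behind, so you would still want a timeout check like the one in the function above.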
Follow-on to the above, I ran a benchmark using hyperfine and found that a regular compinit is still faster...
Just to be extra fair, I compared three versions: standard compinit; the script I shared directly above this message (compinit_subshells.zsh); and the same script but using the extendedglob syntax suggested by @thefotios above instead of the subshells (compinit_fast.zsh).
❯ hyperfine --show-output --shell='zsh -l' --warmup 3 --min-runs 30 --setup 'autoload -Uz compinit; compinit;' './compinit_fast.zsh' './compinit_subshells.zsh' 'autoload -Uz compinit; compinit'
Benchmark 1: ./compinit_fast.zsh
Time (mean ± σ): 70.5 ms ± 40.0 ms [User: 0.0 ms, System: 0.0 ms]
Range (min … max): 18.1 ms … 191.8 ms 30 runs
Benchmark 2: ./compinit_subshells.zsh
Time (mean ± σ): 89.1 ms ± 50.7 ms [User: 0.0 ms, System: 0.0 ms]
Range (min … max): 14.3 ms … 229.7 ms 30 runs
Benchmark 3: autoload -Uz compinit; compinit
Time (mean ± σ): 59.1 ms ± 31.1 ms [User: 45.0 ms, System: 17.2 ms]
Range (min … max): 16.4 ms … 119.3 ms 30 runs
Summary
autoload -Uz compinit; compinit ran
1.19 ± 0.92 times faster than ./compinit_fast.zsh
1.51 ± 1.17 times faster than ./compinit_subshells.zsh
❯ hyperfine --show-output --shell='zsh' --warmup 3 --min-runs 30 --setup 'autoload -Uz compinit; compinit;' './compinit_fast.zsh' './compinit_subshells.zsh' 'autoload -Uz compinit; compinit'
Benchmark 1: ./compinit_fast.zsh
Time (mean ± σ): 69.1 ms ± 6.2 ms [User: 38.9 ms, System: 23.8 ms]
Range (min … max): 61.2 ms … 84.4 ms 32 runs
Benchmark 2: ./compinit_subshells.zsh
Time (mean ± σ): 75.3 ms ± 5.6 ms [User: 39.5 ms, System: 26.8 ms]
Range (min … max): 68.3 ms … 90.2 ms 34 runs
Benchmark 3: autoload -Uz compinit; compinit
Time (mean ± σ): 58.7 ms ± 4.0 ms [User: 44.2 ms, System: 14.3 ms]
Range (min … max): 53.1 ms … 74.5 ms 43 runs
Summary
autoload -Uz compinit; compinit ran
1.18 ± 0.13 times faster than ./compinit_fast.zsh
1.28 ± 0.13 times faster than ./compinit_subshells.zsh
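For reference, the "times faster" figures in hyperfine's summary are just ratios of the mean times; e.g. for Benchmark 1 versus Benchmark 3 in the second run:

```shell
# 69.1 ms (compinit_fast) / 58.7 ms (plain compinit) ≈ 1.18
awk 'BEGIN { printf "%.2f\n", 69.1 / 58.7 }'
# prints 1.18
```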
This even includes using antidote to load a bunch of plugins, so I know there are a lot of completions to load.
So at this point, the above "fast" implementation may no longer hold value on recent zsh builds. Please feel free to share your own benchmarks, though, in case it's useful!
For oh-my-zsh users, I wrote a bit to integrate this check into oh-my-zsh. At some point, I'll submit a PR to omz: ohmyzsh/ohmyzsh@master...forivall:oh-my-zsh:use-cached-compdump
From 3514d7099b68a06659f4882adfff808ab6fa0d51 Mon Sep 17 00:00:00 2001
From: Emily M Klassen <[email protected]>
Date: Fri, 1 Nov 2024 14:12:56 -0700
Subject: [PATCH] feat(completions): add option to use a cached compdump
---
oh-my-zsh.sh | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/oh-my-zsh.sh b/oh-my-zsh.sh
index b1032841c677..447b006f4344 100644
--- a/oh-my-zsh.sh
+++ b/oh-my-zsh.sh
@@ -121,7 +121,11 @@ if ! command grep -q -Fx "$zcompdump_revision" "$ZSH_COMPDUMP" 2>/dev/null \
zcompdump_refresh=1
fi
-if [[ "$ZSH_DISABLE_COMPFIX" != true ]]; then
+if [[ "$ZSH_COMPINIT_CACHE" == true && ! (( $zcompdump_refresh )) ]] \
+ && () { setopt local_options extendedglob; [[ -z "$ZSH_COMPDUMP"(#qN.mh+24) ]] }; then
+ # If the compdump was modified less than 24 hours ago, use the cached compdump, disable autodump
+ compinit -C -d "$ZSH_COMPDUMP" -D
+elif [[ "$ZSH_DISABLE_COMPFIX" != true ]]; then
source "$ZSH/lib/compfix.zsh"
# Load only from secure directories
compinit -i -d "$ZSH_COMPDUMP"
From my ad-hoc testing, this saves about 50 ms of startup time on my dotfiles setup, which loads omz through zgenom.
A warning about random zsh CPU issues with subshells and background compinit: IDEs such as JetBrains (IntelliJ, PyCharm) and VS Code will all invoke your shell, and I've had instances where zsh got stuck doing funky things, causing CPU usage to max out.
I end up double-checking not just for interactive and login shells (which VS Code and IntelliJ both claim to be), but also for $INTELLIJ_ENVIRONMENT_READER and $TERM_PROGRAM.
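A hedged sketch of that kind of guard; the function name and the exact `$TERM_PROGRAM` value matched are illustrative assumptions, not taken from the comment above:

```shell
# Skip background compinit work when an IDE is merely probing the shell.
should_skip_background_rebuild() {
  # JetBrains IDEs set this when reading the environment non-interactively.
  [ -n "${INTELLIJ_ENVIRONMENT_READER:-}" ] && return 0
  case "${TERM_PROGRAM:-}" in
    vscode) return 0 ;;
  esac
  return 1
}

if should_skip_background_rebuild; then
  echo "IDE probe detected: skipping background rebuild"
else
  echo "interactive use: background rebuild allowed"
fi
```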
Does anybody have a .zcompdump_capture file being created in their $HOME directory?
@forivall hello, did you create the MR?
This has helped a bit with startup on macOS after they defaulted to zsh. (I'm going to set my user back to ksh93 anyway.)
@quyenvsp not yet.
@aztack It helped. 🙇