@djinn · Created March 10, 2025 11:43
Installing InstructLab on Ubuntu Focal with Nvidia GPU
#!/bin/bash
# Run as root (or prefix apt/add-apt-repository with sudo). bash is needed
# because `source` is not available in Focal's default /bin/sh (dash).
lsb_release -a  # confirm the host is Ubuntu 20.04 (Focal)
# Focal ships Python 3.8, so install Python 3.11 from the deadsnakes PPA.
# (add-apt-repository is provided by the software-properties-common package.)
add-apt-repository ppa:deadsnakes/ppa --yes
apt install -y python3.11 python3.11-venv python3.11-dev
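# Optional check (an addition, not in the original gist): confirm the
# interpreter installed correctly.
python3.11 --version  # should print Python 3.11.x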
# Create and activate a Python 3.11 virtual environment.
python3.11 -m venv env
source env/bin/activate
# packaging and wheel are required because the instructlab install below
# runs with --no-build-isolation.
pip install packaging wheel
# Install the CUDA 12.4 build of PyTorch 2.6.0 for CPython 3.11 directly
# from the official wheel (pinned by the sha256 fragment).
pip install https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp311-cp311-linux_x86_64.whl#sha256=d4c3e9a8d31a7c0fcbb9da17c31a1917e1fac26c566a4cfbd8c9568ad7cade79
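# Optional sanity check (an addition, not in the original gist): make sure
# PyTorch imports and can see the GPU before building the CUDA backend.
python -c 'import torch; print(torch.__version__, torch.cuda.is_available())'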
# Build and install InstructLab with the CUDA-enabled llama.cpp backend.
pip install --no-build-isolation 'instructlab[cuda]' -C cmake.args="-DLLAMA_CUDA=on" -C cmake.args="-DLLAMA_NATIVE=off"
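# Quick check (assumes the CLI exposes --version, as current releases do):
ilab --version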
# Generate the default InstructLab config and data directories.
ilab config init
cd ~/.local/share/instructlab  # default ilab data directory on Linux
# Seed the local taxonomy with a sample knowledge file: the Phoenix
# constellation qna.yaml from the upstream instructlab/taxonomy repository.
mkdir -p taxonomy/knowledge/astronomy/constellations/Phoenix/
wget https://raw.githubusercontent.com/instructlab/taxonomy/26b3fe21ccbb95adc06fe8ce76c7c18559e8dd05/knowledge/science/astronomy/constellations/phoenix/qna.yaml
mv qna.yaml taxonomy/knowledge/astronomy/constellations/Phoenix/
ilab taxonomy diff  # verify the new qna.yaml is detected and passes validation
# Download the default models (substitute your own Hugging Face token).
ilab model download --hf-token <Hugging Face Token>
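# Alternative (assumes your huggingface_hub release honours the HF_TOKEN
# environment variable, which recent versions do): export the token so it
# stays out of the command line and shell history.
# export HF_TOKEN=<Hugging Face Token>
# ilab model download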
# Manual step: edit config.yaml so the downloaded models run on llama.cpp
# (models downloaded this way are GGUF files served by llama.cpp by default),
# and point the model entries in config.yaml at the downloaded defaults.
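# Illustrative excerpt only; the key names and model path below are
# assumptions based on a stock `ilab config init` file, so match them to
# whatever your version generated:
#
#   serve:
#     model_path: ~/.cache/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf
#   generate:
#     model: ~/.cache/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf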
ilab data generate  # generate synthetic question/answer data from the taxonomy
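# The generated dataset lands under the ilab data directory (the path is an
# assumption based on the default layout; adjust if your config differs):
ls ~/.local/share/instructlab/datasets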