- Running It
Install dependencies:
pip install -r requirements.txt
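The commands below reference a `litellm_config.yaml`, which isn't shown above. As a minimal sketch (the model name and master key match the values used later in this document; the Ollama `api_base` and upstream model name are assumptions for a local Ollama install):

```yaml
model_list:
  - model_name: ollama-llama3        # the name clients will request
    litellm_params:
      model: ollama/llama3           # assumed upstream Ollama model
      api_base: http://localhost:11434

general_settings:
  master_key: my-secret-litellm-key  # the Bearer token clients must send
```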
Run LiteLLM:
python run_litellm.py
or
litellm --config litellm_config.yaml --host 0.0.0.0 --port 4000
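`run_litellm.py` itself isn't shown above. A minimal sketch, assuming it simply wraps the same CLI invocation in a subprocess (the file's actual contents may differ):

```python
import subprocess

# Assumed to mirror the CLI invocation above.
CONFIG = "litellm_config.yaml"
HOST = "0.0.0.0"
PORT = "4000"

def build_command():
    """Assemble the litellm proxy command line."""
    return ["litellm", "--config", CONFIG, "--host", HOST, "--port", PORT]

def main():
    # Blocks until the proxy is stopped (Ctrl+C).
    subprocess.run(build_command(), check=True)
```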
⸻
- Accessing from Other Devices on Network
Make sure:
• Your firewall allows port 4000.
• You access LiteLLM via your computer’s local IP address, e.g., http://192.168.1.123:4000/v1/chat/completions.
⸻
- Test API Call (Curl Example)
curl http://192.168.1.123:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-secret-litellm-key" \
  -d '{
    "model": "ollama-llama3",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
Make sure to send the API key in the Bearer Authorization header.
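The same call can be made from Python with only the standard library. A minimal sketch, assuming the same IP, key, and model name as the curl example (substitute your own values; `build_chat_request` and `ask` are illustrative helpers, not part of LiteLLM):

```python
import json
import urllib.request

# Assumed values -- substitute your machine's LAN IP and your configured key.
BASE_URL = "http://192.168.1.123:4000"
API_KEY = "my-secret-litellm-key"

def build_chat_request(model, messages):
    """Build an OpenAI-compatible chat completion request for the proxy."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # key goes in the Bearer header
        },
    )

def ask(prompt):
    """Send one user message and return the assistant's reply text."""
    req = build_chat_request("ollama-llama3", [{"role": "user", "content": prompt}])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage, once the proxy is running: `print(ask("Hello"))`.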