
Claude Code Multi-Provider Setup Guide

Overview

Claude Code is a powerful command-line interface that allows developers to interact with Anthropic's Claude AI models for coding assistance. This guide provides two battle-tested methods to configure Claude Code to work with multiple LLM providers beyond just Anthropic, including models from DeepSeek, z.ai (GLM), Kimi, and OpenRouter.

This empowers you to switch to the best model for a given task without ever leaving your terminal.

Two Approaches to Flexibility

  • Shell Functions: A simple, lightweight, and robust method for switching providers before starting a session. Perfect for most use cases.

  • Python Proxy: A more advanced but incredibly flexible solution that allows for switching models within an active session using a simple /model command.

This guide also covers setting up y-router, a local translation service that enables OpenAI-compatible services like OpenRouter to work seamlessly with Claude Code's Anthropic-native API format.

Method 1: Shell Functions (Quick & Easy)

Use simple bash functions to quickly switch between different LLM providers using memorable commands like deepseek, glm, kimi, and openrouter.

Step 1: Add Functions to Your Shell Config

Open your ~/.bashrc (for Bash) or ~/.zshrc (for Zsh) and add the following functions:

# === CLAUDE CODE MULTI-PROVIDER SWITCHER ===
# Assumes 'claude' is in your PATH (e.g., installed via `npm install -g @anthropic-ai/claude-code`)

# --- DeepSeek Configuration ---
# Usage: deepseek
deepseek() {
    export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
    export ANTHROPIC_AUTH_TOKEN="${DEEPSEEK_API_KEY}"
    export ANTHROPIC_DEFAULT_OPUS_MODEL="deepseek-reasoner"
    export ANTHROPIC_DEFAULT_SONNET_MODEL="deepseek-chat"
    export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
    claude "$@"
}

# --- z.ai (GLM) Configuration ---
# Usage: glm
glm() {
    export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
    export ANTHROPIC_AUTH_TOKEN="${Z_AI_API_KEY}"
    export ANTHROPIC_DEFAULT_HAIKU_MODEL="glm-4.5-air"
    export ANTHROPIC_DEFAULT_SONNET_MODEL="glm-4.6"
    export ANTHROPIC_DEFAULT_OPUS_MODEL="glm-4.6"
    claude "$@"
}

# --- Kimi (Moonshot AI) Configuration ---
# Usage: kimi
kimi() {
    export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic"
    export ANTHROPIC_AUTH_TOKEN="${KIMI_API_KEY}"
    claude "$@"
}

# --- OpenRouter Configuration (Requires local y-router) ---
# Usage: openrouter
openrouter() {
    export ANTHROPIC_BASE_URL="http://localhost:8787"
    export ANTHROPIC_API_KEY="${OPENROUTER_API_KEY}"
    # y-router uses a custom header for the key
    export ANTHROPIC_CUSTOM_HEADERS="x-api-key: $ANTHROPIC_API_KEY"
    claude "$@"
}

# --- Reset to Default (Anthropic) ---
# Usage: claude_reset
claude_reset() {
    unset ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN ANTHROPIC_API_KEY
    unset ANTHROPIC_CUSTOM_HEADERS ANTHROPIC_MODEL ANTHROPIC_SMALL_FAST_MODEL
    # Also clear the per-provider model overrides set by deepseek/glm above
    unset ANTHROPIC_DEFAULT_OPUS_MODEL ANTHROPIC_DEFAULT_SONNET_MODEL ANTHROPIC_DEFAULT_HAIKU_MODEL
    unset CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC
    echo "Claude environment has been reset to default."
}
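Optionally, you can add a small guard that warns you when a provider's key is missing before launching. This helper is a suggested addition, not part of the original functions, and the ${!1} indirect expansion is Bash-specific (Zsh users would use ${(P)1} instead):

# Optional: warn if a provider's API key is missing (suggested helper, Bash-only)
_require_key() {
    if [ -z "${!1}" ]; then
        echo "Warning: $1 is not set. Add it to ~/.secrets (see Step 2)." >&2
        return 1
    fi
}

# Example: call it at the top of a provider function:
# deepseek() { _require_key DEEPSEEK_API_KEY || return 1; ... }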

Step 2: Secure Your API Keys

Create a ~/.secrets file to store your API keys securely, outside of your shell config. First, create and open the file:

nano ~/.secrets

Add your keys:

export DEEPSEEK_API_KEY="your_deepseek_api_key_here"
export Z_AI_API_KEY="your_z_ai_api_key_here"
export KIMI_API_KEY="your_kimi_api_key_here"
export OPENROUTER_API_KEY="your_openrouter_api_key_here"

Set strict permissions so only you can read it:

chmod 600 ~/.secrets

Now, add the following line to the top of your ~/.bashrc or ~/.zshrc to load these keys automatically:

if [ -f ~/.secrets ]; then
    source ~/.secrets
fi
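To confirm the keys actually load in a fresh shell, a quick check like this prints whether each key is set without revealing its value (Bash-only, due to the ${!key} indirect expansion):

# Check that each key is defined without printing the secret itself
for key in DEEPSEEK_API_KEY Z_AI_API_KEY KIMI_API_KEY OPENROUTER_API_KEY; do
    if [ -n "${!key}" ]; then echo "$key: set"; else echo "$key: MISSING"; fi
done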

Step 3: Set Up y-router for OpenRouter (Optional)

To use OpenRouter, you need to run the y-router translation service locally using Docker.

  1. Clone the y-router repository:
git clone https://github.com/luohy15/y-router
cd y-router
  2. Start the service using Docker Compose:
docker-compose up -d

The service will now be running at http://localhost:8787, which the openrouter() function is configured to use.
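To verify y-router is reachable, you can send it a minimal Anthropic-style request with curl. This sketch assumes y-router exposes the same /v1/messages route that the proxy configuration later in this guide uses, and that OPENROUTER_API_KEY is loaded:

curl -s http://localhost:8787/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $OPENROUTER_API_KEY" \
  -d '{"model": "x-ai/grok-code-fast-1", "max_tokens": 32,
       "messages": [{"role": "user", "content": "ping"}]}'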

Step 4: Usage

Reload your shell configuration:

source ~/.bashrc
# or source ~/.zshrc

Launch Claude Code with your chosen provider:

deepseek  # Starts Claude Code using the DeepSeek API
glm       # Starts Claude Code using the z.ai GLM API
kimi      # Starts Claude Code using the Kimi API
openrouter  # Starts Claude Code using OpenRouter (make sure Docker is running!)

All arguments are passed through to claude, so you can use flags as normal, e.g. deepseek --model deepseek-reasoner
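Because arguments pass straight through, one-off non-interactive queries also work via Claude Code's --print (-p) flag:

# Ask a single question and exit, using the DeepSeek-backed configuration
deepseek -p "Summarize what this project's README covers."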

Method 2: Python Proxy (In-Session Switching)

This advanced method runs a local proxy that lets you switch models inside a Claude Code session with a /model <provider>/<model_name> command.

Step 1: Create the Proxy Script

Save the Python script listed at the end of this guide as simple-proxy.py somewhere on your system.

Step 2: Install Dependencies

You'll need fastapi, uvicorn, httpx, and pydantic.

pip install fastapi "uvicorn[standard]" httpx pydantic
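A quick sanity check that everything installed correctly:

python -c "import fastapi, uvicorn, httpx, pydantic; print('dependencies OK')"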

Step 3: Run the Proxy and Claude Code

Run y-router (if using OpenRouter):

cd y-router && docker-compose up -d

Start the proxy server:

python /path/to/your/simple-proxy.py

Open a new terminal and configure Claude Code to point at the proxy. Note the proxy listens on port 8788 so it doesn't collide with y-router, which already occupies 8787:

export ANTHROPIC_BASE_URL="http://localhost:8788"
export ANTHROPIC_API_KEY="dummy" # The proxy handles auth, so this can be anything
claude
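You can confirm the proxy is running with its built-in /health endpoint:

curl -s http://localhost:8788/health
# Expected output: {"status":"ok"}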

Step 4: Switch Models In-Session

Now, within the running Claude Code session, you can switch models on the fly by typing a message like:

/model deepseek/deepseek-chat
/model openrouter/x-ai/grok-code-fast-1

The proxy will route your next request to the specified provider and model.

simple-proxy.py

#!/usr/bin/env python3
"""
Claude Code Multi-Provider Proxy

A simple local proxy to route Claude Code requests to various LLM providers,
enabling in-session model switching with a `/model` command.

Setup:
1. Save this script as `simple-proxy.py`.
2. Install dependencies: pip install fastapi "uvicorn[standard]" httpx pydantic
3. Set your API keys as environment variables (e.g., in ~/.secrets).
4. Run the script: python simple-proxy.py
5. Configure Claude Code:
       export ANTHROPIC_BASE_URL="http://localhost:8788"
       export ANTHROPIC_API_KEY="dummy"
       claude
"""
import os
import logging
from contextlib import asynccontextmanager
from typing import Any, Dict, Optional, Tuple

import httpx
from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import StreamingResponse

# --- Configuration ---
# Maps provider keys to their API endpoint and the environment variable for the API key.
PROVIDERS = {
    "deepseek": {
        "base_url": "https://api.deepseek.com/anthropic/v1/messages",
        "api_key_env": "DEEPSEEK_API_KEY",
    },
    "zai": {
        "base_url": "https://api.z.ai/api/anthropic/v1/messages",
        "api_key_env": "Z_AI_API_KEY",
    },
    "kimi": {
        "base_url": "https://api.moonshot.ai/anthropic/v1/messages",
        "api_key_env": "KIMI_API_KEY",
    },
    "openrouter": {
        # This connects to your local y-router instance.
        "base_url": "http://localhost:8787/v1/messages",
        "api_key_env": "OPENROUTER_API_KEY",
    },
}

# Default provider and model if none is specified in the request
DEFAULT_PROVIDER = "openrouter"
DEFAULT_MODEL = "x-ai/grok-code-fast-1"

# Logging setup
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")


# --- FastAPI Lifespan and App ---
@asynccontextmanager
async def lifespan(app: FastAPI):
    """Handles application startup and shutdown."""
    logging.info("Starting up proxy server...")
    # You can add initialization logic here if needed
    yield
    logging.info("Shutting down proxy server...")


app = FastAPI(lifespan=lifespan)


# --- Core Logic ---
def get_provider_config(provider_key: str) -> Dict[str, Any]:
    """Retrieves provider configuration and API key."""
    config = PROVIDERS.get(provider_key)
    if not config:
        raise HTTPException(status_code=400, detail=f"Invalid provider specified: {provider_key}")
    api_key = os.getenv(config["api_key_env"])
    if not api_key:
        raise HTTPException(
            status_code=500,
            detail=f"API key environment variable '{config['api_key_env']}' not set for provider '{provider_key}'.",
        )
    return {"base_url": config["base_url"], "api_key": api_key}


def parse_model_command(body: Dict[str, Any]) -> Optional[Tuple[str, str]]:
    """
    Parses a `/model <provider>/<model_name>` command from the user's messages.
    If found, it modifies the message list to remove the command.
    """
    messages = body.get("messages", [])
    for i, msg in enumerate(messages):
        content = msg.get("content")
        if msg.get("role") == "user" and isinstance(content, str) and content.strip().startswith("/model "):
            parts = content.strip().split()
            if len(parts) == 2:
                model_identifier = parts[1]
                if "/" in model_identifier:
                    # Remove the command from the message content so the
                    # LLM never sees the routing command.
                    cleaned_content = content.replace(f"/model {model_identifier}", "").strip()
                    if cleaned_content:
                        messages[i]["content"] = cleaned_content
                    else:
                        # If the message was only the command, drop the whole message.
                        messages.pop(i)
                    provider, model = model_identifier.split("/", 1)
                    return provider, model
    return None


# --- API Endpoints ---
@app.post("/v1/messages")
async def messages_proxy(request: Request):
    """
    Main endpoint that receives requests from Claude Code, determines the
    target provider, and forwards the request.
    """
    body = await request.json()
    provider_key = DEFAULT_PROVIDER
    model_name = body.get("model", DEFAULT_MODEL)

    # Check for an in-session /model command
    model_command = parse_model_command(body)
    if model_command:
        provider_key, model_name = model_command
        logging.info(f"Switching to model via command: {provider_key}/{model_name}")
        body["model"] = model_name

    try:
        config = get_provider_config(provider_key)
        headers = {
            "Authorization": f"Bearer {config['api_key']}",
            "Content-Type": "application/json",
            "x-api-key": config["api_key"],  # For y-router compatibility
        }
        timeout = httpx.Timeout(300.0)  # 5-minute timeout for long responses
        logging.info(f"Routing request to {provider_key} at {config['base_url']} with model {model_name}")

        if body.get("stream", False):
            # For streaming, the client must stay open until the generator is
            # exhausted, so it is created here and closed inside the generator.
            client = httpx.AsyncClient(timeout=timeout)
            upstream = client.build_request("POST", config["base_url"], json=body, headers=headers)
            response = await client.send(upstream, stream=True)
            if response.status_code >= 400:
                error_body = await response.aread()
                await client.aclose()
                raise HTTPException(status_code=response.status_code, detail=error_body.decode(errors="replace"))

            async def stream_generator():
                try:
                    async for chunk in response.aiter_bytes():
                        yield chunk
                finally:
                    await response.aclose()
                    await client.aclose()

            return StreamingResponse(stream_generator(), media_type=response.headers.get("content-type"))
        else:
            async with httpx.AsyncClient(timeout=timeout) as client:
                response = await client.post(url=config["base_url"], json=body, headers=headers)
                response.raise_for_status()
                return response.json()
    except httpx.HTTPStatusError as e:
        logging.error(f"HTTP Error from provider {provider_key}: {e.response.status_code} - {e.response.text}")
        raise HTTPException(status_code=e.response.status_code, detail=e.response.text)
    except HTTPException:
        raise
    except Exception as e:
        logging.error(f"An unexpected error occurred: {e}")
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/health")
async def health_check():
    """A simple health check endpoint."""
    return {"status": "ok"}


# --- Main Execution ---
if __name__ == "__main__":
    import uvicorn

    # Binds to 127.0.0.1 (localhost) for safety. Port 8788 avoids clashing
    # with y-router, which listens on 8787.
    uvicorn.run(app, host="127.0.0.1", port=8788)
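To see the routing in action without launching Claude Code, you can hit the proxy directly with curl. In this sketch, the first message carries the /model command (which the proxy strips before forwarding), and DEEPSEEK_API_KEY is assumed to be set:

curl -s http://localhost:8788/v1/messages \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "max_tokens": 64,
    "messages": [
      {"role": "user", "content": "/model deepseek/deepseek-chat"},
      {"role": "user", "content": "Say hello."}
    ]
  }'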