
@AndrewAltimit
Last active July 8, 2025 23:16
Claude Code and Gemini CLI Integration

Gemini CLI Integration for Claude Code MCP Server

A complete setup guide for integrating Google's Gemini CLI with Claude Code through an MCP (Model Context Protocol) server. This provides automatic second opinion consultation when Claude expresses uncertainty or encounters complex technical decisions.

🚀 Quick Start

1. Install Gemini CLI (Host-based)

# Switch to Node.js 22.16.0
nvm use 22.16.0

# Install Gemini CLI globally
npm install -g @google/gemini-cli

# Test installation
gemini --help

# Authenticate with Google account (free tier: 60 req/min, 1,000/day)
# Authentication happens automatically on first use

2. Direct Usage (Fastest)

# Direct consultation (no container setup needed)
echo "Your question here" | gemini

# Example: Technical questions
echo "Best practices for microservice authentication?" | gemini -m gemini-2.5-pro

🏠 Host-Based MCP Integration

Architecture Overview

  • Host-Based Setup: Both MCP server and Gemini CLI run on host machine
  • Why Host-Only: Gemini CLI requires interactive authentication, and running everything on the host avoids Docker-in-Docker complexity
  • Auto-consultation: Detects uncertainty patterns in Claude responses
  • Manual consultation: On-demand second opinions via MCP tools
  • Response synthesis: Combines both AI perspectives
  • Singleton Pattern: Ensures consistent state management across all tool calls

Key Files Structure

├── mcp-server.py            # Enhanced MCP server with Gemini tools
├── gemini_integration.py    # Core integration module with singleton pattern
├── gemini-config.json       # Gemini configuration
└── setup-gemini-integration.sh  # Setup script

All files should be placed in the same directory for easy deployment.

Host-Based MCP Server Setup

# Start MCP server directly on host
cd your-project
python3 mcp-server.py --project-root .

# Or with environment variables
GEMINI_ENABLED=true \
GEMINI_AUTO_CONSULT=true \
GEMINI_CLI_COMMAND=gemini \
GEMINI_TIMEOUT=200 \
GEMINI_RATE_LIMIT=2 \
python3 mcp-server.py --project-root .

Claude Code Configuration

Create mcp-config.json:

{
  "mcpServers": {
    "project": {
      "command": "python3",
      "args": ["mcp-server.py", "--project-root", "."],
      "cwd": "/path/to/your/project",
      "env": {
        "GEMINI_ENABLED": "true",
        "GEMINI_AUTO_CONSULT": "true", 
        "GEMINI_CLI_COMMAND": "gemini"
      }
    }
  }
}

🤖 Core Features

1. Uncertainty Detection

Automatically detects patterns like:

  • "I'm not sure", "I think", "possibly", "probably"
  • "Multiple approaches", "trade-offs", "alternatives"
  • Critical operations: "security", "production", "database migration"

2. MCP Tools Available

  • consult_gemini - Manual consultation with context
  • gemini_status - Check integration status and statistics
  • toggle_gemini_auto_consult - Enable/disable auto-consultation

3. Response Synthesis

  • Identifies agreement/disagreement between Claude and Gemini
  • Provides confidence levels (high/medium/low)
  • Generates combined recommendations
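The synthesis step itself is not included in the integration module below; as a hedged sketch (the `synthesize_responses` helper and its scoring heuristic are illustrative, not part of the gist), agreement could be approximated by comparing the two answers for shared key terms:

```python
import re

def synthesize_responses(claude_text: str, gemini_text: str) -> dict:
    """Illustrative synthesis: score agreement by shared significant terms.

    A rough proxy: the more 5+ letter terms the two responses share,
    the higher the confidence that they agree.
    """
    def key_terms(text: str) -> set:
        # Keep words of 5+ letters as crude "significant" terms
        return set(re.findall(r"\b[a-zA-Z]{5,}\b", text.lower()))

    claude_terms = key_terms(claude_text)
    gemini_terms = key_terms(gemini_text)
    union = claude_terms | gemini_terms
    overlap = len(claude_terms & gemini_terms) / len(union) if union else 0.0

    if overlap > 0.5:
        confidence = "high"
    elif overlap > 0.2:
        confidence = "medium"
    else:
        confidence = "low"

    return {
        "agreement_score": round(overlap, 2),
        "confidence": confidence,
        "recommendation": (
            "Responses broadly agree; proceed with the shared approach."
            if confidence != "low"
            else "Responses diverge; review both answers before deciding."
        ),
    }
```

A real implementation would likely weigh domain terms and contradiction markers rather than raw overlap, but the three-level confidence output matches the behavior described above.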

⚙️ Configuration

Environment Variables

GEMINI_ENABLED=true                   # Enable integration
GEMINI_AUTO_CONSULT=true              # Auto-consult on uncertainty
GEMINI_CLI_COMMAND=gemini             # CLI command to use
GEMINI_TIMEOUT=200                    # Query timeout in seconds
GEMINI_RATE_LIMIT=5                   # Delay between calls (seconds)
GEMINI_MAX_CONTEXT=                   # Max context length
GEMINI_MODEL=gemini-2.5-flash         # Model to use
GEMINI_API_KEY=                       # Optional; leave blank to use the free OAuth tier (setting a key switches to billed API usage)

Gemini Configuration File

Create gemini-config.json:

{
  "enabled": true,
  "auto_consult": true,
  "cli_command": "gemini",
  "timeout": 300,
  "rate_limit_delay": 5.0,
  "log_consultations": true,
  "model": "gemini-2.5-flash",
  "sandbox_mode": true,
  "debug_mode": false,
  "uncertainty_thresholds": {
    "uncertainty_patterns": true,
    "complex_decisions": true,
    "critical_operations": true
  }
}

🧠 Integration Module Core

Uncertainty Patterns (Python)

UNCERTAINTY_PATTERNS = [
    r"\bI'm not sure\b",
    r"\bI think\b", 
    r"\bpossibly\b",
    r"\bprobably\b",
    r"\bmight be\b",
    r"\bcould be\b",
    # ... more patterns
]

COMPLEX_DECISION_PATTERNS = [
    r"\bmultiple approaches\b",
    r"\bseveral options\b", 
    r"\btrade-offs?\b",
    r"\balternatives?\b",
    # ... more patterns
]

CRITICAL_OPERATION_PATTERNS = [
    r"\bproduction\b",
    r"\bdatabase migration\b",
    r"\bsecurity\b",
    r"\bauthentication\b",
    # ... more patterns
]

Basic Integration Class Structure

class GeminiIntegration:
    def __init__(self, config: Optional[Dict[str, Any]] = None):
        self.config = config or {}
        self.enabled = self.config.get('enabled', True)
        self.auto_consult = self.config.get('auto_consult', True)
        self.cli_command = self.config.get('cli_command', 'gemini')
        self.timeout = self.config.get('timeout', 60)
        self.rate_limit_delay = self.config.get('rate_limit_delay', 2.0)
        
    async def consult_gemini(self, query: str, context: str = "") -> Dict[str, Any]:
        """Consult Gemini CLI for second opinion"""
        # Rate limiting
        await self._enforce_rate_limit()
        
        # Prepare query with context
        full_query = self._prepare_query(query, context)
        
        # Execute Gemini CLI command
        result = await self._execute_gemini_command(full_query)
        
        return result
        
    def detect_uncertainty(self, text: str) -> bool:
        """Detect if text contains uncertainty patterns"""
        return any(re.search(pattern, text, re.IGNORECASE) 
                  for pattern in UNCERTAINTY_PATTERNS)

# Singleton pattern implementation
_integration = None

def get_integration(config: Optional[Dict[str, Any]] = None) -> GeminiIntegration:
    """Get or create the global Gemini integration instance"""
    global _integration
    if _integration is None:
        _integration = GeminiIntegration(config)
    return _integration

Singleton Pattern Benefits

The singleton pattern ensures:

  • Consistent Rate Limiting: All MCP tool calls share the same rate limiter
  • Unified Configuration: Changes to config affect all usage points
  • State Persistence: Consultation history and statistics are maintained
  • Resource Efficiency: Only one instance manages the Gemini CLI connection

Usage in MCP Server

from gemini_integration import get_integration

# Get the singleton instance
self.gemini = get_integration(config)

📋 Example Workflows

Manual Consultation

# In Claude Code
Use the consult_gemini tool with:
query: "Should I use WebSockets or gRPC for real-time communication?"
context: "Building a multiplayer application with real-time updates"

Automatic Consultation Flow

User: "How should I handle authentication?"

Claude: "I think OAuth might work, but I'm not certain about the security implications..."

[Auto-consultation triggered]

Gemini: "For authentication, consider these approaches: 1) OAuth 2.0 with PKCE for web apps..."

Synthesis: Both suggest OAuth but Claude uncertain about security. Gemini provides specific implementation details. Recommendation: Follow Gemini's OAuth 2.0 with PKCE approach.
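The flow above could be wired up with a small hook around the uncertainty detector; this is a hedged sketch (the `maybe_auto_consult` function and the trimmed pattern list are illustrative, not the gist's exact code):

```python
import asyncio
import re

# Abbreviated pattern list; the full module defines many more
UNCERTAINTY_PATTERNS = [r"\bI'm not sure\b", r"\bI think\b", r"\bmight be\b"]

def detect_uncertainty(text: str) -> bool:
    """True if the text matches any uncertainty pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in UNCERTAINTY_PATTERNS)

async def maybe_auto_consult(claude_response: str, consult):
    """If Claude's response looks uncertain, request a second opinion.

    `consult` is any async callable mirroring consult_gemini(query, context);
    returns None when no consultation is triggered.
    """
    if not detect_uncertainty(claude_response):
        return None
    return await consult(
        "Provide a second opinion on the following answer.",
        claude_response,
    )
```

Usage: call `maybe_auto_consult(response_text, gemini.consult_gemini)` after each Claude response when auto-consult is enabled.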

🔧 MCP Server Integration

Tool Definitions

@server.list_tools()
async def handle_list_tools():
    return [
        types.Tool(
            name="consult_gemini",
            description="Consult Gemini for a second opinion",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Question for Gemini"},
                    "context": {"type": "string", "description": "Additional context"}
                },
                "required": ["query"]
            }
        ),
        types.Tool(
            name="gemini_status",
            description="Check Gemini integration status",
            inputSchema={"type": "object", "properties": {}}
        ),
        types.Tool(
            name="toggle_gemini_auto_consult",
            description="Enable/disable automatic consultation",
            inputSchema={
                "type": "object", 
                "properties": {
                    "enable": {"type": "boolean", "description": "Enable or disable"}
                }
            }
        )
    ]

🚨 Troubleshooting

| Issue | Solution |
| --- | --- |
| Gemini CLI not found | Install Node.js 18+ and npm install -g @google/gemini-cli |
| Authentication errors | Run gemini and sign in with your Google account |
| Node version issues | Use nvm use 22.16.0 |
| Timeout errors | Increase GEMINI_TIMEOUT (default: 60s) |
| Auto-consult not working | Check GEMINI_AUTO_CONSULT=true |
| Rate limiting | Adjust GEMINI_RATE_LIMIT (default: 2s) |

πŸ” Security Considerations

  1. API Credentials: Store securely, use environment variables
  2. Data Privacy: Be cautious about sending proprietary code
  3. Input Sanitization: Sanitize queries before sending
  4. Rate Limiting: Respect API limits (free tier: 60/min, 1000/day)
  5. Host-Based Architecture: Both Gemini CLI and MCP server run on host for auth compatibility and simplicity
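For point 3, a hedged sketch of pre-send cleanup (the `sanitize_query` helper and its heuristics are illustrative; the gist itself passes queries through unmodified):

```python
import re

MAX_QUERY_LENGTH = 4000  # illustrative cap, mirroring max_context_length

def sanitize_query(query: str) -> str:
    """Illustrative cleanup for consultation queries before sending.

    Strips control and non-ASCII characters, redacts obvious
    credential-looking assignments, and truncates overly long input.
    Proprietary code should be reviewed or redacted before this point.
    """
    # Remove non-printable/control characters (keep \n and \t)
    cleaned = re.sub(r"[^\x20-\x7E\n\t]", "", query)
    # Redact obvious credential assignments (very rough heuristic)
    cleaned = re.sub(
        r"(?i)\b(api[_-]?key|password|secret|token)\s*[=:]\s*\S+",
        r"\1=[REDACTED]",
        cleaned,
    )
    # Collapse runs of blank lines and truncate
    cleaned = re.sub(r"\n{3,}", "\n\n", cleaned).strip()
    return cleaned[:MAX_QUERY_LENGTH]
```

A regex filter like this is only a backstop; it cannot catch every secret, so keeping sensitive context out of queries in the first place remains the primary defense.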

📈 Best Practices

  1. Rate Limiting: Implement appropriate delays between calls
  2. Context Management: Keep context concise and relevant
  3. Error Handling: Always handle Gemini failures gracefully
  4. User Control: Allow users to disable auto-consultation
  5. Logging: Log consultations for debugging and analysis
  6. Caching: Cache similar queries to reduce API calls
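For point 6, a hedged sketch of query caching (the `ConsultationCache` class is illustrative and not part of the gist):

```python
import hashlib
import time

class ConsultationCache:
    """Illustrative TTL cache keyed on a hash of query + context."""

    def __init__(self, ttl_seconds: float = 600.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, result)

    @staticmethod
    def _key(query: str, context: str) -> str:
        # NUL separator avoids collisions between (query, context) splits
        return hashlib.sha256(f"{query}\x00{context}".encode()).hexdigest()

    def get(self, query: str, context: str = ""):
        entry = self._store.get(self._key(query, context))
        if entry is None:
            return None
        timestamp, result = entry
        if time.time() - timestamp > self.ttl:
            return None  # expired
        return result

    def put(self, query: str, context: str, result) -> None:
        self._store[self._key(query, context)] = (time.time(), result)
```

Wrapping consult_gemini with a `cache.get` check, and calling `cache.put` after a successful consultation, would cut repeat API calls within the TTL window.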

🎯 Use Cases

  • Architecture Decisions: Get second opinions on design choices
  • Security Reviews: Validate security implementations
  • Performance Optimization: Compare optimization strategies
  • Code Quality: Review complex algorithms or patterns
  • Troubleshooting: Debug complex technical issues
gemini-config.json

{
  "enabled": true,
  "auto_consult": true,
  "cli_command": "gemini",
  "timeout": 30,
  "rate_limit_delay": 5.0,
  "log_consultations": true,
  "model": "gemini-2.5-pro",
  "sandbox_mode": true,
  "debug_mode": false,
  "uncertainty_thresholds": {
    "uncertainty_patterns": true,
    "complex_decisions": true,
    "critical_operations": true
  }
}
gemini_integration.py

#!/usr/bin/env python3
"""
Gemini CLI Integration Module
Provides automatic consultation with Gemini for second opinions and validation
"""
import asyncio
import json
import logging
import re
import subprocess
import time
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple

# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Uncertainty patterns that trigger automatic Gemini consultation
UNCERTAINTY_PATTERNS = [
    r"\bI'm not sure\b", r"\bI think\b", r"\bpossibly\b", r"\bprobably\b",
    r"\bmight be\b", r"\bcould be\b", r"\bI believe\b", r"\bIt seems\b",
    r"\bappears to be\b", r"\buncertain\b", r"\bI would guess\b",
    r"\blikely\b", r"\bperhaps\b", r"\bmaybe\b", r"\bI assume\b"
]

# Complex decision patterns that benefit from second opinions
COMPLEX_DECISION_PATTERNS = [
    r"\bmultiple approaches\b", r"\bseveral options\b", r"\btrade-offs?\b",
    r"\bconsider(?:ing)?\b", r"\balternatives?\b", r"\bpros and cons\b",
    r"\bweigh(?:ing)? the options\b", r"\bchoice between\b", r"\bdecision\b"
]

# Critical operations that should trigger consultation
CRITICAL_OPERATION_PATTERNS = [
    r"\bproduction\b", r"\bdatabase migration\b", r"\bsecurity\b",
    r"\bauthentication\b", r"\bencryption\b", r"\bAPI key\b",
    r"\bcredentials?\b", r"\bperformance\s+critical\b"
]


class GeminiIntegration:
    """Handles Gemini CLI integration for second opinions and validation"""

    def __init__(self, config: Optional[Dict[str, Any]] = None):
        self.config = config or {}
        self.enabled = self.config.get('enabled', True)
        self.auto_consult = self.config.get('auto_consult', True)
        self.cli_command = self.config.get('cli_command', 'gemini')
        self.timeout = self.config.get('timeout', 60)
        self.rate_limit_delay = self.config.get('rate_limit_delay', 2.0)
        self.last_consultation = 0
        self.consultation_log = []
        self.max_context_length = self.config.get('max_context_length', 4000)
        self.model = self.config.get('model', 'gemini-2.5-flash')

    async def consult_gemini(self, query: str, context: str = "",
                             comparison_mode: bool = True,
                             force_consult: bool = False) -> Dict[str, Any]:
        """Consult Gemini CLI for second opinion"""
        if not self.enabled:
            return {'status': 'disabled', 'message': 'Gemini integration is disabled'}
        if not force_consult:
            await self._enforce_rate_limit()
        consultation_id = f"consult_{int(time.time())}"
        try:
            # Prepare query with context
            full_query = self._prepare_query(query, context, comparison_mode)
            # Execute Gemini CLI command
            result = await self._execute_gemini_cli(full_query)
            # Log consultation
            if self.config.get('log_consultations', True):
                self.consultation_log.append({
                    'id': consultation_id,
                    'timestamp': datetime.now().isoformat(),
                    'query': query[:200] + "..." if len(query) > 200 else query,
                    'status': 'success',
                    'execution_time': result.get('execution_time', 0)
                })
            return {
                'status': 'success',
                'response': result['output'],
                'execution_time': result['execution_time'],
                'consultation_id': consultation_id,
                'timestamp': datetime.now().isoformat()
            }
        except Exception as e:
            logger.error(f"Error consulting Gemini: {str(e)}")
            return {
                'status': 'error',
                'error': str(e),
                'consultation_id': consultation_id
            }

    def detect_uncertainty(self, text: str) -> Tuple[bool, List[str]]:
        """Detect if text contains uncertainty patterns"""
        found_patterns = []
        # Check uncertainty patterns
        for pattern in UNCERTAINTY_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                found_patterns.append(f"uncertainty: {pattern}")
        # Check complex decision patterns
        for pattern in COMPLEX_DECISION_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                found_patterns.append(f"complex_decision: {pattern}")
        # Check critical operation patterns
        for pattern in CRITICAL_OPERATION_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                found_patterns.append(f"critical_operation: {pattern}")
        return len(found_patterns) > 0, found_patterns

    async def _enforce_rate_limit(self):
        """Enforce rate limiting between consultations"""
        current_time = time.time()
        time_since_last = current_time - self.last_consultation
        if time_since_last < self.rate_limit_delay:
            sleep_time = self.rate_limit_delay - time_since_last
            await asyncio.sleep(sleep_time)
        self.last_consultation = time.time()

    def _prepare_query(self, query: str, context: str, comparison_mode: bool) -> str:
        """Prepare the full query for Gemini CLI"""
        if len(context) > self.max_context_length:
            context = context[:self.max_context_length] + "\n[Context truncated...]"
        parts = []
        if comparison_mode:
            parts.append("Please provide a technical analysis and second opinion:")
            parts.append("")
        if context:
            parts.append("Context:")
            parts.append(context)
            parts.append("")
        parts.append("Question/Topic:")
        parts.append(query)
        if comparison_mode:
            parts.extend([
                "",
                "Please structure your response with:",
                "1. Your analysis and understanding",
                "2. Recommendations or approach",
                "3. Any concerns or considerations",
                "4. Alternative approaches (if applicable)"
            ])
        return "\n".join(parts)

    async def _execute_gemini_cli(self, query: str) -> Dict[str, Any]:
        """Execute Gemini CLI command and return results"""
        start_time = time.time()
        # Build command
        cmd = [self.cli_command]
        if self.model:
            cmd.extend(['-m', self.model])
        cmd.extend(['-p', query])  # Non-interactive mode
        try:
            process = await asyncio.create_subprocess_exec(
                *cmd,
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE
            )
            stdout, stderr = await asyncio.wait_for(
                process.communicate(),
                timeout=self.timeout
            )
            execution_time = time.time() - start_time
            if process.returncode != 0:
                error_msg = stderr.decode() if stderr else "Unknown error"
                if "authentication" in error_msg.lower():
                    error_msg += "\nTip: Run 'gemini' interactively to authenticate"
                raise Exception(f"Gemini CLI failed: {error_msg}")
            return {
                'output': stdout.decode().strip(),
                'execution_time': execution_time
            }
        except asyncio.TimeoutError:
            raise Exception(f"Gemini CLI timed out after {self.timeout} seconds")


# Singleton pattern implementation
_integration = None


def get_integration(config: Optional[Dict[str, Any]] = None) -> GeminiIntegration:
    """
    Get or create the global Gemini integration instance.

    This ensures that all parts of the application share the same instance,
    maintaining consistent state for rate limiting, consultation history,
    and configuration across all tool calls.

    Args:
        config: Optional configuration dict. Only used on first call.

    Returns:
        The singleton GeminiIntegration instance
    """
    global _integration
    if _integration is None:
        _integration = GeminiIntegration(config)
    return _integration
mcp-server.py

#!/usr/bin/env python3
"""
MCP Server with Gemini Integration
Provides development workflow automation with AI second opinions
"""
import asyncio
import json
import os
import sys
from pathlib import Path
from typing import Any, Dict, List

import mcp.server.stdio
import mcp.types as types
from mcp.server import Server

# Import Gemini integration
from gemini_integration import get_integration


class MCPServer:
    def __init__(self, project_root: str = None):
        self.project_root = Path(project_root) if project_root else Path.cwd()
        self.server = Server("mcp-server")
        # Initialize Gemini integration with singleton pattern
        self.gemini_config = self._load_gemini_config()
        # Get the singleton instance, passing config on first call
        self.gemini = get_integration(self.gemini_config)
        self._setup_tools()

    def _load_gemini_config(self) -> Dict[str, Any]:
        """Load Gemini configuration from file and environment"""
        config = {}
        # Load from config file if exists
        config_file = self.project_root / "gemini-config.json"
        if config_file.exists():
            with open(config_file) as f:
                config = json.load(f)
        # Override with environment variables
        env_mapping = {
            'GEMINI_ENABLED': ('enabled', lambda x: x.lower() == 'true'),
            'GEMINI_AUTO_CONSULT': ('auto_consult', lambda x: x.lower() == 'true'),
            'GEMINI_CLI_COMMAND': ('cli_command', str),
            'GEMINI_TIMEOUT': ('timeout', int),
            'GEMINI_RATE_LIMIT': ('rate_limit_delay', float),
            'GEMINI_MODEL': ('model', str),
        }
        for env_key, (config_key, converter) in env_mapping.items():
            value = os.getenv(env_key)
            if value is not None:
                config[config_key] = converter(value)
        return config

    def _setup_tools(self):
        """Register all MCP tools"""

        @self.server.list_tools()
        async def handle_list_tools():
            return [
                types.Tool(
                    name="consult_gemini",
                    description="Consult Gemini for a second opinion or validation",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question or topic to consult Gemini about"
                            },
                            "context": {
                                "type": "string",
                                "description": "Additional context for the consultation"
                            },
                            "comparison_mode": {
                                "type": "boolean",
                                "description": "Whether to request structured comparison format",
                                "default": True
                            }
                        },
                        "required": ["query"]
                    }
                ),
                types.Tool(
                    name="gemini_status",
                    description="Check Gemini integration status and statistics",
                    inputSchema={"type": "object", "properties": {}}
                ),
                types.Tool(
                    name="toggle_gemini_auto_consult",
                    description="Enable or disable automatic Gemini consultation",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "enable": {
                                "type": "boolean",
                                "description": "Enable (true) or disable (false) auto-consultation"
                            }
                        }
                    }
                )
            ]

        @self.server.call_tool()
        async def handle_call_tool(name: str, arguments: Dict[str, Any]):
            if name == "consult_gemini":
                return await self._handle_consult_gemini(arguments)
            elif name == "gemini_status":
                return await self._handle_gemini_status(arguments)
            elif name == "toggle_gemini_auto_consult":
                return await self._handle_toggle_auto_consult(arguments)
            else:
                raise ValueError(f"Unknown tool: {name}")

    async def _handle_consult_gemini(self, arguments: Dict[str, Any]) -> List[types.TextContent]:
        """Handle Gemini consultation requests"""
        query = arguments.get('query', '')
        context = arguments.get('context', '')
        comparison_mode = arguments.get('comparison_mode', True)
        if not query:
            return [types.TextContent(
                type="text",
                text="❌ Error: 'query' parameter is required for Gemini consultation"
            )]
        result = await self.gemini.consult_gemini(
            query=query,
            context=context,
            comparison_mode=comparison_mode
        )
        if result['status'] == 'success':
            response_text = f"🤖 **Gemini Second Opinion**\n\n{result['response']}\n\n"
            response_text += f"⏱️ *Consultation completed in {result['execution_time']:.2f}s*"
        else:
            response_text = f"❌ **Gemini Consultation Failed**\n\nError: {result.get('error', 'Unknown error')}"
        return [types.TextContent(type="text", text=response_text)]

    async def _handle_gemini_status(self, arguments: Dict[str, Any]) -> List[types.TextContent]:
        """Handle Gemini status requests"""
        status_lines = [
            "🤖 **Gemini Integration Status**",
            "",
            f"• **Enabled**: {'✅ Yes' if self.gemini.enabled else '❌ No'}",
            f"• **Auto-consult**: {'✅ Yes' if self.gemini.auto_consult else '❌ No'}",
            f"• **CLI Command**: `{self.gemini.cli_command}`",
            f"• **Model**: {self.gemini.model}",
            f"• **Rate Limit**: {self.gemini.rate_limit_delay}s between calls",
            f"• **Timeout**: {self.gemini.timeout}s",
            "",
            "📊 **Statistics**:",
            f"• **Total Consultations**: {len(self.gemini.consultation_log)}",
        ]
        if self.gemini.consultation_log:
            recent = self.gemini.consultation_log[-1]
            status_lines.append(f"• **Last Consultation**: {recent['timestamp']}")
        return [types.TextContent(type="text", text="\n".join(status_lines))]

    async def _handle_toggle_auto_consult(self, arguments: Dict[str, Any]) -> List[types.TextContent]:
        """Handle toggle auto-consultation requests"""
        enable = arguments.get('enable')
        if enable is None:
            # Toggle current state
            self.gemini.auto_consult = not self.gemini.auto_consult
        else:
            self.gemini.auto_consult = enable
        status = "enabled" if self.gemini.auto_consult else "disabled"
        return [types.TextContent(
            type="text",
            text=f"🔄 Auto-consultation has been **{status}**"
        )]

    async def run(self):
        """Run the MCP server"""
        async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
            await self.server.run(
                read_stream,
                write_stream,
                self.server.create_initialization_options()
            )


async def main():
    import argparse
    parser = argparse.ArgumentParser(description="MCP Server with Gemini Integration")
    parser.add_argument("--project-root", type=str, default=".",
                        help="Project root directory")
    args = parser.parse_args()
    server = MCPServer(project_root=args.project_root)
    await server.run()


if __name__ == "__main__":
    asyncio.run(main())
setup-gemini-integration.sh

#!/bin/bash
set -e

echo "🚀 Setting up Gemini CLI Integration..."

# Check Node.js version
if ! command -v node &> /dev/null; then
    echo "❌ Node.js not found. Please install Node.js 18+ first."
    exit 1
fi

NODE_VERSION=$(node --version | cut -d'v' -f2 | cut -d'.' -f1)
if [ "$NODE_VERSION" -lt 18 ]; then
    echo "❌ Node.js version $NODE_VERSION found. Please use Node.js 18+ (recommended: 22.16.0)"
    echo "   Use: nvm install 22.16.0 && nvm use 22.16.0"
    exit 1
fi
echo "✅ Node.js version check passed"

# Install Gemini CLI
echo "📦 Installing Gemini CLI..."
npm install -g @google/gemini-cli

# Test installation
echo "🧪 Testing Gemini CLI installation..."
if gemini --help > /dev/null 2>&1; then
    echo "✅ Gemini CLI installed successfully"
else
    echo "❌ Gemini CLI installation failed"
    exit 1
fi

# Files can be placed in the same directory - no complex structure needed
echo "📁 Setting up in current directory..."

# Create default configuration
echo "⚙️ Creating default configuration..."
cat > gemini-config.json << 'EOF'
{
  "enabled": true,
  "auto_consult": true,
  "cli_command": "gemini",
  "timeout": 60,
  "rate_limit_delay": 2.0,
  "max_context_length": 4000,
  "log_consultations": true,
  "model": "gemini-2.5-flash",
  "sandbox_mode": false,
  "debug_mode": false
}
EOF

# Create MCP configuration for Claude Code
echo "🔧 Creating Claude Code MCP configuration..."
cat > mcp-config.json << 'EOF'
{
  "mcpServers": {
    "project": {
      "command": "python3",
      "args": ["mcp-server.py", "--project-root", "."],
      "env": {
        "GEMINI_ENABLED": "true",
        "GEMINI_AUTO_CONSULT": "true"
      }
    }
  }
}
EOF

echo ""
echo "🎉 Gemini CLI Integration setup complete!"
echo ""
echo "📋 Next steps:"
echo "1. Copy the provided code files to your project:"
echo "   - gemini_integration.py"
echo "   - mcp-server.py"
echo "2. Install Python dependencies: pip install mcp pydantic"
echo "3. Test with: python3 mcp-server.py --project-root ."
echo "4. Configure Claude Code to use the MCP server"
echo ""
echo "💡 Tip: First run 'gemini' command to authenticate with your Google account"