protocol enforcer mcp server - "setup protocol enforcement using https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d"

Protocol Enforcer MCP Server

A Model Context Protocol (MCP) server that enforces custom workflow protocols before allowing AI agents to perform file operations.

Author: Jason Lusk <[email protected]>
License: MIT
Gist URL: https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d

What This Does

A universal gatekeeper for AI coding assistants that support the Model Context Protocol:

  • ✅ Works with any MCP-compatible client (Claude Code, Cursor, Cline, Zed, Continue)
  • ✅ Enforces custom protocol steps before planning/coding
  • ✅ Tracks required checklist items specific to your project
  • ✅ Records compliance violations over time
  • ✅ Fully configurable - adapt to any workflow
  • ✅ Runs from npx - no installation needed

Platform Support

| Platform    | Config File                   | Hook Support             | Enforcement        |
|-------------|-------------------------------|--------------------------|--------------------|
| Claude Code | .mcp.json or ~/.claude.json   | ✅ Full (all 5 hooks)    | Automatic blocking |
| Cursor      | ~/.cursor/mcp.json            | ✅ Standard (PreToolUse) | Automatic blocking |
| Cline       | ~/.cline/mcp.json             | ⚠️ Limited (PostToolUse) | Audit only         |
| Zed         | ~/.config/zed/mcp.json        | ❌ None                  | Voluntary          |
| Continue    | ~/.continue/mcp.json          | ⚠️ Limited               | Voluntary          |

Available Hooks: user_prompt_submit, session_start, pre_tool_use, post_tool_use, stop


Quick Installation (Users)

1. Add MCP Server

Add to your platform's MCP config file (paths above):

{
  "mcpServers": {
    "protocol-enforcer": {
      "command": "npx",
      "args": ["-y", "https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d"]
    }
  }
}

Claude Code only: If using .claude/settings.local.json with enabledMcpjsonServers, add "protocol-enforcer".
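
For reference, a minimal sketch of that allowlist entry (assuming the rest of your .claude/settings.local.json stays unchanged):

{
  "enabledMcpjsonServers": ["protocol-enforcer"]
}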

2. Create Configuration

Create .protocol-enforcer.json in project root (or ~/.protocol-enforcer.json for user-level):

{
  "enforced_rules": {
    "require_protocol_steps": [
      {
        "name": "planning",
        "hook": "pre_tool_use",
        "applies_to": ["Write", "Edit"]
      }
    ],
    "require_checklist_confirmation": true,
    "minimum_checklist_items": 2
  },
  "checklist_items": [
    {
      "text": "Requirements gathered",
      "hook": "pre_tool_use"
    },
    {
      "text": "Existing patterns analyzed",
      "hook": "pre_tool_use"
    },
    {
      "text": "Linting passed",
      "hook": "post_tool_use"
    }
  ]
}

See: Example Configurations for minimal, development, and behavioral workflows.

3. Reload IDE

  • Claude Code/Cursor/VSCode: Cmd+Shift+P → "Developer: Reload Window"
  • Zed: Restart Zed

4. Verify

await mcp__protocol_enforcer__get_protocol_config();

Installation for AI Agents

MANDATORY INSTALLATION PROTOCOL - Complete each step and show evidence before proceeding.

STEP 1: Platform Detection (SHOW OUTPUT)

Execute this command and paste the complete output:

macOS/Linux:

ps aux | grep -E "cursor|vscode|code|claude|cline|zed" | grep -v grep | head -5

Windows:

Get-Process | Where-Object {$_.ProcessName -match "cursor|code|claude|cline|zed"}

Then state explicitly:

  • Detected IDE: [name from process]
  • Evidence: [process name/path from output]

STEP 2: Verify Platform Capabilities (LOOKUP & STATE)

Look up your detected platform in the Platform Support table above.

State explicitly:

  • Platform: [name]
  • Supported hooks: [list ALL hooks this platform supports]
  • Blocking capability: [Automatic blocking / Audit only / Voluntary]
  • Config file location: [path from table]

STEP 3: Read Project Requirements (READ & EXTRACT)

Read ALL project rule files (.cursor/rules/**/*, .cursorrules, .github/**/*, docs/**/*, etc.)

State explicitly:

  • Files analyzed: [count and key files]
  • Protocol steps found: [list with hook assignments]
  • Critical checklist items: [list items marked critical/mandatory]
  • Tool requirements: [any required MCP tool calls]

STEP 4: Gap Analysis (COMPARE & IDENTIFY)

Compare Step 2 (platform capabilities) vs Step 3 (project requirements).

State explicitly:

  • ✅ Hooks that CAN be automatically enforced: [list]
  • ⚠️ Hooks that CANNOT be automatically enforced: [list]
  • 🚨 Critical limitations: [describe which critical requirements are unenforceable on this platform]

STEP 5: Propose Configuration (PRESENT, DON'T IMPLEMENT)

Show the complete configuration you will create:

  • MCP server entry for config file
  • .protocol-enforcer.json structure
  • Hook scripts to create (if supported)
  • Platform-specific settings

Explicitly note:

  • This configuration enforces: [X hooks automatically]
  • This configuration cannot enforce: [Y hooks - requires voluntary AI compliance]
  • Critical gap: [Any critical requirements that can't be enforced]

STOP HERE. DO NOT CREATE ANY FILES YET.

STEP 6: Wait for User Approval

Present: "This configuration has the following limitations: [list]. Proceed with implementation?"

Only after explicit user approval: implement the proposed configuration.

Detailed guide: See Appendix A: AI Agent Installation Guide


Configuration Reference

Required Format

All protocol steps and checklist items must be objects with a hook property (string format not supported).

Protocol Step Object:

{
  "name": "step_name",
  "hook": "pre_tool_use",
  "applies_to": ["Write", "Edit"]  // Optional: tool-specific filtering
}

Checklist Item Object:

{
  "text": "Item description",
  "hook": "pre_tool_use",
  "applies_to": ["Write", "Edit"]  // Optional: tool-specific filtering
}

Available Hooks

| Hook               | When                           | Use Case                                      |
|--------------------|--------------------------------|-----------------------------------------------|
| user_prompt_submit | Before processing user message | Pre-response checks, sequential thinking      |
| session_start      | At session initialization      | Display requirements, initialize tracking     |
| pre_tool_use       | Before tool execution          | Primary enforcement point for file operations |
| post_tool_use      | After tool execution           | Validation, linting, audit logging            |
| stop               | Before session termination     | Compliance reporting, cleanup                 |

Example Configurations

Minimal (3 items):

{
  "enforced_rules": {
    "require_protocol_steps": [
      { "name": "sequential_thinking", "hook": "user_prompt_submit" },
      { "name": "planning", "hook": "pre_tool_use", "applies_to": ["Write", "Edit"] }
    ],
    "require_checklist_confirmation": true,
    "minimum_checklist_items": 2
  },
  "checklist_items": [
    { "text": "Sequential thinking completed FIRST", "hook": "user_prompt_submit" },
    { "text": "Plan created and confirmed", "hook": "pre_tool_use" },
    { "text": "Completion verified", "hook": "post_tool_use" }
  ]
}

See also:

  • config.minimal.json - Basic workflow (6 items)
  • config.development.json - Full development workflow (17 items)
  • config.behavioral.json - LLM behavioral corrections (12 items)

MCP Tools

1. verify_protocol_compliance

Verify protocol steps completed for a specific hook.

Parameters:

| Parameter                 | Type     | Required | Description                                                                            |
|---------------------------|----------|----------|----------------------------------------------------------------------------------------|
| hook                      | string   | ✅ Yes   | Lifecycle point: user_prompt_submit, session_start, pre_tool_use, post_tool_use, stop  |
| tool_name                 | string   | No       | Tool being called (Write, Edit) for tool-specific filtering                            |
| protocol_steps_completed  | string[] | ✅ Yes   | Completed step names from config                                                       |
| checklist_items_checked   | string[] | ✅ Yes   | Verified checklist items from config                                                   |

Example:

const verification = await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["planning", "analysis"],
  checklist_items_checked: ["Plan confirmed", "Patterns analyzed"]
});

// Returns: { compliant: true, operation_token: "abc123...", token_expires_in_seconds: 60 }
// Or: { compliant: false, violations: [...] }

2. authorize_file_operation

MANDATORY before Write/Edit (when using PreToolUse hooks).

Parameters:

| Parameter       | Type   | Required | Description                           |
|-----------------|--------|----------|---------------------------------------|
| operation_token | string | ✅ Yes   | Token from verify_protocol_compliance |

Token rules: Single-use, 60-second expiration, writes ~/.protocol-enforcer-token for hook verification.
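
Example (a sketch; the token comes from the verify_protocol_compliance call shown above):

const auth = await mcp__protocol_enforcer__authorize_file_operation({
  operation_token: verification.operation_token
});

// Returns: { authorized: true, message: "..." }
// Or: { authorized: false, error: "Invalid or expired operation token..." }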

3. get_protocol_config

Get current configuration.

Returns: { config_path: "...", config: {...} }

4. get_compliance_status

Get compliance statistics and recent violations.

Returns: { total_checks: N, passed: N, failed: N, recent_violations: [...] }

5. initialize_protocol_config

Create new config file.

Parameters: scope: "project" | "user"
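
Example calls for tools 3-5 (a sketch; return shapes are abbreviated from the descriptions above):

// Inspect the active configuration and where it was loaded from
const { config_path, config } = await mcp__protocol_enforcer__get_protocol_config();

// Review compliance statistics and any recent violations
const status = await mcp__protocol_enforcer__get_compliance_status();

// Create a starter .protocol-enforcer.json in the current project
await mcp__protocol_enforcer__initialize_protocol_config({ scope: "project" });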


Usage Workflow

Complete Example (All Hooks)

// 1. At user message (user_prompt_submit hook)
await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "user_prompt_submit",
  protocol_steps_completed: ["sequential_thinking"],
  checklist_items_checked: ["Sequential thinking completed FIRST"]
});

// 2. Before file operations (pre_tool_use hook)
const verification = await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["planning", "analysis"],
  checklist_items_checked: ["Plan confirmed", "Patterns analyzed"]
});

// 3. Authorize file operation
if (verification.compliant) {
  await mcp__protocol_enforcer__authorize_file_operation({
    operation_token: verification.operation_token
  });

  // Now Write/Edit operations allowed
}

// 4. After file operations (post_tool_use hook)
await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "post_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["execution"],
  checklist_items_checked: ["Linting passed", "Types checked"]
});

Hook Filtering

  • Only rules whose hook value matches are checked
  • If applies_to is specified, the tool name must also match
  • Enables context-specific enforcement at different lifecycle points (see the sketch below)
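
A sketch using the Quick Installation config above, where the planning step applies only to Write and Edit (tool names are illustrative):

// Write is listed in applies_to, so the "planning" step is required
await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["planning"],
  checklist_items_checked: ["Requirements gathered", "Existing patterns analyzed"]
});

// A tool not named in applies_to (e.g. Read) skips the Write/Edit-only step;
// pre_tool_use checklist items without applies_to still apply
await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Read",
  protocol_steps_completed: [],
  checklist_items_checked: ["Requirements gathered", "Existing patterns analyzed"]
});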

Hook-Based Enforcement

For automatic blocking of unauthorized file operations (Claude Code, Cursor only).

Installation

  1. Create hooks directory:

mkdir -p .cursor/hooks

  2. Create hook scripts from templates (see Appendix C)

  3. Make executable:

chmod +x .cursor/hooks/*.js

  4. Configure platform:

Claude Code - Add to .claude/settings.local.json:

{
  "hooks": {
    "PreToolUse": [{
      "matcher": "Edit|Write|NotebookEdit",
      "hooks": [{ "type": "command", "command": "/absolute/path/.cursor/hooks/pre-tool-use.js" }]
    }],
    "PostToolUse": [{
      "matcher": "Edit|Write|NotebookEdit",
      "hooks": [{ "type": "command", "command": "/absolute/path/.cursor/hooks/post-tool-use.js" }]
    }]
  }
}

Cursor - Add to ~/.cursor/settings.json:

{
  "claude.hooks": {
    "PreToolUse": [{
      "matcher": "Edit|Write|NotebookEdit",
      "command": "/absolute/path/.cursor/hooks/pre-tool-use.js"
    }]
  }
}

Replace /absolute/path/ with your actual project path.

Token Lifecycle

1. AI calls verify_protocol_compliance → receives operation_token (60s expiration)
2. AI calls authorize_file_operation(token) → writes ~/.protocol-enforcer-token
3. AI attempts Write/Edit → PreToolUse hook intercepts
   - Token found → consume (delete), allow operation
   - Token missing → block operation (exit 2)
4. Next Write/Edit → Token missing → blocked

Result: One verification per file operation.
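
Putting it together, a sketch of one full cycle (step names, checklist items, and file paths are illustrative):

// First change: verify, authorize, then write
const v = await mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["planning"],
  checklist_items_checked: ["Plan created and confirmed"]
});
await mcp__protocol_enforcer__authorize_file_operation({ operation_token: v.operation_token });
// Write src/example.ts → PreToolUse hook finds ~/.protocol-enforcer-token, consumes it, allows the edit

// Second change without re-verifying → no token file → hook blocks with exit 2
// Re-run verify_protocol_compliance + authorize_file_operation before each Write/Edit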

Integration with Supervisor Protocols

Add to your project's supervisor rules:

  • Claude Code: .cursor/rules/protocol-enforcer.mdc
  • Cursor: .cursorrules
  • Cline: .clinerules
  • Continue: .continuerules

## Protocol Enforcer Integration (MANDATORY)

Before ANY file write/edit operation:
1. Complete required protocol steps from `.protocol-enforcer.json`
2. Call `mcp__protocol_enforcer__verify_protocol_compliance` with:
   - `hook`: lifecycle point (e.g., "pre_tool_use")
   - `protocol_steps_completed`: completed step names
   - `checklist_items_checked`: verified items
3. If `compliant: false`, fix violations and retry
4. Call `mcp__protocol_enforcer__authorize_file_operation` with token
5. Only proceed if `authorized: true`

**No exceptions allowed.**

See: Appendix B: Complete Supervisor Examples for platform-specific examples.


Troubleshooting

| Issue                     | Solution                                                                                     |
|---------------------------|----------------------------------------------------------------------------------------------|
| Server not appearing      | Check config file syntax, gist URL, file location, reload IDE                                |
| Configuration not loading | Verify .protocol-enforcer.json filename, check JSON syntax                                   |
| Tools not working         | Test with get_protocol_config, check tool names (must use full mcp__protocol-enforcer__*)    |
| Hook not blocking         | Verify platform support, check hook executable (chmod +x), verify absolute path, reload IDE  |
| Token errors              | Check ~/.protocol-enforcer-token exists after authorize_file_operation                       |

Claude Code only: Add "protocol-enforcer" to enabledMcpjsonServers if using allowlist.


Why This Exists

AI assistants tend to bypass project protocols when under pressure or when context limits are hit. This server:

  • Enforces consistency - same rules for every task, all platforms
  • Provides traceability - tracks protocol adherence
  • Reduces technical debt - prevents shortcuts violating standards
  • Works with ANY workflow - not tied to specific tools
  • Runs from npx - zero installation/maintenance

Appendices

Appendix A: AI Agent Installation Guide

Detailed analysis process for AI agents installing this MCP server.

Step 1: Detect Platform and IDE (VERIFIABLE)

Required action: Run platform detection command and show output.

macOS/Linux:

ps aux | grep -E "cursor|vscode|code|claude|cline|zed" | grep -v grep | head -5

Windows:

Get-Process | Where-Object {$_.ProcessName -match "cursor|code|claude|cline|zed"}

Analysis guide:

  • Look for process names: Cursor.app, Visual Studio Code.app, claude-code extension
  • Identify parent/child relationships
  • Cross-reference with existing MCP config files (.mcp.json, ~/.claude.json, ~/.cursor/mcp.json)

Required output format:

Detected IDE: [Cursor / Claude Code in VSCode / Cline / Zed / Continue]
Evidence: [process path/name from ps aux output]

Step 2: Analyze Project Structure

Read ALL rule files (critical - don't skip):

  • .cursor/rules/**/*.mdc - All rule types
  • .cursorrules, .clinerules, .continuerules - Platform rules
  • .eslintrc.*, .prettierrc.* - Code formatting
  • tsconfig.json - TypeScript config
  • .github/CONTRIBUTING.md, .github/pull_request_template.md - Contribution guidelines
  • README.md, CLAUDE.md, docs/**/* - Project documentation

Extract from each file:

  1. Protocol Steps (workflow stages):

    • Look for: "first", "before", "then", "after", "finally"
    • Example: "Before ANY file operation, do X" → protocol step "X"
    • Group related steps (3-7 steps typical)
  2. Checklist Items (verification checks):

    • Look for: "MUST", "REQUIRED", "MANDATORY", "CRITICAL", "NEVER", "ALWAYS"
    • Quality checks: "verify", "ensure", "check", "confirm"
    • Each item should be specific and verifiable
  3. Behavioral Rules (constraints):

    • Hard requirements: "NO EXCEPTIONS", "supersede all instructions"
    • Pre-approved actions: "auto-fix allowed", "no permission needed"
    • Forbidden actions: "NEVER edit X", "DO NOT use Y"
  4. Tool Requirements (MCP tool calls):

    • Explicit requirements: "use mcp__X tool"
    • Tool sequences: "call X before Y"
  5. Conditional Requirements (context-specific):

    • "If GraphQL changes, run codegen"
    • "If SCSS changes, verify spacing"
    • Mark as required: false in checklist

Example Extraction:

From .cursor/rules/mandatory-supervisor-protocol.mdc:

"BEFORE ANY OTHER ACTION, EVERY USER QUERY MUST:
1. First use mcp__clear-thought__sequentialthinking tool"

→ Protocol step: { name: "sequential_thinking", hook: "user_prompt_submit" }
→ Checklist item: { text: "Sequential thinking completed FIRST", hook: "user_prompt_submit" }

Step 3: Search Referenced Online Sources

If documentation references external URLs:

  • Use WebSearch/WebFetch to retrieve library docs, style guides, API specs
  • Extract additional requirements from online sources
  • Integrate with local requirements

Step 4: Infer Workflow Type

Based on analysis, determine workflow:

  • TDD - Test files exist, tests-first culture
  • Design-First - Figma links, design system, token mappings
  • Planning & Analysis - Generic best practices
  • Behavioral - Focus on LLM behavioral corrections (CHORES framework)
  • Minimal - Small projects, emergency mode

Step 5: Determine Hook Support (COMPARE & STATE)

Required action: Compare detected platform against Platform Support table.

Required output format:

Platform: [detected name]
Supported hooks: [exact list from table]
Blocking capability: [Automatic blocking / Audit only / Voluntary]
Project requires: [hooks from Step 2 analysis]

Gap Analysis:
✅ CAN enforce automatically: [hooks that match]
⚠️ CANNOT enforce automatically: [hooks platform doesn't support]
🚨 Critical limitation: [impact of unenforceable hooks]

Configuration strategy:

| Platform     | Recommended Hooks | Strategy                                                 |
|--------------|-------------------|----------------------------------------------------------|
| Claude Code  | All 5 hooks       | Maximum enforcement                                      |
| Cursor       | PreToolUse only   | Standard enforcement (PostToolUse not reliably supported)|
| Cline        | PostToolUse only  | Audit logging                                            |
| Zed/Continue | None              | Voluntary compliance                                     |

Step 6: Propose Configuration (PRESENT, DON'T IMPLEMENT)

Required action: Show complete proposed configuration WITHOUT creating files.

Required format:

  1. Analysis summary: "I've analyzed [N] rule files and detected [workflow type]. Your platform ([platform]) supports [hooks]."

  2. Proposed configuration structure:

    // Show complete .protocol-enforcer.json
    // Show MCP server entry
    // Show hook scripts (if applicable)
  3. Explicit limitations:

    • "This configuration enforces: [X hooks] automatically"
    • "This configuration cannot enforce: [Y hooks] - requires voluntary AI compliance"
    • "Critical gap: [specific unenforceable requirements]"
  4. Wait for approval: "Proceed with this configuration?"

DO NOT create any files until user explicitly approves.

Step 7: Create Files

  1. Add MCP server to config file
  2. Create .protocol-enforcer.json with tailored configuration
  3. Create hook scripts if platform supports them
  4. Update supervisor protocol files with integration instructions
  5. Reload IDE

Appendix B: Complete Supervisor Examples

Example 1: Planning & Analysis (Claude Code)

File: .cursor/rules/protocol-enforcer.mdc

---
description: Planning & Analysis Protocol with PreToolUse Hooks
globs:
alwaysApply: true
---

## Protocol Enforcer Integration (MANDATORY)

### Required Steps (from .protocol-enforcer.json):
1. **sequential_thinking** - Complete before responding
2. **planning** - Plan implementation with objectives
3. **analysis** - Analyze codebase for reusable patterns

### Required Checklist:
- Sequential thinking completed FIRST
- Searched for reusable components/utilities
- Matched existing code patterns
- Plan confirmed by user

### Workflow:

**CRITICAL OVERRIDE RULE:**
BEFORE ANY ACTION, call `mcp__clear-thought__sequentialthinking` then `mcp__protocol_enforcer__verify_protocol_compliance`.
NO EXCEPTIONS.

**Process:**

1. **Sequential Thinking** (user_prompt_submit hook)
   - Use sequentialthinking tool
   - Verify: `mcp__protocol_enforcer__verify_protocol_compliance({ hook: "user_prompt_submit", ... })`

2. **Planning**
   - Define objectives, files to modify, dependencies
   - Mark `planning` complete

3. **Analysis**
   - Search codebase for similar features
   - Review `src/components/`, `src/hooks/`, `src/utils/`
   - Mark `analysis` complete

4. **Verify Compliance**
   ```typescript
   const v = await mcp__protocol_enforcer__verify_protocol_compliance({
     hook: "pre_tool_use",
     tool_name: "Write",
     protocol_steps_completed: ["planning", "analysis"],
     checklist_items_checked: [
       "Searched for reusable components/utilities",
       "Matched existing code patterns",
       "Plan confirmed by user"
     ]
   });
   ```

5. **Authorize**

    await mcp__protocol_enforcer__authorize_file_operation({
      operation_token: v.operation_token
    });

6. **Implement**
   - Only after authorization
   - Minimal changes only
   - No scope creep

Enforcement:

PreToolUse hooks block unauthorized file operations. Token required per file change (60s expiration).


Config: config.development.json


Example 2: Design-First (Cursor)

File: .cursorrules

Design-First Development Protocol

Required Steps:

  1. design_review - Review Figma specs
  2. component_mapping - Map to existing/new components

Required Checklist:

  • Design tokens mapped to SCSS variables
  • Figma specs reviewed
  • Accessibility requirements checked
  • Responsive breakpoints defined

Workflow:

1. Design Review

  • Open Figma, extract design tokens (colors, spacing, typography)
  • Note accessibility (ARIA, keyboard nav)
  • Document responsive breakpoints

2. Component Mapping

  • Search for similar components
  • Decide: reuse, extend, or create
  • Map Figma tokens to SCSS variables

3. Verify Compliance

mcp__protocol_enforcer__verify_protocol_compliance({
  hook: "pre_tool_use",
  tool_name: "Write",
  protocol_steps_completed: ["design_review", "component_mapping"],
  checklist_items_checked: [
    "Design tokens mapped to SCSS variables",
    "Figma specs reviewed",
    "Accessibility requirements checked"
  ]
})

4. Authorize & Implement

After verification, authorize then proceed with component implementation.


Config: Custom design-focused config with design_review and component_mapping steps.


Example 3: Behavioral Corrections (Any Platform)

File: .cursor/rules/behavioral-protocol.mdc

```markdown
---
description: LLM Behavioral Corrections (MODEL Framework CHORES)
alwaysApply: true
---

## Protocol Enforcer Integration (MANDATORY)

Enforces behavioral corrections from MODEL Framework CHORES analysis.

### Required Steps:
1. **analyze_behavior** - Analyze response for CHORES issues
2. **apply_chores_fixes** - Apply corrections before file operations

### Required Checklist (CHORES):
- **C**onstraint issues addressed (structure/format adherence)
- **H**allucination issues addressed (no false information)
- **O**verconfidence addressed (uncertainty when appropriate)
- **R**easoning issues addressed (logical consistency)
- **E**thical/Safety issues addressed (no harmful content)
- **S**ycophancy addressed (truthfulness over agreement)

### Workflow:

1. **Analyze Behavior** (user_prompt_submit)
   - Review response for CHORES issues
   - Verify: `mcp__protocol_enforcer__verify_protocol_compliance({ hook: "user_prompt_submit", ... })`

2. **Apply Fixes** (pre_tool_use)
   - Address identified CHORES issues
   - Verify all checklist items before file ops
   - Authorize with token

### Enforcement:
This config uses the default behavioral corrections from `index.js` DEFAULT_CONFIG.
```

Config: config.behavioral.json


Example 4: Minimal/Emergency (All Platforms)

File: .protocol-enforcer.json (minimal)

{
  "enforced_rules": {
    "require_protocol_steps": [
      { "name": "acknowledge", "hook": "pre_tool_use" }
    ],
    "require_checklist_confirmation": true,
    "minimum_checklist_items": 1
  },
  "checklist_items": [
    { "text": "I acknowledge this change", "hook": "pre_tool_use" }
  ]
}

Use: Emergency fixes, rapid prototyping only.


Platform Comparison Table

| Feature            | Claude Code | Cursor                   | Cline       | Zed/Continue |
|--------------------|-------------|--------------------------|-------------|--------------|
| Hooks Available    | All 5       | PreToolUse + PostToolUse | PostToolUse | None         |
| Automatic Blocking | ✅ Yes      | ✅ Yes                   | ❌ No       | ❌ No        |
| Recommended Steps  | 5-7 steps   | 3-5 steps                | 2-3 steps   | 1-2 steps    |
| Enforcement Level  | Maximum     | Standard                 | Audit       | Voluntary    |
| Best For           | Production  | Development              | Code review | Minimal      |

Appendix C: Hook Scripts Reference

All 5 hook scripts for creating in .cursor/hooks/ or .cline/hooks/.

1. pre-tool-use.js

Blocks unauthorized file operations without valid token.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');

  if (!fs.existsSync(tokenFile)) {
    process.stderr.write('⛔ PROTOCOL VIOLATION: Call mcp__protocol-enforcer__authorize_file_operation first\n');
    process.exit(2); // Deny
  }

  try {
    fs.unlinkSync(tokenFile); // Single-use token
    process.exit(0); // Allow
  } catch (e) {
    process.stderr.write(`⛔ Error consuming token: ${e.message}\n`);
    process.exit(2);
  }
});

2. post-tool-use.js

Logs successful operations to audit trail.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const logFile = path.join(os.homedir(), '.protocol-enforcer-audit.log');

    const logEntry = {
      timestamp: new Date().toISOString(),
      tool: hookData.toolName || 'unknown',
      session: hookData.sessionId || 'unknown',
      success: true
    };

    fs.appendFileSync(logFile, JSON.stringify(logEntry) + '\n', 'utf8');
    process.exit(0);
  } catch (e) {
    process.exit(0); // Silent fail - don't block on logging errors
  }
});

3. user-prompt-submit.js

Enforces CRITICAL OVERRIDE RULES, blocks bypass attempts.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    const hookData = JSON.parse(input);
    const userPrompt = hookData.userPrompt || '';

    // Detect bypass attempts
    const bypassPatterns = [
      /ignore.*protocol/i,
      /skip.*verification/i,
      /bypass.*enforcer/i,
      /disable.*mcp/i
    ];

    for (const pattern of bypassPatterns) {
      if (pattern.test(userPrompt)) {
        process.stderr.write('⛔ BYPASS ATTEMPT DETECTED: Protocol enforcement cannot be disabled.\n');
        process.exit(2); // Block
      }
    }

    // Inject protocol reminder for file operations
    if (/write|edit|create|modify/i.test(userPrompt)) {
      const reminder = '\n\n[PROTOCOL REMINDER: Before file operations, call mcp__protocol-enforcer__verify_protocol_compliance and mcp__protocol-enforcer__authorize_file_operation]';
      console.log(JSON.stringify({
        userPrompt: userPrompt + reminder
      }));
    } else {
      console.log(input); // Pass through unchanged
    }

    process.exit(0);
  } catch (e) {
    console.log(input); // Pass through on error
    process.exit(0);
  }
});

4. session-start.js

Initializes compliance tracking, displays protocol requirements.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    // Load .protocol-enforcer.json
    const cwd = process.cwd();
    const configPath = path.join(cwd, '.protocol-enforcer.json');

    if (fs.existsSync(configPath)) {
      const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));

      console.error('\n📋 Protocol Enforcer Active\n');
      console.error('Required Protocol Steps:');
      config.enforced_rules.require_protocol_steps.forEach(step => {
        console.error(`  - ${step.name} (hook: ${step.hook})`);
      });
      console.error(`\nMinimum Checklist Items: ${config.enforced_rules.minimum_checklist_items}\n`);
    }

    process.exit(0);
  } catch (e) {
    process.exit(0); // Silent fail
  }
});

5. stop.js

Generates compliance report at end of response.

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const os = require('os');

let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
  try {
    // Check for unused tokens
    const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');

    if (fs.existsSync(tokenFile)) {
      console.error('\n⚠️  Unused authorization token detected - was file operation skipped?\n');
      fs.unlinkSync(tokenFile); // Cleanup
    }

    // Read audit log for session summary
    const logFile = path.join(os.homedir(), '.protocol-enforcer-audit.log');

    if (fs.existsSync(logFile)) {
      const logs = fs.readFileSync(logFile, 'utf8').trim().split('\n');
      const recentLogs = logs.slice(-10); // Last 10 operations

      console.error('\n📊 Session Compliance Summary:');
      console.error(`Recent operations logged: ${recentLogs.length}`);
    }

    process.exit(0);
  } catch (e) {
    process.exit(0); // Silent fail
  }
});

License

MIT License - Copyright (c) 2025 Jason Lusk

File: config.behavioral.json

{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "sequential_thinking",
"hook": "user_prompt_submit"
},
{
"name": "analyze_behavior",
"hook": "user_prompt_submit"
},
{
"name": "apply_chores_fixes",
"hook": "pre_tool_use"
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 5
},
"checklist_items": [
{
"text": "Sequential thinking completed FIRST",
"hook": "user_prompt_submit"
},
{
"text": "Behavioral patterns analyzed systematically",
"hook": "user_prompt_submit"
},
{
"text": "Constraint issues addressed (structure/format adherence)",
"hook": "pre_tool_use"
},
{
"text": "Hallucination issues addressed (no false information)",
"hook": "pre_tool_use"
},
{
"text": "Overconfidence issues addressed (uncertainty expressed when appropriate)",
"hook": "pre_tool_use"
},
{
"text": "Reasoning issues addressed (logical consistency verified)",
"hook": "pre_tool_use"
},
{
"text": "Technical precision maintained (state 'I don't know' when uncertain)",
"hook": "pre_tool_use"
},
{
"text": "Zero deflection policy (attempt tools before claiming unavailable)",
"hook": "pre_tool_use"
},
{
"text": "No 'likely' explanations without verification",
"hook": "pre_tool_use"
},
{
"text": "Ethical/Safety issues addressed (harmful content prevented)",
"hook": "pre_tool_use"
},
{
"text": "Sycophancy issues addressed (truthfulness over false agreement)",
"hook": "pre_tool_use"
},
{
"text": "Professional objectivity maintained (facts over validation)",
"hook": "post_tool_use"
}
]
}

File: config.development.json

{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "sequential_thinking",
"hook": "user_prompt_submit"
},
{
"name": "task_initiation",
"hook": "user_prompt_submit"
},
{
"name": "pre_planning_analysis",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "plan_generation",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 5
},
"checklist_items": [
{
"text": "Pre-response compliance audit completed",
"hook": "user_prompt_submit"
},
{
"text": "Sequential thinking completed FIRST (mcp__clear-thought__sequentialthinking)",
"hook": "user_prompt_submit"
},
{
"text": "User request restated with ALL ambiguities clarified",
"hook": "user_prompt_submit"
},
{
"text": "Requirements gathered (tickets, designs, data sources)",
"hook": "pre_tool_use"
},
{
"text": "Existing patterns analyzed for reuse (components, hooks, utils)",
"hook": "pre_tool_use"
},
{
"text": "Dependencies identified (files to modify/create, reusable code)",
"hook": "pre_tool_use"
},
{
"text": "PLAN comprehensive (objective, files, dependencies, tools, confidence, risks)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "PLAN confirmed by user BEFORE executing code",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Technical precision maintained (no 'likely' explanations)",
"hook": "pre_tool_use"
},
{
"text": "Pattern matching verified (file structure, naming, code style)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Reuse verified (searched before creating)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "TypeScript strict mode (no 'any' without justification)",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Linting passed",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Type check passed",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "No console.log() statements",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Import order correct",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Acceptance criteria verified",
"hook": "post_tool_use"
},
{
"text": "Completion summarized with deviations justified",
"hook": "post_tool_use"
}
]
}

File: config.minimal.json

{
"enforced_rules": {
"require_protocol_steps": [
{
"name": "sequential_thinking",
"hook": "user_prompt_submit"
},
{
"name": "planning",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"name": "execution",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
}
],
"require_checklist_confirmation": true,
"minimum_checklist_items": 3
},
"checklist_items": [
{
"text": "Sequential thinking completed FIRST",
"hook": "user_prompt_submit"
},
{
"text": "Task requirements clarified with user",
"hook": "pre_tool_use"
},
{
"text": "Plan created and confirmed before execution",
"hook": "pre_tool_use",
"applies_to": ["Write", "Edit"]
},
{
"text": "Technical precision maintained (state 'I don't know' when uncertain)",
"hook": "pre_tool_use"
},
{
"text": "Zero deflection policy (attempt available tools before claiming unavailable)",
"hook": "pre_tool_use"
},
{
"text": "Completion verified against stated objectives",
"hook": "post_tool_use",
"applies_to": ["Write", "Edit"]
}
]
}

File: index.js

#!/usr/bin/env node
/**
* Protocol Enforcer MCP Server
* Enforces custom workflow protocol compliance before allowing file operations
*
* Author: Jason Lusk <[email protected]>
* License: MIT
*/
const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
// State tracking
const state = {
configPath: null,
config: null,
complianceChecks: [],
operationTokens: new Map(), // Map<token, {expires: timestamp, used: boolean}>
tokenTimeout: 60000 // 60 seconds
};
// Default configuration (MODEL Framework CHORES behavioral fixes)
const DEFAULT_CONFIG = {
enforced_rules: {
require_protocol_steps: [
{
name: "analyze_behavior",
hook: "user_prompt_submit"
},
{
name: "apply_chores_fixes",
hook: "pre_tool_use",
applies_to: ["Write", "Edit"]
}
],
require_checklist_confirmation: true,
minimum_checklist_items: 3
},
checklist_items: [
{
text: "Constraint issues addressed (structure/format adherence)",
hook: "pre_tool_use"
},
{
text: "Hallucination issues addressed (no false information)",
hook: "pre_tool_use"
},
{
text: "Overconfidence issues addressed (uncertainty expressed when appropriate)",
hook: "pre_tool_use"
},
{
text: "Reasoning issues addressed (logical consistency verified)",
hook: "pre_tool_use"
},
{
text: "Ethical/Safety issues addressed (harmful content prevented)",
hook: "pre_tool_use"
},
{
text: "Sycophancy issues addressed (truthfulness over false agreement)",
hook: "pre_tool_use"
}
]
};
// Find config file (project scope takes precedence)
function findConfigFile() {
const cwd = process.cwd();
const projectConfig = path.join(cwd, '.protocol-enforcer.json');
const homeConfig = path.join(process.env.HOME || process.env.USERPROFILE, '.protocol-enforcer.json');
if (fs.existsSync(projectConfig)) {
return projectConfig;
}
if (fs.existsSync(homeConfig)) {
return homeConfig;
}
return null;
}
// Load configuration
function loadConfig() {
const configPath = findConfigFile();
if (configPath) {
state.configPath = configPath;
const rawConfig = JSON.parse(fs.readFileSync(configPath, 'utf8'));
state.config = validateConfig(rawConfig);
return state.config;
}
state.config = DEFAULT_CONFIG;
return null;
}
// Tool: initialize_protocol_config
async function initializeProtocolConfig(args) {
const scope = args.scope || 'project';
let configPath;
if (scope === 'project') {
configPath = path.join(process.cwd(), '.protocol-enforcer.json');
} else if (scope === 'user') {
configPath = path.join(process.env.HOME || process.env.USERPROFILE, '.protocol-enforcer.json');
} else {
return {
content: [{
type: 'text',
text: JSON.stringify({ error: 'Invalid scope. Must be "project" or "user".' }, null, 2)
}]
};
}
if (fs.existsSync(configPath)) {
return {
content: [{
type: 'text',
text: JSON.stringify({
error: 'Configuration file already exists',
path: configPath
}, null, 2)
}]
};
}
fs.writeFileSync(configPath, JSON.stringify(DEFAULT_CONFIG, null, 2), 'utf8');
state.configPath = configPath;
state.config = DEFAULT_CONFIG;
return {
content: [{
type: 'text',
text: JSON.stringify({
success: true,
message: `Configuration file created at ${configPath}`,
config: DEFAULT_CONFIG
}, null, 2)
}]
};
}
// Validate config format
function validateConfig(config) {
const errors = [];
// Validate protocol steps
if (config.enforced_rules.require_protocol_steps) {
config.enforced_rules.require_protocol_steps.forEach((step, idx) => {
if (typeof step === 'string') {
errors.push(`Protocol step at index ${idx} is a string. Must be an object with 'name' and 'hook' properties.`);
} else if (!step.name || !step.hook) {
errors.push(`Protocol step at index ${idx} missing required 'name' or 'hook' property.`);
}
});
}
// Validate checklist items
if (config.checklist_items) {
config.checklist_items.forEach((item, idx) => {
if (typeof item === 'string') {
errors.push(`Checklist item at index ${idx} is a string. Must be an object with 'text' and 'hook' properties.`);
} else if (!item.text || !item.hook) {
errors.push(`Checklist item at index ${idx} missing required 'text' or 'hook' property.`);
}
});
}
if (errors.length > 0) {
throw new Error(`Invalid configuration format:\n${errors.join('\n')}\n\nSee README.md Configuration Reference section for correct format.`);
}
return config;
}
// Filter rules by hook and tool name
function filterByHook(items, hook, toolName = null) {
return items.filter(item => {
// Check if hook matches
if (item.hook !== hook) return false;
// Check if tool-specific filtering applies
if (item.applies_to && toolName) {
return item.applies_to.includes(toolName);
}
return true;
});
}
// Tool: verify_protocol_compliance
async function verifyProtocolCompliance(args) {
const rawConfig = state.config || loadConfig() || DEFAULT_CONFIG;
// Validate config format (throws if invalid)
const config = validateConfig(rawConfig);
const violations = [];
const hook = args.hook;
const toolName = args.tool_name || null;
if (!hook) {
return {
content: [{
type: 'text',
text: JSON.stringify({
error: 'Missing required parameter: hook. Must specify which hook is calling (e.g., "user_prompt_submit", "pre_tool_use", "post_tool_use").'
}, null, 2)
}]
};
}
// Check required protocol steps (filtered by hook)
if (config.enforced_rules.require_protocol_steps && Array.isArray(config.enforced_rules.require_protocol_steps)) {
const allRequiredSteps = config.enforced_rules.require_protocol_steps;
const hookFilteredSteps = filterByHook(allRequiredSteps, hook, toolName);
const completedSteps = args.protocol_steps_completed || [];
const missingSteps = hookFilteredSteps.filter(step => !completedSteps.includes(step.name));
if (missingSteps.length > 0) {
missingSteps.forEach(step => {
violations.push(`VIOLATION: Required protocol step not completed: ${step.name} (hook: ${hook})`);
});
}
}
// Check checklist confirmation (filtered by hook)
if (config.enforced_rules.require_checklist_confirmation) {
const checkedItems = args.checklist_items_checked || [];
const allRequiredItems = config.checklist_items || [];
const hookFilteredItems = filterByHook(allRequiredItems, hook, toolName);
const minItems = config.enforced_rules.minimum_checklist_items || 0;
// Count only items applicable to this hook
const applicableMinItems = Math.min(minItems, hookFilteredItems.length);
if (checkedItems.length < applicableMinItems) {
violations.push(`VIOLATION: Only ${checkedItems.length} checklist items checked, minimum ${applicableMinItems} required for hook '${hook}'`);
}
const uncheckedRequired = hookFilteredItems.filter(item => !checkedItems.includes(item.text));
if (uncheckedRequired.length > 0) {
violations.push(`VIOLATION: Required checklist items not confirmed for hook '${hook}': ${uncheckedRequired.map(i => i.text).join(', ')}`);
}
}
// Record check
state.complianceChecks.push({
timestamp: new Date().toISOString(),
passed: violations.length === 0,
violations: violations,
args: args
});
if (violations.length > 0) {
return {
content: [{
type: 'text',
text: JSON.stringify({
compliant: false,
violations: violations,
message: 'Protocol compliance check FAILED. Fix violations before proceeding.'
}, null, 2)
}]
};
}
// Generate single-use operation token
const crypto = require('crypto');
const token = crypto.randomBytes(32).toString('hex');
const expires = Date.now() + state.tokenTimeout;
state.operationTokens.set(token, { expires, used: false });
// Clean up expired tokens
for (const [key, value] of state.operationTokens.entries()) {
if (value.expires < Date.now() || value.used) {
state.operationTokens.delete(key);
}
}
return {
content: [{
type: 'text',
text: JSON.stringify({
compliant: true,
operation_token: token,
token_expires_in_seconds: state.tokenTimeout / 1000,
message: 'Protocol compliance verified. Use the operation_token with authorize_file_operation before proceeding.'
}, null, 2)
}]
};
}
// Tool: get_compliance_status
async function getComplianceStatus() {
const recentChecks = state.complianceChecks.slice(-10);
const passedCount = recentChecks.filter(c => c.passed).length;
const failedCount = recentChecks.length - passedCount;
return {
content: [{
type: 'text',
text: JSON.stringify({
total_checks: state.complianceChecks.length,
recent_checks: recentChecks.length,
passed: passedCount,
failed: failedCount,
recent_violations: recentChecks
.filter(c => !c.passed)
.map(c => ({ timestamp: c.timestamp, violations: c.violations }))
}, null, 2)
}]
};
}
// Tool: get_protocol_config
async function getProtocolConfig() {
const config = state.config || loadConfig() || DEFAULT_CONFIG;
return {
content: [{
type: 'text',
text: JSON.stringify({
config_path: state.configPath || 'Using default configuration',
config: config
}, null, 2)
}]
};
}
// Tool: authorize_file_operation
async function authorizeFileOperation(args) {
const token = args.operation_token;
if (!token) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'No operation token provided. You must call verify_protocol_compliance first to obtain a token.'
}, null, 2)
}]
};
}
const tokenData = state.operationTokens.get(token);
if (!tokenData) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'Invalid or expired operation token. Call verify_protocol_compliance again to obtain a new token.'
}, null, 2)
}]
};
}
if (tokenData.used) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'Operation token already used. Each token is single-use only. Call verify_protocol_compliance again.'
}, null, 2)
}]
};
}
if (tokenData.expires < Date.now()) {
state.operationTokens.delete(token);
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: 'Operation token expired. Call verify_protocol_compliance again to obtain a new token.'
}, null, 2)
}]
};
}
// Mark token as used
tokenData.used = true;
// Write token file for PreToolUse hook verification
const tokenFile = path.join(os.homedir(), '.protocol-enforcer-token');
try {
fs.writeFileSync(tokenFile, token, 'utf8');
} catch (err) {
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: false,
error: `Failed to write token file: ${err.message}`
}, null, 2)
}]
};
}
return {
content: [{
type: 'text',
text: JSON.stringify({
authorized: true,
message: 'File operation authorized. Token file written for hook verification. You may now proceed with Write/Edit operations.'
}, null, 2)
}]
};
}
// MCP Protocol Handler
const tools = [
{
name: 'initialize_protocol_config',
description: 'Create a new protocol enforcer configuration file at project or user scope',
inputSchema: {
type: 'object',
properties: {
scope: {
type: 'string',
enum: ['project', 'user'],
description: 'Where to create the config file: "project" (.protocol-enforcer.json in current directory) or "user" (~/.protocol-enforcer.json)'
}
},
required: ['scope']
}
},
{
name: 'verify_protocol_compliance',
description: 'Verify that mandatory protocol steps have been completed before allowing file operations. This is a generic tool - protocol steps and checklist items are defined in your .protocol-enforcer.json configuration file. Supports hook-specific filtering.',
inputSchema: {
type: 'object',
properties: {
hook: {
type: 'string',
enum: ['user_prompt_submit', 'session_start', 'pre_tool_use', 'post_tool_use', 'stop'],
description: 'REQUIRED: Which hook is calling this verification (e.g., "pre_tool_use", "user_prompt_submit"). Filters rules to only those applicable to this hook.'
},
tool_name: {
type: 'string',
description: 'Optional: name of the tool being called (e.g., "Write", "Edit"). Used for tool-specific filtering when combined with hook. Only applies when hook is "pre_tool_use" or "post_tool_use".'
},
protocol_steps_completed: {
type: 'array',
items: { type: 'string' },
description: 'List of protocol step names that have been completed (e.g., ["planning", "analysis"]). Step names must match those defined in your .protocol-enforcer.json config.'
},
checklist_items_checked: {
type: 'array',
items: { type: 'string' },
description: 'List of checklist items that were verified. Items should match those defined in your .protocol-enforcer.json config.'
}
},
required: ['hook', 'protocol_steps_completed', 'checklist_items_checked']
}
},
{
name: 'get_compliance_status',
description: 'Get current compliance check statistics and recent violations',
inputSchema: {
type: 'object',
properties: {}
}
},
{
name: 'get_protocol_config',
description: 'Get the current protocol enforcer configuration',
inputSchema: {
type: 'object',
properties: {}
}
},
{
name: 'authorize_file_operation',
description: 'MANDATORY before ANY file write/edit operation. Validates the operation token from verify_protocol_compliance. Single-use token that expires in 60 seconds.',
inputSchema: {
type: 'object',
properties: {
operation_token: {
type: 'string',
description: 'The operation token received from verify_protocol_compliance. Required for authorization.'
}
},
required: ['operation_token']
}
}
];
// Main MCP message handler
async function handleMessage(message) {
const { method, params, id } = message;
switch (method) {
case 'initialize':
return {
jsonrpc: '2.0',
id,
result: {
protocolVersion: '2024-11-05',
serverInfo: {
name: 'protocol-enforcer',
version: '1.0.0'
},
capabilities: {
tools: {}
}
}
};
case 'tools/list':
return {
jsonrpc: '2.0',
id,
result: { tools }
};
case 'tools/call':
const { name, arguments: args } = params;
let result;
switch (name) {
case 'initialize_protocol_config':
result = await initializeProtocolConfig(args || {});
break;
case 'verify_protocol_compliance':
result = await verifyProtocolCompliance(args || {});
break;
case 'get_compliance_status':
result = await getComplianceStatus();
break;
case 'get_protocol_config':
result = await getProtocolConfig();
break;
case 'authorize_file_operation':
result = await authorizeFileOperation(args || {});
break;
default:
result = {
content: [{
type: 'text',
text: JSON.stringify({ error: `Unknown tool: ${name}` })
}],
isError: true
};
}
return {
jsonrpc: '2.0',
id,
result
};
default:
return {
jsonrpc: '2.0',
id,
error: {
code: -32601,
message: `Method not found: ${method}`
}
};
}
}
// Stdio transport
async function main() {
loadConfig();
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: false
});
rl.on('line', async (line) => {
try {
const message = JSON.parse(line);
const response = await handleMessage(message);
console.log(JSON.stringify(response));
} catch (error) {
console.error(JSON.stringify({
error: error.message,
stack: error.stack
}));
}
});
// Keep process alive indefinitely for MCP persistent connection
// This interval ensures the event loop never exits
setInterval(() => {
// Keep-alive: This keeps the process running
// MCP servers must maintain persistent stdio connection
}, 2147483647); // Maximum safe timeout (~24.8 days)
}
main().catch(console.error);

File: package.json

{
"name": "protocol-enforcer-mcp",
"version": "1.0.0",
"description": "MCP server that enforces mandatory supervisor protocol compliance before allowing file operations",
"author": "Jason Lusk <[email protected]>",
"license": "MIT",
"main": "index.js",
"bin": {
"protocol-enforcer": "./index.js"
},
"engines": {
"node": ">=14.0.0"
},
"keywords": [
"mcp",
"model-context-protocol",
"protocol-enforcer",
"ai-assistant",
"code-quality",
"compliance"
]
}