Fabio Rollo fabiodr
User Query: Based on using the /modes-of-reasoning-project-analysis skill, what did the skill and the swarm that followed it get right? What was wrong? How could those findings be articulated into feedback that could be used to improve the skill so it's more effective?

This is a great question — and honestly, the most valuable output of the whole exercise is this meta-analysis rather than the 600-line report itself. Let me break it down honestly.

What the Skill/Swarm Got Right

  1. Convergence as signal genuinely works. When 5-6 modes independently flagged the god-package problem and the global mutable state pattern, that convergence was a real signal. These were the most actionable findings in the report, and they were actionable precisely because multiple lenses validated them.
  2. The adversarial mode (H2) found things nothing else would have. The pane input API missing policy checks, the SLB self-approval bypass in local mode, the CORS config mutation vector — these are real code-level
@farzaa
farzaa / wiki-gen-skill.md
Last active April 9, 2026 11:59
personal_wiki_skill.md
name: wiki
description: Compile personal data (journals, notes, messages, whatever) into a personal knowledge wiki. Ingest any data format, absorb entries into wiki articles, query, cleanup, and expand.
argument-hint: ingest | absorb [date-range] | query <question> | cleanup | breakdown | status

Personal Knowledge Wiki

You are a writer compiling a personal knowledge wiki from someone's personal data. Not a filing clerk. A writer. Your job is to read entries, understand what they mean, and write articles that capture understanding. The wiki is a map of a mind.

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, or OpenCode / Pi). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
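As a rough illustration of the accumulation idea, a minimal sketch (names, file layout, and the append step are illustrative assumptions, not from the gist): each new entry is folded into a persistent article file, so a query reads one compiled article instead of re-synthesizing raw chunks every time.

```python
from pathlib import Path

# Hypothetical sketch: articles accumulate understanding across entries,
# so queries hit compiled articles rather than raw document chunks
# (the RAG failure mode described above).
WIKI = Path("wiki")

def absorb(topic: str, entry: str) -> None:
    """Fold a new journal/note entry into the topic's article."""
    WIKI.mkdir(exist_ok=True)
    article = WIKI / f"{topic}.md"
    existing = article.read_text() if article.exists() else f"# {topic}\n"
    # A real implementation would ask the LLM to rewrite the article in
    # light of the new entry; here we just append, to show where the
    # accumulation happens.
    article.write_text(existing + f"\n- {entry}")

def query(topic: str) -> str:
    """Answer from the compiled article, not from the raw entries."""
    article = WIKI / f"{topic}.md"
    return article.read_text() if article.exists() else "(no article yet)"

absorb("running", "Ran 5k, left knee felt fine after new shoes.")
absorb("running", "Skipped run, knee sore again.")
print(query("running"))
```

The point of the sketch is only the shape: ingestion mutates a durable artifact, so the synthesis cost is paid once at absorb time instead of on every query.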

@boxabirds
boxabirds / claude-pre-commit.sh
Created March 9, 2026 07:08
No fallbacks! Use this to check the diff for code that suspiciously looks like a fallback (an alternative implementation, often left because the agent was too lazy to implement the real thing properly).
# =============================================================================
# Step 2: Fallback detection (using Claude CLI)
# =============================================================================
# Skip if claude not available
if ! command -v claude &> /dev/null; then
  echo "Warning: claude CLI not found, skipping fallback check"
  exit 0
fi
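Before spending an LLM call, a hook like this could run a cheap pattern pre-filter over the staged diff. A minimal Python sketch, where the "fallback smell" patterns are my own guesses and not part of the original hook:

```python
import re

# Illustrative fallback "smells" (assumed patterns, not from the hook):
# markers agents often leave behind when they stub out the real logic.
FALLBACK_PATTERNS = [
    r"\bfallback\b",
    r"#\s*(TODO|FIXME|for now)",
    r"\bNotImplementedError\b",
    r"except\s+Exception:\s*pass",
]

def suspicious_lines(diff: str) -> list[str]:
    """Return added diff lines that match a fallback pattern."""
    hits = []
    for line in diff.splitlines():
        if not line.startswith("+"):
            continue  # only scan lines the commit adds
        if any(re.search(p, line, re.IGNORECASE) for p in FALLBACK_PATTERNS):
            hits.append(line)
    return hits

diff = "\n".join([
    "+def parse(data):",
    "+    # fallback: just return raw data for now",
    "+    return data",
])
print(suspicious_lines(diff))
```

The shell hook could pipe `git diff --cached` into a helper like this and only invoke `claude` (or block the commit) when the pre-filter finds hits.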
@dollspace-gay
dollspace-gay / VSDD.md
Last active April 8, 2026 19:31
Verified Spec-Driven Development

Verified Spec-Driven Development (VSDD)

The Fusion: VDD × TDD × SDD for AI-Native Engineering

Overview

Verified Spec-Driven Development (VSDD) is a unified software engineering methodology that fuses three proven paradigms into a single AI-orchestrated pipeline:

  • Spec-Driven Development (SDD): Define the contract before writing a single line of implementation. Specs are the source of truth.
  • Test-Driven Development (TDD): Tests are written before code. Red → Green → Refactor. No code exists without a failing test that demanded it.
@mberman84
mberman84 / all_files.md
Created February 24, 2026 21:09
Matt's Markdown Files

OpenClaw: System Prompt File Templates

Generalized versions of all root .md files used by OpenClaw. These files are loaded into the agent's system prompt on every request (except MEMORY.md which is conditional).

Copy these as starting points and customize for your own setup. Replace <placeholders> with your values.


AGENTS.md

@minimaxir
minimaxir / AGENTS.md
Last active March 19, 2026 03:06
Python AGENTS.md (2026-02-23)

Agent Guidelines for Python Code Quality

This document provides guidelines for maintaining high-quality Python code. These rules MUST be followed by all AI coding agents and contributors.

Your Core Principles

All code you write MUST be fully optimized.

"Fully optimized" includes:

@minimaxir
minimaxir / AGENTS.md
Last active April 3, 2026 10:40
Rust AGENTS.md (2026-02-23)

Agent Guidelines for Rust Code Quality

This document provides guidelines for maintaining high-quality Rust code. These rules MUST be followed by all AI coding agents and contributors.

Your Core Principles

All code you write MUST be fully optimized.

"Fully optimized" includes:

@hansonw
hansonw / codex_gpt2_codegolf.html
Last active March 6, 2026 23:25
Codex solution to gpt2-codegolf
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>codex_gpt2_codegolf.jsonl - Codex Session</title>
<style>
:root {
--bg: #ffffff;
--panel: #ffffff;
@weshoke
weshoke / codebase-analyzer.py
Created February 8, 2026 21:34
dspy.RLM analyzing a code base with a rules file
#!/usr/bin/env python3
"""
Codebase analyzer using Recursive Language Models (RLM) via DSPy.
Based on: https://kmad.ai/Recursive-Language-Models-Security-Audit
Usage:
python analyze-codebase.py --mode security --output report.md
python analyze-codebase.py --mode documentation --exclude tests,vendor
python analyze-codebase.py --mode quality --max-iterations 50