@karpathy
Created April 4, 2026 16:25
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then kept current, not re-derived on every query.

This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.

You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.

This can apply to a lot of different contexts. A few examples:

  • Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
  • Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
  • Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like Tolkien Gateway — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
  • Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
  • Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.

Architecture

There are three layers:

Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.

The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.

The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
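For concreteness, a schema file might begin with something like the sketch below. Every convention here (the directory names, the page types, the linking rules) is a hypothetical starting point, not a prescription; you and your LLM will evolve your own.

```markdown
# Wiki schema (CLAUDE.md)

## Layout
- raw/          : immutable source documents. Read, never modify.
- wiki/         : LLM-maintained markdown pages.
- wiki/index.md : catalog of all pages, updated on every ingest.
- wiki/log.md   : append-only record of ingests, queries, lint passes.

## Conventions
- Link pages with [[wikilinks]]. Every new page gets an index entry.
- When a new source contradicts an existing claim, flag the conflict
  on the page instead of silently overwriting it.

## Ingest workflow
1. Read the new source in raw/.
2. Discuss key takeaways with the user.
3. Write a summary page; update affected entity and concept pages.
4. Update index.md; append a log.md entry.
```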

Operations

Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.

Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.

Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.
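The orphan-page check in particular is easy to script rather than leave entirely to the LLM. Below is a minimal sketch in Python, assuming pages live flat in a single directory and link to each other with `[[wikilink]]` syntax; both are assumptions, not requirements of the pattern.

```python
import re
from pathlib import Path

def find_orphans(wiki_dir: str) -> list[str]:
    """Return wiki pages that no other page links to."""
    pages = {p.stem: p for p in Path(wiki_dir).glob("*.md")}
    linked = set()
    for page in pages.values():
        # Collect every [[Target]] or [[Target|alias]] reference in the body.
        for target in re.findall(r"\[\[([^\]|#]+)", page.read_text()):
            linked.add(target.strip())
    # index.md is excluded by convention: it links out but is never linked to.
    return sorted(name for name in pages if name not in linked and name != "index")
```

A lint pass would surface these for the LLM to either link up or merge away.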

Indexing and logging

Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:

index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, a few hundred pages) and avoids the need for embedding-based RAG infrastructure.
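For illustration, an index entry might look like the following. The categories, page names, and metadata here are all hypothetical; use whatever your schema settles on.

```markdown
# Index

## Entities
- [[ada-lovelace]]: mathematician, early computing pioneer (3 sources)

## Concepts
- [[analytical-engine]]: Babbage's proposed mechanical computer (2 sources)

## Sources
- [[2026-04-02-article-title]]: summary of an ingested article (2026-04-02)
```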

log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. ## [2026-04-02] ingest | Article Title), the log becomes parseable with simple unix tools — grep "^## \[" log.md | tail -5 gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
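Because the prefix convention makes the log machine-readable, even custom tooling over it can stay trivial. Here is a sketch of a parser for the entry format suggested above; the exact format is whatever you document in your schema.

```python
import re

# Matches headers of the form: ## [2026-04-02] ingest | Article Title
ENTRY = re.compile(r"^## \[(\d{4}-\d{2}-\d{2})\] (\w+) \| (.+)$")

def last_entries(log_text: str, n: int = 5) -> list[tuple[str, str, str]]:
    """Return the last n (date, kind, title) entries from an append-only log."""
    entries = [m.groups() for line in log_text.splitlines()
               if (m := ENTRY.match(line))]
    return entries[-n:]  # append-only, so the newest entries are last
```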

Optional: CLI tools

At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. qmd is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.

Tips and tricks

  • Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
  • Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
  • Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
  • Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
  • Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists.
  • The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
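To make the Dataview tip concrete: a query along these lines renders a live table of entity pages, assuming your LLM writes a `tags` list and a `sources` count into frontmatter (a convention you would have to adopt; the folder name and fields here are hypothetical).

```dataview
TABLE sources AS "Sources", file.mtime AS "Updated"
FROM "wiki"
WHERE contains(tags, "entity")
SORT file.mtime DESC
```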

Why this works

The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.

The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.

The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.

Note

This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.

@SonicBotMan

SonicBotMan commented Apr 13, 2026

We've been building wiki-kb (https://github.com/SonicBotMan/wiki-kb), a system based on this exact pattern from Karpathy's gist — "compiling vs retrieving." The gist describes the idea well, but we found the hard part isn't the initial build, it's preventing degradation over months of daily use. Here's what we added on top:

Architecture: 3 layers instead of 2

Karpathy describes raw sources → wiki. We added a third layer in between: schema. Each wiki page has YAML frontmatter with typed fields (lists, dates, entity references, status). A resolver.py validates every write before it hits the filesystem. This catches most "lazy LLM" problems (empty fields, wrong types, broken cross-references) before they compound.
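The "validate every write" idea can be sketched in a few lines. This is a hypothetical illustration of the pattern, not wiki-kb's actual resolver.py; the schema fields are invented.

```python
# Hypothetical typed schema: field name -> required Python type.
SCHEMA = {"title": str, "tags": list, "updated": str}

def validate_frontmatter(fm: dict) -> list[str]:
    """Return a list of problems; an empty list means the write may proceed."""
    errors = []
    for field, typ in SCHEMA.items():
        if field not in fm:
            errors.append(f"missing field: {field}")
        elif not isinstance(fm[field], typ):
            errors.append(f"wrong type for {field}: expected {typ.__name__}")
        elif fm[field] in ("", [], None):  # the classic "lazy LLM" empty field
            errors.append(f"empty field: {field}")
    return errors
```

A resolver would run this (plus cross-reference checks) on every page write and reject or repair failures before they reach the filesystem.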

Entity Registry — the graph backbone

A JSON registry (with file locking) tracks every entity (people, concepts, projects, events) with canonical names and aliases. When the LLM tries to create a duplicate entity with a slightly different name, the registry catches it and merges. This is what prevents the wiki from turning into 50 pages about the same thing with slightly different titles — one of the first failure modes we hit.
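Alias-based canonicalization, the core of such a registry, might look like the sketch below. The entity names are invented, and the real registry also does file locking and persistence, omitted here.

```python
def canonicalize(name: str, registry: dict[str, list[str]]) -> str:
    """Map a proposed entity name onto an existing canonical entity, or register it.

    registry maps canonical name -> list of known aliases.
    """
    key = name.strip().lower()
    for canonical, aliases in registry.items():
        if key == canonical.lower() or key in (a.lower() for a in aliases):
            return canonical  # duplicate detected: merge into the existing entity
    registry[name] = []       # genuinely new entity: register it
    return name
```

On top of this you might add fuzzy matching for near-miss spellings, but exact alias lookup already catches the common duplication failure mode.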

Periodic lint cycle

After any wiki update, a verification pass checks: does every entity referenced in frontmatter actually exist? Are cross-references bidirectional? Does the graph remain connected? This runs automatically and flags issues before they cascade.
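The bidirectionality check is simple to express. A sketch, assuming each page's outgoing entity references have already been parsed out of its frontmatter (the page names are placeholders):

```python
def missing_backlinks(refs: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (source, target) pairs where target does not link back to source."""
    return sorted(
        (src, dst)
        for src, targets in refs.items()
        for dst in targets
        if src not in refs.get(dst, set())
    )
```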

On the model collapse concern

This is real — we've seen it happen when the LLM starts rewriting existing pages instead of adding new information. Our mitigation is structural: the typed frontmatter and entity registry provide "hard rails" that are harder to corrupt than freeform prose. The wiki can drift in narrative quality, but the structural invariants (entity relationships, bidirectional links, graph topology) remain verifiable programmatically.

MCP-based automation

The whole system runs as an MCP server, so any LLM agent (Hermes, Claude Code, etc.) can read/write the wiki through a standardized tool interface. A semantic search index (OpenViking) sits alongside the wiki for retrieval-augmented queries when the compiled knowledge isn't enough.

The key takeaway from our experience: don't just let the LLM write freely and hope for the best. Enforce structural invariants at the schema layer, and the wiki stays useful much longer. We've been running this daily for several months now and the quality has held up well.

@gnusupport

99% of comments are made by AI. I really don't see the value in reading these comments and ads: long and unreadable, good-looking but no help. I call them trash.

What an irony that in a discussion thread about LLMs/AI you are protesting against people who use that same AI/LLM to generate their text, while at the same time having nothing to contribute yourself.

I find this random brainstorming powerful, and I do expect well-written and expanded text. This isn't coffee chat over breakfast. This is an empowering thread. I would ask you to contribute to the brainstorming instead of complaining about what tools people use.

Please don't post any ads, the true valuable things are thoughts.

Exaggerated.

@gnusupport

I am following principles from:

About Dynamic Knowledge Repositories (DKR):
https://www.dougengelbart.org/content/view/190/163/


Thus ANYTHING can become and should be an elementary object. Objects can be packed, shared, displayed, whatever.

Even a short note. Or number, or UUID, file, database based note, entries, remote files, PDFs, anything.

Those files should never be moved or copied for the sake of LLM/wiki "ingestion", since ingestion alone already generates embeddings and text snippets (sometimes larger than the file itself).

Use embedding types:

1 Elementary objects (body)
2 People
3 Files
4 LLM Responses
5 Speech
6 Org Mode Headings
7 Emacs Lisp
8 Images
10 M-x command
11 Hyperscope Query
12 Elementary object (name)
13 URL text
14 E-mail (Maildir)

Add any embedding type.

Generating embeddings for everything.

Use different retrievals for specific use cases; even grep works fine. Use PostgreSQL full-text search, or mu find, or notmuch, you name it.

Use intersections. 120,000 documents can be intersected by their properties in unlimited ways:

  • different website pages;
  • different subjects;
  • languages, media types, sizes of documents, prices, etc.

Build your own DKR.

@PurpleBanana-ai

@gnusupport it makes it really hard to take any of the comments seriously if I feel like I'm talking to a modern version of ELIZA (with some self-promotion thrown in: 50 of the 435 current comments are plugging their own projects).

It had to resonate with me for me to actually post something for the bots, crawlers, and other LLMs to analyze, but I thought this deserved a thumbs-up at least. I wouldn't be looking into this entire concept if I didn't love AI and LLMs, but I agree with you on the comment issues. This type of work that Karpathy put out should complement our intelligence, yet when it's hit with what you felt and saw in the comments, then used AI to quantify, it raises a different curtain that some people are not going to like to see behind (especially if a mirror is there).

Using AI to analyze and measure "it" is exactly the right use case for blending our gray matter and silicon together, not in lieu of, but in tandem with. So I agree with you, and personally, I would give it a name: it's another piece of the broader enshittification of everything. If people cannot even write a comment without using an LLM to "fine tune it", or worse, just cut and paste a response, then this all just becomes bots talking with bots, who were trained by previous bots, trained by other earlier bots, who were then trained on data crawled from one of us meatbags having an original thought. Without that first step at the bottom of the chain, we become a synthetic echo chamber quickly moving toward catastrophic rot. You can love working with AI/LLMs and still use them without becoming dependent on them for every word, and you can also use them to point out flaws or find the pattern you found; they are not mutually exclusive.

Now for my self-promoting plug, "...brought to you by carls jr., with support from Brawndo, it's got what plants crave!"

@gnusupport

@PurpleBanana-ai fine, though personally I do not get frustrated by text laid out about people's projects. The problem is that, IMHO, the majority of people, including me, cannot express ourselves in a way that meets the language standard and is laid out well for the intended audience. And what to say of non-native English speakers? I cannot. I have to correct my text. I welcome those project makers; this thread has become a treasure for finding similar projects. I see nothing wrong with it.

I find the resistance to LLM-generated text funny. Instead of reading the point (someone took care to provide ideas to you), people focus on how it sounds: if there is an "Overall," at the end, it sounds LLM-generated. But it is not the words or the tool used that matter; it is the idea, and that is what gets overlooked.

All projects represented seem to be very good in the direction from LLM/WIKI ideas.

I don't expect small talk on such technical subjects.


@FBoschman

I'm just not enough of a commercial guy, I think. The whole discussion about 'self promotion, AI bot training' just does not resonate with me. I have added to this growing knowledge base, and that's it. I'm curious about what others have to add to this idea. If bolstering your ego is your thing, then do that on your own time. I am moving forward.


@meghm1007

How's the token usage for such a project? As I scale and give more memory context, I assume each run would consume exponentially more tokens.

@abbacusgroup

The solution we developed allows the AI you pay for to do the coding, and a local LLM to maintain the second brain.

The maintenance burden. That is the insight here. Not the reading, not the thinking; the bookkeeping. Cross-references that decay. Contradictions that accumulate silently. Summaries that stop reflecting reality the moment a new decision is made. Humans abandon knowledge systems because the cost of keeping them honest eventually exceeds the value of having them at all.

I have been building against this exact problem. Cortex is a persistent knowledge system that runs as an MCP server. It classifies knowledge objects with a formal OWL-RL ontology, stores them in a dual architecture (Oxigraph SPARQL graph + SQLite FTS5), and reasons over them deterministically.

The distinction from file-based approaches: Cortex traces transitive chains. If A supersedes B and B supersedes C, it infers that A supersedes C. It catches contradictions structurally. It detects systemic patterns. It surfaces stale decisions. All of this without LLM calls. The reasoning is formal logic, not statistical prediction.
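The transitive-chain inference can be illustrated with a naive fixed-point computation. This is a sketch of the idea only, not Cortex's actual OWL-RL reasoning over Oxigraph; the entity names are placeholders.

```python
def transitive_closure(supersedes: dict[str, set[str]]) -> dict[str, set[str]]:
    """If A supersedes B and B supersedes C, infer that A supersedes C.

    Iterates until no new inferences appear (a fixed point).
    """
    closure = {k: set(v) for k, v in supersedes.items()}
    changed = True
    while changed:
        changed = False
        for a, bs in closure.items():
            for b in list(bs):
                for c in closure.get(b, set()):
                    if c not in bs:
                        bs.add(c)  # inferred: a supersedes c
                        changed = True
    return closure
```

A real reasoner does this declaratively (e.g. via SPARQL property paths or RL rules), but the inferred edges are the same.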

It runs locally from ~/.cortex/, speaks MCP, and works with any model.

Your LLM Wiki framing with a formal knowledge graph and MCP underneath feels like the natural convergence. I would be curious to hear your take.

https://github.com/abbacusgroup/cortex



@gitdexgit

gitdexgit commented Apr 13, 2026

little QoL feature: Read less; get the meaning; move on

Description:
Add a TL;DR to your ~/wiki. Caveman communication is the way <- abstracting (fewer words; most meaning) the answer/.md for the human. But you can click a button to read the details if you need to.


goal1: Read the gist of the ~/wiki or the LLM answer, but keep the option to read the detailed answer. <- Abstracting the answer/.md for the human, with a button to read details if you need them.

goal2: Read fewer words; get the most meaning; decide to read the whole detailed .md or move on. The less you read the better, because you focus more on output -> writing. Always communicate (write, read in the IDE) simply first, but keep the option to go into detail (the main .md <- the source code; a very detailed, almost research-like .md paper or article holding all the context for the LLM and for analytical reading).

Solution:

https://github.com/JuliusBrussee/caveman

Call this the summary version, or the readable version. The main .md is for the LLM, while the other .md is the summary, the TL;DR version; both give access to the same knowledge. This can be added in the editing layer where you ask Q&A as well.

Again, the goal is fewer words; more meaning.


Details:

Problem1: The longer you go, the more /raw data you have, the more .md files you have to read, the longer it takes to read them, and the less your brain remembers. You keep asking the same Q&A again and again. Potentially useful Q&A might not be asked. You misunderstand the information contained in the ~/wiki. You dump bad /raw, the LLM compiles it, and then you have to ask it to delete the bad .md because of the bad /raw. You waste time. The LLM can't take care of everything.

Problem2: You read a .md a week ago. You don't really remember what it's all about. You ask Q&A and find it with the LLM's help. You reread the .md, which has the same detailed words. Your goal is only a memory refresher, not re-reading the whole thing <-- too much scrolling down and too much eye-scanning of many words. You take longer to kick the engine (producing actual human output) that feeds the /raw ingest.

idea1 (something like this): Instead of the LLM giving you 1 answer to your 1 question, it gives you 2 answers. You read the gist answer (far fewer words, keeping most of the meaning) but can decide to read the fully detailed answer if you want.

idea2 (something like this): When compiling or producing a .md, make article1.md for the LLM that is detailed, and make a 2nd version of the same file, article1-human.md, for the human, keeping as much meaning as possible in as few words as possible, with the option to read further <-- saves time. There are always 2 files of the same .md: 1 for the LLM and your system-2 brain (long reading sessions, reasoning), the other for your system-1 brain and long-term mental-model retention through frequent repetition of the logic, because you read less to get the logic and build mental connections faster.

explaining:
There is a problem with the language used for edits. English, or any LLM output language, contains a lot of fluff baked into the model's training -> lots of words, low meaning. This is not helpful when your goal is to look up personal information as efficiently as possible.

This creates a problem where the wiki gets harder to read after the 2,000th file. <- It's a human problem. You can solve it with Q&A, sure: don't read the ~/wiki, just ask questions, and the LLM goes there and brings it up, giving links or sources at the very end.

I believe there should be 2 versions of each .md file in the ~/wiki. One is the "detailed" or compiled .md output in all of its glory, the source-code .md. The 2nd is the same content, but with the focus on fewer words and more meaning.

Where fast access and fast communication are needed, these ideas might help you over time to develop the gist in your brain just from the ~/wiki and the repeated process of fast Q&A and fast reading of the logic of a .md before going into detail. Your brain should in theory retain more of the answer, so you don't have to ask questions that aren't needed. Also, the LLM is really good at compressing data into as few words as possible while preserving as much meaning as possible.

Hopefully with fewer words you can work more efficiently as you ask the LLM further questions to clarify. But first your brain needs to detect the gap. Avoiding big words when very few words get the job done saves brain power for the words that matter most.

A further abstraction: Q&A where, instead of getting a detailed answer or the option to pick the less wordy answer, you are provided with questions which, if you can answer them yourself, mean you don't need to read at all.

So "a lot of details" --> "fewer words; preserve as much meaning" --> "1 or 2 questions, or a series of questions, which, once answered in your brain, mean there is no need to read".

@payneio

payneio commented Apr 13, 2026

I built https://github.com/payneio/prism last year to provide tooling for LLMs to write wikis. Prism, similarly, handles the fiddly bits of wiki maintenance... mostly through front-matter. I went pretty deep into the structure of knowledge bases because I wanted to allow the LLM to break up large pages, combine pages, deep link, symlink, summarize, tag, etc. etc... and the big one, make a different page the root and have all the navigation/links/urls updated accordingly. Making a new node the root models two common scenarios: 1) as the wiki is growing, realizing that you've evolved a new focus, and 2) being able to grab any page and its n-deep neighbor walk (a sub-wiki) and share it with someone else (or another agent).

When I got that far, though, I realized I was just making a graph DB, and that the wiki is just a view for humans... which will have limited utility as agent fleets scale (we just don't have enough attention to read everything)... so we might as well give the agents their own graph DBs / triple stores / whatever, along with some agentic knowledge-management rules.

Down with the hierarchy! Knowledge wants to be free! πŸ˜†


@hectordww-alt

I wrote a tiny add-on prompt for this pattern focused on taste logs: music, films, books, etc.

The idea is to keep plain markdown logs plus small curator instructions, so an agent can avoid repeats, use misses as negative signal, and make recommendations from actual taste history rather than starting from zero each time.

https://gist.github.com/hectordww-alt/30c3e6af4ec77001f21b8b103e0115ff

@ilya-epifanov

I wrote a couple of tools augmenting LLM-wiki:

  1. https://github.com/ilya-epifanov/llmwiki-tooling — a CLI utility to simplify linting, checking and fixing links, optionally enforcing frontmatter fields, sections in markdown, etc. It's supposed to be used by the agent for consistency and to save some tokens.
  2. https://github.com/ilya-epifanov/wikidesk:
    • a client binary that syncs a copy of wiki/ locally and can talk to the server to initiate a research run
    • a server that spawns a Claude (or any other agent) instance whenever it receives a research request (with adjustable additional prompt)

Both tools are as unopinionated as possible. They should work with any reasonably non-disfigured LLM-wiki setup.

Works great for me!
My use case: claude on DGX Spark (actually an ASUS thingy) is busy designing an ML training pipeline while having access to my ML wiki. A couple of research requests it has sent so far have properly incrementally updated the wiki and pulled in relevant papers.
🎆

@waydelyle

SwarmVault v0.7.30 — now with a first-party Obsidian plugin. Another update from the project that started from this gist.

Five releases since the last post and the big one is the Obsidian integration:

  • First-party Obsidian plugin — @swarmvaultai/obsidian-plugin drives the full CLI from inside Obsidian. Status bar shows vault state + compile freshness, command palette runs init/ingest/compile/lint/watch/serve, "Query from current note" returns answers with page_id → wikilink citations so results link directly to your vault pages. Run Log view streams live stdout/stderr. Currently in Obsidian community marketplace review.
  • Deep Obsidian export — graph export --obsidian now ships .obsidian/types.json for Bases/Dataview property typing, node-type color groups for the graph view, typed link frontmatter for Breadcrumbs/Juggl/ExcaliBrain, graph metrics (degree, bridge score, god-node detection) in frontmatter, cssclasses per page type, and pre-built Dataview dashboards. Canvas export uses clickable file nodes with directional arrows.
  • swarmvault demo — zero-config sample vault walkthrough. Point someone at the repo and they can see what a compiled vault looks like in under a minute.
  • swarmvault diff — shows graph-level changes against the last committed state. See exactly what changed structurally, not just file diffs.
  • Offline graph exports — graph export --html-standalone bundles vis-network inline so exported HTML works with no internet connection.
  • TypeScript path alias resolution — @/components/Button and @utils/format style imports now resolve correctly in the code index via tsconfig.json.

We're heading toward being the default second-brain compiler for people who already live in Obsidian. The wiki Karpathy described in this gist is the output format — SwarmVault automates building and maintaining it.

Try it: npx @swarmvaultai/cli demo — see a working vault in 30 seconds, no config needed.

Repo: https://github.com/swarmclawai/swarmvault

If you use Obsidian, would love early feedback on the plugin.

@giovani-junior-dev

Hey! I just wanted to take a moment to thank you for sharing this project. Claude Wiki is a fantastic idea and the way you've documented and made it accessible is really impressive.

Your content inspired me to create my own custom skill for Claude Code, adapted to my specific workflow and needs. I've been using it heavily on the projects I'm developing here in Brazil, and it has made a huge difference — Claude now has context and memory across sessions, which has completely changed the way I work.

It's great to see the community building on top of Andrej Karpathy's LLM Wiki methodology in such practical and creative ways. Keep up the amazing work!

Thanks again for sharing this with the world. 🙌

https://claude-wiki.madeinvibecoding.com/

@skyllwt

skyllwt commented Apr 14, 2026

We didn't just build a wiki — we plugged it into the entire research pipeline as the central hub that every step revolves around.

The result is ΩmegaWiki: your LLM-Wiki concept extended into a full-lifecycle research platform.

If you find it useful, a ⭐ would mean a lot! PRs, issues, and ideas all welcome — let's build this together.

https://github.com/skyllwt/OmegaWiki

(Screenshot 2026-04-14 08-55-39)

What the wiki drives:
• Ingest papers → structured knowledge base with 8 entity types
• Detect gaps → generate research ideas → design experiments
• Run experiments → verdict → auto-update wiki knowledge
• Write papers → compile LaTeX → respond to reviewers
• 9 relationship types connecting everything (supports, contradicts, tested_by...)

The key idea: the wiki isn't a side product — it's the state machine. Every skill reads from it, writes back to it, and the knowledge compounds over time. Failed experiments stay as anti-repetition memory so you never re-explore dead ends.
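The wiki-as-state-machine idea above can be sketched in a few lines. The entity types, relation names beyond the three quoted in the post, and the FAILED-prefix convention below are illustrative assumptions, not ΩmegaWiki's actual schema:

```python
from dataclasses import dataclass, field

# Subset of relationship types named in the post; a real schema has nine.
RELATION_TYPES = {"supports", "contradicts", "tested_by"}

@dataclass
class WikiPage:
    page_id: str
    entity_type: str                            # e.g. "paper", "idea", "experiment"
    body: str = ""
    links: list = field(default_factory=list)   # (relation, target_page_id) pairs

class Wiki:
    """Toy central hub: every pipeline step reads pages and writes pages back."""
    def __init__(self):
        self.pages = {}

    def upsert(self, page):
        self.pages[page.page_id] = page

    def link(self, src, relation, dst):
        assert relation in RELATION_TYPES
        self.pages[src].links.append((relation, dst))

    def failed_experiments(self):
        # Anti-repetition memory: dead ends stay queryable, so an agent can
        # check here before re-exploring an idea that already failed.
        return [p for p in self.pages.values()
                if p.entity_type == "experiment" and p.body.startswith("FAILED")]

wiki = Wiki()
wiki.upsert(WikiPage("idea-1", "idea", "sparse attention variant"))
wiki.upsert(WikiPage("exp-7", "experiment", "FAILED: no gain over baseline"))
wiki.link("idea-1", "tested_by", "exp-7")
print([p.page_id for p in wiki.failed_experiments()])  # prints ['exp-7']
```

The point of the sketch: the graph itself is the shared state, and "compounding" just means every step appends to it rather than starting from raw sources.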

20 Claude Code skills, fully open-source. Still early-stage but functional end-to-end. We're actively iterating — more model support and features on the way.

@earaizapowerera

earaizapowerera commented Apr 14, 2026 via email

@vanillaflava

I've been working with Obsidian and various LLMs (mostly chats, a little Code) for a while. Filesystem MCP had already steered me in this direction (I noticed I had to write fewer bootstraps, but I generated hundreds of files, and search and linking were painful). When I stumbled on this post (and the deliberate learning-oriented angle it has), I figured, why not, and tried implementing it myself as a pure self-teaching exercise.

I'm glad I did (rather than just picking something off the shelf). Thinking about the pattern and my own pain points, and looking at the other implementations shared here, has really boosted my understanding of what actually matters when working with LLMs. I used Claude and published the skills as installable .skill files: https://github.com/vanillaflava/llm-wiki-claude-skills.

I adapted a few things, like turning ingestion on its head: unsorted scrapheap -> categorized sources. I had already organized my notes into domain-specific hubs, and the wiki pattern loves those and really latches onto them. I added an extra skill to summarize and update what is known at the end of a session, at a pivotal point, or before retiring a chat, and that really lit up my brain. Now I don't need bootstraps anymore: the wiki is the bootstrap, and I can specialize agents by just following the breadcrumbs to their specific domain, without ramming the same huge documents down their throats over and over. It all seems to compound more and more. Token usage is way down compared to last week.
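The "wiki is the bootstrap" trick can be sketched roughly like this. The vault layout, file names, and helper function are hypothetical (this is not the actual .skill implementation): an agent loads one domain hub page and follows its wikilinks, instead of being fed the same huge documents every session.

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # bare [[Page Name]] targets

def bootstrap_context(vault: Path, hub: str, depth: int = 1) -> dict:
    """Load a domain hub page plus the pages it links to, as starting context."""
    seen, frontier, context = set(), [hub], {}
    for _ in range(depth + 1):
        next_frontier = []
        for name in frontier:
            if name in seen:
                continue
            seen.add(name)
            page = vault / f"{name}.md"
            if not page.exists():
                continue              # dangling wikilink: skip, don't fail
            text = page.read_text()
            context[name] = text
            next_frontier += WIKILINK.findall(text)
        frontier = next_frontier
    return context
```

A domain-specialized agent then starts from just that hub's neighborhood; widening `depth` trades tokens for coverage.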

Just here to thank you (and the other posters) for sharing your thoughts and examples, and for leaving this explicitly vague. If I hadn't taken the plunge and tried to tinker with it myself, I would have missed 90% of what makes this so elegant. I'm still in shock at how well this works!

Thank you for writing this up.

@Nemo4110

Nemo4110 commented Apr 14, 2026

@kytmanov

Just shipped v0.2 of LLM Wiki for local Ollama LLMs. Now with a rejection feedback loop: https://github.com/kytmanov/obsidian-llm-wiki-local

@gnusupport

Not today. Maybe someday LLMs will have persistent memory, perfect recall, and flawless integrity. But that day isn't here. Right now, handing your knowledge base to an LLM means accepting contradictions, broken links, privacy leaks, and probabilistic answers to questions that need deterministic ones. I've spent 23 years building Hyperscope — the Dynamic Knowledge Repository: deterministic programs, human in control — and I use LLMs to accelerate my work, not replace my judgment. The LLM is a refresher, not the curator. Keep your hands on the wheel. Full article: https://gnu.support/articles/Hyperscope-vs-LLM-Wiki-Why-PostgreSQL-Beats-Markdown-for-Deterministic-Knowledge-Bases-124138.html

@gnusupport

@kytmanov

Just shipped v0.2 LLM Wiki for local Ollama LLMs. Now with Rejection feedback loop https://github.com/kytmanov/obsidian-llm-wiki-local

Sure, drop markdown notes... 😂😂😂 there is much more to it. Drop images, all multimedia! Images can easily be described by an LLM, given embeddings, and related to other objects. Many notes are images. Imagine staff members (that is what we have): they take pictures of their notes and reports and submit them back to the organization. Think of the future: is something limited to "markdown notes" even manageable? I am talking from 23 years of experience handling large amounts of information. And I certainly use new technologies. But think about time, about the future: how would you work with it then? The Dynamic Knowledge Repository concept by Doug Engelbart was future-proof because of its vision of boosting Collective IQ. https://en.wikipedia.org/?curid=1004008

LLMs are useful, but human work should not be delegated to them, as that would defeat the purpose. See more here: https://dougengelbart.org/content/view/190/

So if I am to follow LLM-Wiki... throw a bunch of markdown notes into my system...

I Am Not Throwing a Bunch of Markdown Notes into My System 🤣🤣🤣🤣

The LLM-Wiki pattern assumes your world is made of Markdown. I was one of the first Markdown users, from its inception, and promoted it as a good replacement for some other systems I was using (if I remember correctly, AsciiDoc and m4). Information today is multimedia, not just text. But a knowledge base limited to Markdown notes in the 21st century... no way 🤣🤣🤣

😂 The LLM-Wiki pattern is essentially LLM training data generation disguised as personal knowledge management. 😂

Think about it:

  • You feed the LLM your sources

  • The LLM writes markdown files

  • Those markdown files become training material for the next session (via the schema file and index)

  • The LLM reads its own previous outputs to answer questions

πŸ‘ The Tale of the Sheep and the LLM-Wiki Saga πŸ‘

The whole system is a loop of LLM → markdown → LLM → markdown 😂😂😂😂😂😂 and a bunch of people running after it like 🐑🐑🐑🐑🐑🐑!!!!!!!

It's not a knowledge repository. It's a self-perpetuating LLM context generator. The wiki exists only to feed the LLM on the next query.

An LLM-Wiki without the LLM is just a bunch of files, without any organization!

A Dynamic Knowledge Repository database without the LLM is still a fully functional, queryable, relational knowledge base with 23 years of data, 245,377 people, 95,211 hyperdocuments, and complete referential integrity. The LLM is optional — a nice interface, not the engine.

Call this LLM-Wiki what it is: LLM training with an expanded website wiki. 😂🤣💀

Go run for it 🐑🐑🐑🐑🐑🐑

People follow the LLM-Wiki pattern not because it's good, but because Karpathy said it. He's an authority at OpenAI, Tesla, everywhere. So people assume: "He must know what he's doing."

But authority is not infallibility. 😈

He cannot beat DKR. Not because he's not smart. Because he's not Engelbart. He didn't spend decades thinking about CODIAK, Open Hyperdocument Systems, and Dynamic Knowledge Repositories. He came up with a clever weekend hack and people are treating it like gospel. 🐑🐑🐑🐑🐑🐑

Sheep follow the shepherd. 🐑🐑🐑

@catalinviciu

I think this is a real problem, and it's not a generic one. For certain jobs you might need a more opinionated method of bookkeeping.
Coincidentally, I've built something similar but for Product Managers, allowing us to keep and maintain product context and use it for downstream purposes, powered by AI agents.
You can find it here, it's free: https://github.com/catalinviciu/product-builder-agent.git

@mauceri

mauceri commented Apr 14, 2026 via email

@Mekopa

Mekopa commented Apr 14, 2026

My experience is that LLMs are able to discover knowledge better with a graph representation layer

beyond just .md, using data files like:
.ics
.vcf
...
and "linking" them with each other, similar to how Obsidian does for .md files

basically a dead-simple wiki of your life. I'm using a graph.json to keep my graph up to date
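A minimal sketch of that idea, with made-up file names and a made-up graph.json schema (this is not Obsidian's format or any real plugin's): treat every supported file as a node and keep explicit links as edges.

```python
import json
from pathlib import Path

# Hypothetical node type per extension; extend with more data formats as needed.
NODE_TYPES = {".md": "note", ".ics": "event", ".vcf": "contact"}

def build_graph(vault: Path, links: list) -> dict:
    """Index supported files as nodes; keep explicit (src, dst) links as edges."""
    nodes = {p.name: NODE_TYPES[p.suffix]
             for p in vault.iterdir() if p.suffix in NODE_TYPES}
    edges = [{"from": a, "to": b} for a, b in links
             if a in nodes and b in nodes]   # drop links to missing files
    return {"nodes": nodes, "edges": edges}

def save_graph(vault: Path, graph: dict) -> None:
    # graph.json lives next to the data, like Obsidian keeps its own metadata.
    (vault / "graph.json").write_text(json.dumps(graph, indent=2))
```

The payoff is that an event (.ics) or a contact (.vcf) becomes a first-class, linkable object instead of something you have to paraphrase into a markdown note first.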
