
  • Claude Code memory docs: code.claude.com/docs/en/memory
  • Auto-dream system prompt: github.com/Piebald-AI/claude-code-system-prompts
  • Sleep-time Compute paper: arxiv.org/abs/2504.13171
  • MemGPT paper: arxiv.org/abs/2310.08560
  • Feature analysis: dev.to/akari_iku
  • Mechanic deep-dive: claudefa.st/blog/guide/mechanics/auto-dream

Technical Commentary on Memory for All: SAGE โ€” Spatial Associative Geometric Embeddings

Paper reviewed: Likov, I. (March 2026). Memory for All: SAGE โ€” Spatial Associative Geometric Embeddings โ€” A Weight-Free Geometric Memory Architecture with Hippocampal-Inspired Consolidation.


1. What the paper is actually proposing

Read the full paper. Stripped of rhetoric, the system is this:
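For a concrete picture of what "weight-free geometric memory" can mean, here is a minimal sketch, assuming memories are stored as raw embedding vectors, recall is k-nearest-neighbor search in that space, and consolidation merges near-duplicate traces. The class, parameters, and merge rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

class GeometricMemory:
    """Hypothetical weight-free memory: stores raw embedding vectors,
    retrieves by cosine similarity, never updates any model weights."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors: list[np.ndarray] = []  # stored embeddings
        self.payloads: list[str] = []        # associated content

    def write(self, vec: np.ndarray, payload: str) -> None:
        # Normalize so cosine similarity reduces to a dot product.
        self.vectors.append(vec / np.linalg.norm(vec))
        self.payloads.append(payload)

    def read(self, query: np.ndarray, k: int = 3) -> list[str]:
        # Associative recall: the k stored vectors nearest the query.
        q = query / np.linalg.norm(query)
        sims = np.stack(self.vectors) @ q
        top = np.argsort(-sims)[:k]
        return [self.payloads[i] for i in top]

    def consolidate(self, radius: float = 0.95) -> None:
        # "Hippocampal-inspired" pass (assumed rule): merge memories whose
        # cosine similarity exceeds `radius` into one averaged trace.
        kept_vecs: list[np.ndarray] = []
        kept_payloads: list[str] = []
        for v, p in zip(self.vectors, self.payloads):
            for i, kv in enumerate(kept_vecs):
                if float(kv @ v) > radius:
                    merged = kv + v
                    kept_vecs[i] = merged / np.linalg.norm(merged)
                    break
            else:
                kept_vecs.append(v)
                kept_payloads.append(p)
        self.vectors, self.payloads = kept_vecs, kept_payloads

# Usage sketch: two near-duplicate writes collapse into one trace.
mem = GeometricMemory(dim=4)
mem.write(np.array([1.0, 0.0, 0.0, 0.0]), "fact A")
mem.write(np.array([0.99, 0.01, 0.0, 0.0]), "fact A, restated")
mem.consolidate()
print(mem.read(np.array([1.0, 0.0, 0.0, 0.0]), k=1))  # -> ['fact A']
```

"Weight-free" here means nothing is trained: writing appends a vector and reading is pure geometry, which is what makes the hippocampal analogy (fast episodic storage, slow offline consolidation) natural.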

Seven Papers Converging on the Same Theory of Transformer Internals

Seven independent research programs โ€” biology, statistical physics, neuroscience, cognitive science, pure mathematics, dynamical systems โ€” all converging on the same structural description of how transformers compute. Different formalisms. Same underlying object.

Full isomorphism analysis in the thread. Here are the sources.


1. On the Biology of a Large Language Model

Lindsey, Gurnee, Ameisen, Chen, Pearce, Turner, Citro et al. โ€” Anthropic (2025)


Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model", and maybe that it "solved natural language understanding". Here is a short personal perspective on these (and similar) models, and where we stand with respect to language understanding.

Intro

Around 2014-2017, during the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture built around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We

Keybase proof

I hereby claim:

  • I am sakeeb91 on github.
  • I am sakeeb (https://keybase.io/sakeeb) on keybase.
  • I have a public key ASBTEcZrdWbjPCSP4vt97NpkT3z4G4lT33S0_RZkQhYB0wo

To claim this, I am signing this object:
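One way to check a claim like this, as a sketch: Keybase publishes each user's identity proofs behind a public lookup endpoint. The URL and the response fields below (`them`, `proofs_summary`) are my assumptions about that API, worth confirming against keybase.io before relying on them.

```python
import json
import urllib.request

# Hypothetical verification helper: look up the claimed Keybase user and
# list their published identity proofs. Endpoint and response fields are
# assumptions based on Keybase's public lookup API.
URL = "https://keybase.io/_/api/1.0/user/lookup.json?usernames=sakeeb"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

user = data["them"][0]
for proof in user["proofs_summary"]["all"]:
    print(proof["proof_type"], proof["nametag"], proof["human_url"])
```

The Keybase CLI performs the same check locally with `keybase id sakeeb`, which verifies each proof's signature rather than trusting the server's summary.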