- AI Cybersecurity After Mythos: The Jagged Frontier
TL;DR: We tested Anthropic Mythos's showcase vulnerabilities on small, cheap, open-weights models. They recovered much of the same analysis. AI cybersecurity capability is very jagged: it doesn't scale smoothly with model size, and the moat is the system into which deep security expertise is built, not the model itself. Mythos validates the approach, but it does not yet settle the question.
- How a File Format Led to a Crossword Scandal - Saul Pwanson
- Joe Edelman on Designing Meaningful Things - The Not Unreasonable Podcast
- Is Anything Worth Maximizing? - Joe Edelman
- Watch What I Do: Programming by Demonstration - Allen Cypher
- Against SQL - Jamie Brandon
- The Shape of Data - Jamie Brandon
- How and why I attribute LLM-derived code - Jamie Tanna
- Wikipedia:Signs of AI writing
- Content
- Undue emphasis on significance, legacy, and broader trends
- Undue emphasis on notability, attribution, and media coverage
- Superficial analyses
- Promotional and advertisement-like language
- Vague attributions and overgeneralization of opinions
- Outline-like conclusions about challenges and future prospects
- Leads treating Wikipedia lists or broad article titles as proper nouns
- Non-existent shortcuts
- Language and grammar
- Style
- Communication intended for the user
- Markup
- Citations
- Miscellaneous
- Signs of human writing
- Ineffective indicators
- Perfect grammar
- Combination of casual and formal registers, or language that sounds both "clinical" and "emotional"
- "Bland" or "robotic" prose
- "Fancy," "academic," or "formal" prose
- Letter-like writing (in isolation)
- Transition words (in isolation)
- Unsourced content
- Bizarre wikitext
- Historical indicators
- Content
- Niklaus Wirth, 1984 ACM Turing Award Recipient - Interview by Elena Trichina
Make things as regular and as well-structured and as simple as they can be without losing sight of their purpose
- Excerpt from "Project Oberon"
While my eyes were glued to the colorful display, and while I was confronted with the evidence of my latest inadequacy, in through the always open door stepped my colleague (JG). He also happened to spend a leave from duties at home at the same laboratory, yet his face did not exactly express happiness, but rather frustration. The chocolate bar in his hand did for him what the coffee cup or the pipe does for others, providing temporary relaxation and distraction. It was not the first time he appeared in this mood, and without words I guessed its cause. And the episode would reoccur many times.
His days were not filled with the great fun of rectangle-pushing; he had an assignment. He was charged with the design of a compiler for the same advanced computer. Therefore, he was forced to deal much more closely, if not intimately, with the underlying software system. Its rather frequent failures had to be understood in his case, for he was programming, whereas I was only using it through an application; in short, I was an end-user! These failures had to be understood not for purposes of correction, but in order to find ways to avoid them. How was the necessary insight to be obtained? I realized at this moment that I had so far avoided this question; I had limited familiarization with this novel system to the bare necessities which sufficed for the task on my mind.
It soon became clear that a study of the system was nearly impossible. Its dimensions were simply awesome, and documentation accordingly sparse. Answers to questions that were momentarily pressing could best be obtained by interviewing the system's designers, who all were in-house. In doing so, we made the shocking discovery that often we could not understand their language. Explanations were fraught with jargon and references to other parts of the system which had remained equally enigmatic to us.
- Generative AI exists because of the transformer - Financial Times
- Translating non-trivial codebases with Claude - Daniel Janus
- Project Oberon - Niklaus Wirth and Jürg Gutknecht
Comments about plans to prepare a second edition to this book varied widely. Some felt that this book is outdated, that nobody is interested in a system of this kind any longer. "Why bother?" Others felt that there is an urgent need for this type of text, which explains an entire system in detail rather than merely proposing strategies and approaches. "By all means!"
Very much has changed in these last 30 years. But even without this change, it would be preposterous to propose and construct a system competing with existing, worldwide "standards". Indeed, very few people would be interested in using it. The community at large seems to be stuck with these gigantic software systems, and helpless against their complexity, their peculiarities, and their occasional unreliability.
But surely new systems will emerge, perhaps for different, limited purposes, allowing for smaller systems. One wonders where their designers will study and learn their trade. There is little technical literature, and my conclusion is that understanding is generally gained by doing, that is, "on the job". However, this is a tedious and suboptimal way to learn. Whereas sciences are governed by principles and laws to be learned and understood, in engineering experience and practice are indispensable. Does Computer Science teach laws that hold for (almost) ever? More than any other field of engineering, it would be predestined to be based on rigorous mathematical principles. Yet, its core hardly is. Instead, one must rely on experience, that is, on studying sound examples. The main purpose of and the driving force behind this project is to provide a single book that serves as an example of a system that exists, is in actual use, and is explained in all detail. This task drove home the insight that it is hard to design a powerful and reliable system, but even much harder to make it so simple and clear that it can be studied and fully understood. Above everything else, it requires a stern concentration on what is essential, and the will to leave out the rest, all the popular "bells and whistles".
Recently, a growing number of people has become interested in designing new, smaller systems. The vast complexity of popular operating systems makes them not only obscure, but also provides opportunities for "back doors". They allow external agents to introduce spies and devils unnoticed by the user, making the system attackable and corruptible. The only safe remedy is to build a safe system anew from scratch.
Turning now to a practical aspect: The largest chapter of the 1992 edition of this book dealt with the compiler translating Oberon programs into code for the NS32032 processor. This processor is now neither available nor is its architecture recommendable. Instead of writing a new compiler for some other commercially available architecture, I decided to design my own in order to extend the desire for simplicity and regularity to the hardware. The ultimate benefit of this decision is not only that the software, but also the hardware of the Oberon System is described completely and rigorously. The processor is called RISC. The hardware modules are described exclusively in the language Verilog. The decision for a new processor was expedited by the possibility to implement it, that is, to make it concrete and available. This is due to the advent of programmable gate arrays (FPGA), allowing to turn a design into a real, functioning processor on a single chip. As a result, the described system can be realized using a low-cost development board. This board, Xilinx Spartan-3 by Digilent, features a 1-MByte static memory, which easily accommodates the entire Oberon System, including its compiler. It is shown, together with a display, a keyboard and a mouse in the photo below. The board is visible in the lower, right corner.
The decision to develop our own processor required that the chapters on the compiler and the linking loader had to be completely rewritten. However, it also provided the welcome chance to improve their clarity considerably. The new processor indeed allowed to simplify and straighten out the entire compiler.
- A Plea for Lean Software - Niklaus Wirth
- AB29.3.2 ALGOL Colloquium - Closing Word - Niklaus Wirth in 1968 (Transcription by dcreager)
"So why have you chosen to teach PL/1?", I asked. "I used to teach Algol, where things weren't ideal, but a lot better in those respects. But pressure was mounting to teach a language which the students could readily use after leaving the haven of school. Employers don't ask "do you know the principles of programming?", but rather "do you speak FORTRAN?". In order to avoid making this step backwards, I chose PL/1, which first seemed to be a satisfactory compromise."
...
What hampered progress even more, was the fact that a goal had never been specified with sufficient clarity.
- Snakes...why did it have to be snakes? - Indiana Jones in 1981
- Python! - Matthias Felleisen in 2024
As of Fall 2025, Northeastern’s College of Computer Science will replace the existing curriculum with a set of new courses, starting with Python in the first year. ... I now understand. But we really need to teach Python and loops and assignment statements as early as possible.
Why?
Because everyone else does it, and it’s what our teaching faculty wants to teach.
- Extract from Chapter 3 of "The Dawn of Everything"
Perhaps the real question here is what it means to be a ‘self-conscious political actor’. Philosophers tend to define human consciousness in terms of self-awareness; neuroscientists, on the other hand, tell us we spend the overwhelming majority of our time effectively on autopilot, working out habitual forms of behaviour without any sort of conscious reflection. When we are capable of self-awareness, it’s usually for very brief periods of time: the ‘window of consciousness’, during which we can hold a thought or work out a problem, tends to be open on average for roughly seven seconds. What neuroscientists (and it must be said, most contemporary philosophers) almost never notice, however, is that the great exception to this is when we’re talking to someone else. In conversation, we can hold thoughts and reflect on problems sometimes for hours on end. This is of course why so often, even if we’re trying to figure something out by ourselves, we imagine arguing with or explaining it to someone else. Human thought is inherently dialogic. Ancient philosophers tended to be keenly aware of all this: that’s why, whether they were in China, India or Greece, they tended to write their books in the form of dialogues. Humans were only fully self-conscious when arguing with one another, trying to sway each other’s views, or working out a common problem. True individual self-consciousness, meanwhile, was imagined as something that a few wise sages could perhaps achieve through long study, exercise, discipline and meditation.
- An Engine for an Editor - matklad
- Bug hunting in the Janet language interpreter - Ricardo Silva
- Attacking UNIX Systems via CUPS, Part I - Simone Margaritelli
Entirely personal recommendation, take it or leave it: I’ve seen and attacked enough of this codebase to remove any CUPS service, binary and library from any of my systems and never again use a UNIX system to print. I’m also removing every zeroconf / avahi / bonjour listener. You might consider doing the same.
- The Grug Brained Developer - A layman's guide to thinking like the self-aware smol brained - Carson Gross
- The Grug Brained Data Scientist - Paul Simmering
- grug's guide to sound - Petrus Theron
- Source Attribution in Retrieval-Augmented Generation - Ikhtiyor Nematov, Tarik Kalai, Elizaveta Kuzmenko, Gabriele Fugagnoli, Dimitris Sacharidis, Katja Hose, Tomer Sagi
- Source-Aware Training Enables Knowledge Attribution in Language Models - Muhammad Khalifa, David Wadden, Emma Strubell, Honglak Lee, Lu Wang ~Lu_Wang9 , Iz Beltagy, Hao Peng
- Let's #TalkConcurrency with Sir Tony Hoare - C. A. R. Hoare
- Quick Median - Mike James
- Quickselect
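The two links above concern the same idea: Hoare's selection algorithm (quickselect) finds the k-th smallest element, and hence the median, in expected linear time without fully sorting. A minimal sketch of the idea, my own illustration rather than code from either link:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs, O(n) expected time."""
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]   # elements smaller than the pivot
    eq = [x for x in xs if x == pivot]  # the pivot (and any duplicates)
    hi = [x for x in xs if x > pivot]   # elements larger than the pivot
    if k < len(lo):
        return quickselect(lo, k)       # answer lies in the smaller partition
    if k < len(lo) + len(eq):
        return pivot                    # pivot itself is the k-th smallest
    return quickselect(hi, k - len(lo) - len(eq))

def median(xs):
    """Median via quickselect: the middle element for odd-length input."""
    return quickselect(xs, len(xs) // 2)
```

Unlike quicksort, only one partition is recursed into, which is what brings the expected cost down from O(n log n) to O(n).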
- Null References: The Billion Dollar Mistake - C. A. R. Hoare
- CS 615 System Administration - Jan Schaumann
- CS 631 Advanced Programming in the UNIX Environment - Jan Schaumann
- Writing Consistent Tools - Jan Schaumann
- Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence - Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky
- I BUILT A NEURAL NETWORK IN MINECRAFT | Analog Redstone Network w/ Backprop & Optimizer (NO MODS) - Yannic Kilcher
- Tony Hoare (1934-2026) - Jim Miles
One final note I would like to share from these meetings with Tony is perhaps the most intriguing of what he said, but also the one he delivered with the greatest outright confidence. In a discussion about the developments of computers in the future - whether we are reaching limits of Moore's Law, whether Quantum Computers will be required to reinvigorate progress, and other rather shallow and obvious hardware talking points raised by me in an effort to spark Tony's interest - he said 'Well, of course, nothing we have even comes close to what the government has access to. They will always be years ahead of what you can imagine'. When pressed on this, in particular whether he believed such technology to be on the scale of solving the large prime factorisation that the world's cryptographic protocols are based on, he was cagey and shrugged enigmatically. One wonders what he had seen, or perhaps he was engaging in a bit of knowing trolling; Tony had a fantastic sense of humour and was certainly capable of leading me down the garden path with irony and satire before I realised a joke was being made.
- Can Small Language Models Use What They Retrieve? An Empirical Study of Retrieval Utilization Across Model Scale - Sanchit Pandey
- Can Small Language Models With Retrieval-Augmented Generation Replace Large Language Models When Learning Computer Science? - Suqing Liu, Zezhu Yu, Feiran Huang, Yousef Bulbulia, Andreas Bergen, Michael Liut
- Quicksort inventor Tony Hoare reaches the base case at 92
- The Emperor's Old Clothes - C. A. R. Hoare
- Software Design: A Parable - C. A. R. Hoare
- Programming as Theory Building - Peter Naur
- Going beyond open data – increasing transparency and trust in language models with OLMoTrace - Jiacheng Liu et al.
- The Man Who Killed Google Search - Ed Zitron
- Requiem for Raghavan - Ed Zitron
- Legally Significant Changes - gnu.org
- Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager - Adnan Khan
Cline is an open-source AI coding tool that integrates with developer IDEs such as VSCode and its many forks. Users can download Cline through the VS Code Marketplace or OpenVSX. Since Cline is an open-source project, the team uses GitHub for development. On December 21st, 2025, Cline maintainers added an AI agent to triage issues created on the repository. This AI agent ran within a GitHub Actions workflow and ran with broad privileges. You might be able to guess where this is heading…
Between Dec 21st, 2025 and Feb 9th, 2026 a prompt injection vulnerability in Cline’s (now removed) Claude Issue Triage workflow allowed any attacker with a GitHub account to compromise production Cline releases on both the Visual Studio Code Marketplace and OpenVSX and publish malware to millions of developers!
- The Real World of Technology - Ursula Franklin
- Audio @ archive.org
- Digitization of Audio Announcement - Ed Summers
- Democratizing AI Compute - Chris Lattner - seems like nice background - could be better with the terminology but hard to do when someone is trying hard to market their stuff :)
Did the court find that AI training constitutes “reproduction” under German copyright law?
Yes. Following Article 2 InfoSoc Directive, the court held that a reproduction exists “in any form and by any means.” Even a fixation through numerical probability values qualifies, as long as the work can later be perceived through technical means. The court considered the model parameters to embody the protected expression.
...
Did the court consider any other exemptions or implied consents by the authors?
No. The court stated that training AI models is not an ordinary or expected use of a work to which authors have implicitly consented. The acts were therefore unlicensed. Furthermore, the court found that the use was not justified by quotation, parody or similar limitations to copyright.
Who did the court find to be responsible for the AI outputs?
The court determined that responsibility lies with OpenAI. The company selected the training data, built and operated the system, and determined its architecture. User prompts merely trigger the model’s internal processes and do not create independent liability.
- LLMs and plagiarism: a case study - lcamtuf
- Technical Issues of Separation in Function Cells and Value Cells - Richard P. Gabriel, Kent M. Pitman
- I BUILT A FULLY AUTOMATIC MANSPLAINER - Yannic Kilcher
- Introduction to Small Language Models: The Complete Guide for 2026 - Vinod Chugani - "complete"? :P
- Anthropic Study: AI Coding Assistance Reduces Developer Skill Mastery by 17% - Steef-Jan Wiggers
- Selected Talks - Gregor Kiczales
- Why Black Boxes Are So Hard To Reuse - Gregor Kiczales
- Data Warehousing: Aggregating Data for Analysis - mentions "The Data Warehouse Toolkit"
- Pretty-Printing, Converting List to Linear Structure - Ira Goldstein
- Sturgeon's Law
"ninety percent of everything is crap"
- Money creation in the modern economy - Michael McLeay, Amar Radia and Ryland Thomas of the Bank’s Monetary Analysis Directorate
- Money creation in the modern economy - Quarterly Bulletin - Bank of England
- Introducing Gloat and Glojure - Ingy
- Piledriving the GenAI Grift with Nikhil Suresh - Last Week in AWS
- Speech in Acceptance of the National Book Foundation Medal for Distinguished Contribution to American Letters - Ursula K. Le Guin
To the givers of this beautiful reward, my thanks, from the heart. My family, my agents, my editors, know that my being here is their doing as well as my own, and that the beautiful reward is theirs as much as mine. And I rejoice in accepting it for, and sharing it with, all the writers who’ve been excluded from literature for so long — my fellow authors of fantasy and science fiction, writers of the imagination, who for fifty years have watched the beautiful rewards go to the so-called realists.
Hard times are coming, when we’ll be wanting the voices of writers who can see alternatives to how we live now, can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine real grounds for hope. We’ll need writers who can remember freedom — poets, visionaries — realists of a larger reality.
- I Will Fucking Piledrive You If You Mention AI Again - Nikhil Suresh
- I Will Fucking Dropkick You If You Use That Spreadsheet - Nikhil Suresh
- Contra Ptacek's Terrible Article On AI - Nikhil Suresh
- The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis - Jin Wang, Wenxiang Fan - kind of lame that specifically ChatGPT seems to be "promoted" (see abstract)...suspicious cat is suspicious
- Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations - Darwin, Diyenti Rusdin, Nur Mukminatien, Nunung Suryati, Ekaning D. Laksmi, Marzuki
- To Think or Not to Think: The Impact of AI on Critical-Thinking Skills - Christine Anne Royce, Valerie Bennett
- Promote Active Engagement With Scientific Data
Rather than letting AI generate answers, ask students to interpret AI-generated data themselves. Also within this area, ask students to engage in hypothesis testing through additional research and evidence-based reasoning and determine if they see emergent patterns. If planned properly and modeled for the students so that they can practice this type of use, AI becomes not just a source of information, but also a means of engaging with scientific inquiry on a deeper level. An example would be taking a typical lab or assignment and having students find data to support or refute an argument about what they found. This data gathering could be related to water safety, diseases, or pathogen rates where they live.
- Use AI to Facilitate Scientific Argumentation
Encourage students to use AI as a tool to gather evidence for debates or scientific arguments. Ensure that students then “fact check” the information that was provided for accuracy. A second strategy would be to provide AI with information—i.e., a map of a path that a hurricane is on—and then present two explanations for the path, one that is meteorologically accurate (always double-check yourself) and one that is plausible, but inaccurate. Provide these explanations to the students and ask them to determine which one is on target and why.
- Require the Use of Claim, Evidence, and Reasoning (CER)
Phenomenon-based learning places students in the role of scientists, encouraging them to ask questions, form hypotheses, and conduct experiments. AI can support this by offering dynamic simulations and interactive environments where students can test their ideas or even provide potential explanations for a phenomenon. Students should still be asked to critically evaluate any information that is AI-generated and explain their own reasoning for an answer.
- Frame AI as a Resource, Not a Shortcut
By framing AI as a resource for exploration rather than a shortcut to solutions, teachers can help students maintain an active role in their learning process. Instead of using AI to provide direct answers, educators can encourage students to use AI as a discussion partner. For example, students can use AI-generated data as a starting point for class debates or group projects. This promotes collaborative problem solving and requires students to evaluate, question, and interpret the information AI provides. Pose thought-provoking questions to students regarding the data, such as “Are there any biases in the data? What are the sources of this data?”
- The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers - Hao-Ping (Hank) Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, Nicholas Wilson
- The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review - Chunpeng Zhai, Santoso Wibowo, Lily D. Li
- AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking - Michael Gerlich
- Excerpt from "The Dawn of Everything"
In this book we will not only be presenting a new history of humankind, but inviting the reader into a new science of history, one that restores our ancestors to their full humanity. Rather than asking how we ended up unequal, we will start by asking how it was that ‘inequality’ became such an issue to begin with, then gradually build up an alternative narrative that corresponds more closely to our current state of knowledge. If humans did not spend 95 per cent of their evolutionary past in tiny bands of hunter- gatherers, what were they doing all that time? If agriculture, and cities, did not mean a plunge into hierarchy and domination, then what did they imply? What was really happening in those periods we usually see as marking the emergence of ‘the state’? The answers are often unexpected, and suggest that the course of human history may be less set in stone, and more full of playful possibilities, than we tend to assume.
- unsafe isn't a keyword in C because everything is unsafe - Will Lillis
- LLME - Michael Fogus
Moreover, as a Socratic partner, LLMs are incredibly frustrating in their inability to move a “discussion” forward. Indeed, the inability to leverage (or even to identify) necessary tension highlights a huge problem in the emergent sycophantic behavior of these tools. A good Socratic partner creates pressure to move toward truth and shared understanding, but LLMs are too sycophantic, lack an awareness of useful tension, cannot often identify contradiction, and lack an ability to adhere to the trajectory of a conversation. These traits are poison to my software design process.
- CLJ Screening - Alex Miller
- Philosophy, Bullshit, and Peer Review - Neil Levy
- Excerpt from "The Dawn of Everything"
If, as many are suggesting, our species’ future now hinges on our capacity to create something different (say, a system in which wealth cannot be freely transformed into power, or where some people are not told their needs are unimportant, or that their lives have no intrinsic worth), then what ultimately matters is whether we can rediscover the freedoms that make us human in the first place. As long ago as 1936, the prehistorian V. Gordon Childe wrote a book called Man Makes Himself. Apart from the sexist language, this is the spirit we wish to invoke. We are projects of collective self-creation. What if we approached human history that way? What if we treat people, from the beginning, as imaginative, intelligent, playful creatures who deserve to be understood as such? What if, instead of telling a story about how our species fell from some idyllic state of equality, we ask how we came to be trapped in such tight conceptual shackles that we can no longer even imagine the possibility of reinventing ourselves?
- ChatGPT is bullshit - Michael Townsen Hicks, James Humphries, Joe Slater
Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.
Calling chatbot inaccuracies ‘hallucinations’ feeds in to overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.
- The GenAI Divide - State of AI in Business 2025 (via archive.org) - MIT NANDA
Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.
...
The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.
- The Many Flavors of Ignore Files - Andrew Nesbit
Every tool wants to be git until it has to implement git’s edge cases.
- Excerpt from "The Dawn of Everything"
Nonetheless, on those occasions when people do reflect on the lessons of prehistory, they almost invariably come back to questions of this kind. We are all familiar with the Christian answer: people once lived in a state of innocence, yet were tainted by original sin. We desired to be godlike and have been punished for it; now we live in a fallen state while hoping for future redemption. Today, the popular version of this story is typically some updated variation on Jean-Jacques Rousseau’s Discourse on the Origin and the Foundation of Inequality Among Mankind, which he wrote in 1754. Once upon a time, the story goes, we were hunter-gatherers, living in a prolonged state of childlike innocence, in tiny bands. These bands were egalitarian; they could be for the very reason that they were so small. It was only after the ‘Agricultural Revolution’, and then still more the rise of cities, that this happy condition came to an end, ushering in ‘civilization’ and ‘the state’ – which also meant the appearance of written literature, science and philosophy, but at the same time, almost everything bad in human life: patriarchy, standing armies, mass executions and annoying bureaucrats demanding that we spend much of our lives filling in forms.
Of course, this is a very crude simplification, but it really does seem to be the foundational story that rises to the surface whenever anyone, from industrial psychologists to revolutionary theorists, says something like ‘but of course human beings spent most of their evolutionary history living in groups of ten or twenty people,’ or ‘agriculture was perhaps humanity’s worst mistake.’ And as we’ll see, many popular writers make the argument quite explicitly. The problem is that anyone seeking an alternative to this rather depressing view of history will quickly find that the only one on offer is actually even worse: if not Rousseau, then Thomas Hobbes.
...
As the reader can probably detect from our tone, we don’t much like the choice between these two alternatives. Our objections can be classified into three broad categories. As accounts of the general course of human history, they:
- simply aren’t true;
- have dire political implications;
- make the past needlessly dull.
This book is an attempt to begin to tell another, more hopeful and more interesting story; one which, at the same time, takes better account of what the last few decades of research have taught us. Partly, this is a matter of bringing together evidence that has accumulated in archaeology, anthropology and kindred disciplines; evidence that points towards a completely new account of how human societies developed over roughly the last 30,000 years. Almost all of this research goes against the familiar narrative, but too often the most remarkable discoveries remain confined to the work of specialists, or have to be teased out by reading between the lines of scientific publications.
- How Vibe Coding is Killing Open Source - Maya Posch
- Vibe Coding Kills Open Source - Miklós Koren, Gábor Békés, Julian Hinz, Aaron Lohmann
- The open source design stack - Scott Riley
- Disassembling a Cortex-M raw binary file with Ghidra - Niall Cooling
- Understanding the C runtime memory model - Niall Cooling
- Introduction to Janet RPC - Joe Creager
- The Law of Leaky Abstractions - Joel Spolsky
- ClojureWasmBeta - chaploud
- Personal AI Agents like OpenClaw Are a Security Nightmare - Amy Chang, Vineeth Sai Narajala
- Designing Organizations for an Information-Rich World - Herbert A. Simon
- High tech is watching you - John Laidler (interview with Shoshana Zuboff)
- Backseat Software - Mike Swanson
- What not where: Why a blue sky OS? - Peter Alvaro
- The Search for Meaning Through Collaboration and Code - Timothy Pratley
- Defeating Bowser with A* Search - Adrian Smith
- Making Tools Developers Actually Use - Michiel Borkent
- From Scripts to Buy-In: How Small Clojure Wins Create Big Opportunities - Burin Choomnuan
- Memory Safety Is ... - matklad
- make.ts - matklad
- Testing Opus 4.5 For C Programming - Daniel Hooper
- Demo: Base language, compile-time execution - Jonathan Blow
- Insurers to Pull Back From AI Liability Coverage - Datamation
- HiTeX Press: A spam factory for AI-generated books - Laurent Le Brun
- One week of bugs - Dan Luu
- This Is How Science Happens - Hillel Wayne
- Serving webapps from your REPL - Timothy Pratley
- Design in Practice slides - Rich Hickey
- My approach to running a link blog - Simon Willison
- Xerox scanners/photocopiers randomly alter numbers in scanned documents - David Kriesel
- Lies, damned lies and scans - David Kriesel
- Something to read in Quarantine: Essays 2018 to 2020 - de Pony Sum
- Eloquent: Improving Text Editing on Mobile - Scott Jenson
- Classic HCI Demos - Jack Rusher
- Are we stuck with the same Desktop UX forever? - Scott Jenson
- How Video Games Inspire Great UX - Scott Jenson

