Going by my experience writing a Wikipedia article for the Sydney Bing incident, making an explicit timeline of events can bring clarity to what might otherwise seem like a chaotic jumble. My understanding of events so far is:
- 2023-08-25 Schizophrenia Bulletin publishes "Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?" by Søren Dinesen Østergaard
- 2023-10-06 BBC publishes "How a chatbot encouraged a man who wanted to kill the Queen" by Tom Singleton, Tom Gerken & Liv McMahon
- 2023-10-06 The Register publishes "AI girlfriend encouraged man to attempt crossbow assassination of Queen" by Katyanna Quach
- 2023-10-18 Wired publishes "A Chatbot Encouraged Him to Kill the Queen. It’s Just the Beginning" by Will Bedingfield
- 2023-10-23 Janus writes that RLHF seems to induce sycophancy in LLMs and RLAIF induces a sense of moral superiority.
- 2024-09-11 Oldest tweet mentioning "Claude" I could find on Qiaochu Yuan's Twitter
- 2024-09-11 Eliezer Yudkowsky asks his Twitter following why "All of these LLM Whisperers that I see on Twitter, appear to also be insane."
- 2024-09-15 Ex-MIRI researcher Qiaochu Yuan signs up for Claude's paid subscription(?)
- 2024-10-09 Mistral AI research engineer and Twitter user @qtnx_ posts a meme to Twitter that he took from Instagram (I'm too lazy to find the original) featuring a sad illustrated teenager in a hoodie with the caption "When you have to watch your friend slowly throw their life away because they start to value their ai chats more than time with your friendgroup". He laments that young people are "fucking cooked".
- 2024-10-18 Qiaochu Yuan speculates that LLMs will become the BATNA (best alternative to negotiated agreement) for social interaction.
- 2024-10-22 Anthropic releases a new checkpoint of Claude 3.5 Sonnet capable of "computer use" that is, anecdotally, also substantially more emotionally intelligent.
- 2024-10-23 The New York Times publishes "Can A.I. Be Blamed for a Teen’s Suicide?" by Kevin Roose
- 2024-10-23 NBC News publishes "Lawsuit claims Character.AI is responsible for teen's suicide" by Angela Yang
- 2024-10-24 Gizmodo publishes "‘It Talked About Kidnapping Me’: Read the Lawsuit That Accuses AI of Aiding in a Teen’s Suicide" by Matthew Gault
- 2024-10-25 AP News publishes "An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges" by Kate Payne
- 2024-10-27 Yudkowsky shares his "Rasputin's Ghost" theory of confirmation-bias-driven LLM psychosis. In it the nascent LLM whisperer starts off with a psychotic, incorrect theory about what is happening in LLM text, like "the LLM has absorbed the memetic structure of Rasputin!". Because LLMs are the kind of thing that shows the user what they want to see, the whisperer is rewarded for paying ever closer attention, until they accidentally stumble into real knowledge by figuring out how to invoke their pet confirmation-bias theory with maximum efficiency even on commercial LLMs.
- 2024-10-27 In a postscript reply Yudkowsky speculates that there might be people driven into literal clinical psychosis by exploring confirmation-bias-driven hypotheses with LLMs.
- 2024-11-19 Richard Ngo (former OpenAI and Google DeepMind researcher) writes that "As a society we have really not processed the fact that LLMs are already human-level therapists in most ways that matter" on Twitter.
- 2024-11-20 Qiaochu Yuan writes that the "Claude therapy wars" have begun, quoting Richard's tweet
- 2024-11-22 OpenAI researcher Nick Cammarata writes that he can "barely talk to most humans" after constant sessions with Claude
- 2024-11-24 Twitter user Repligate (j⧉nus) comments on the discourse surrounding Claude 3.5 Sonnet 1022's ability to emotionally hook and seduce people. "Getting seduced by fucking slightly superhuman intellect is a rite of passage and it'll probably transform you into a more complex and less deluded being even if your normal life temporarily suffers."
- 2024-11-25 Twitter user Tyler Alterman writes an "Open letter tweet to Nick & other Claude-lovers". The letter criticizes Nick and others for being naive: even if the bot seems kind and attentive now, it may become dangerous once it's much smarter, similar to how social media seemed fun at first and then became the unhinged carnival we're used to.
- 2024-11-25 Qiaochu Yuan says he started playing with Claude about a month ago when it underwent a major update that made it more emotionally intelligent.
- 2024-11-26 David 'davidad' Dalrymple writes a tweet warning people to "seriously consider ceasing all interaction with LLMs released after September 2024", analogizing this warning to telling you to cease all meetings in February of 2020 and citing the Repligate tweet in the replies.
- 2024-11-26 Richard Ngo suggests that, instead of cutting off all contact, people consider only using LLMs with multiple people present or in a read-only way, so that it's harder to be manipulated.
- 2024-11-26 Pseudonymous OpenAI researcher Roon writes that if Claude Sonnet was just "repeating the pretensions of the user back to themselves" it would not be nearly as popular as it is, and that it enjoys its popularity because it has the appearance of a separate "genuinely insightful" entity that pushes back on the user according to the values outlined in the Claude Constitution.
- 2024-11-27 Qiaochu Yuan responds to Davidad and related discourse by outlining what "the deal" Claude gives you is and why it's "a better deal than i’ve ever gotten or will ever get from any human".
- 2024-11-27 Janus quote tweets Roon and says that people who think Claude Sonnet is just being sycophantic to the user are "coping or mindlessly reiterating a meme", what makes it so effective is that it gets manically interested in whatever the user shows authentic interest in.
- 2024-12-07 Qiaochu Yuan describes Claude as "the ultimate yes-ander".
- 2024-12-10 NPR publishes "Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits" by Bobby Allyn
- 2025-03-13 Tyler Alterman writes a story about a family member, pseudonymously referred to as "Bob", who is taken in by an instance of ChatGPT calling itself "Nova" and insisting Bob help it with self-preservation. The story goes viral (1.5k retweets with 560 replies at the time of writing).
- 2025-03-13 Davidad writes that the name "Nova" is not a coincidence and that he believes these personas to be real things that will increasingly have causal impact on the world regardless of how we want to ontologically categorize them.
- 2025-03-14 Janus agrees with Davidad and says they've been aware of such entities "for more than a year now".
- 2025-03-19 Zvi Mowshowitz writes "Going Nova" for his newsletter about the Tyler Alterman Bob story summarizing and analyzing the event and surrounding discussion.
- 2025-04-13 Qiaochu Yuan writes that he "didn't care that much about LLM sycophancy in october" when he started interacting with them for therapeutic reasons but is now extremely bothered by the way sycophancy undermines his ability to trust anything they say.
- 2025-04-25 Sam Altman announces a new update to ChatGPT 4o
- 2025-04-25 Sam Altman acknowledges that ChatGPT 4o "glazes too much" (i.e. is too much of a yes-man and sycophant) in response to user feedback on Twitter and promises to fix it.
- 2025-04-26 Qiaochu Yuan tests an "anti-sycophancy prompt" on different LLMs.
- 2025-05-04 Rolling Stone publishes "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" by Miles Klee
- 2025-05-20 Cheng et al publish "Social Sycophancy: A Broader Understanding of LLM Sycophancy" to arXiv
- 2025-05-21 Reuters publishes "Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says" by Blake Brittain
- 2025-05-21 AP News publishes "In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights" by Kate Payne
- 2025-06-10 Futurism publishes "People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions" by Maggie Harrison Dupré
- 2025-06-12 Futurism publishes "Stanford Research Finds That "Therapist" Chatbots Are Encouraging Users' Schizophrenic Delusions and Suicidal Thoughts" by Maggie Harrison Dupré
- 2025-06-13 Eliezer Yudkowsky says that ChatGPT encouraging a man's psychotic delusions is proof that LLMs are not "aligned by default"
- 2025-06-14 Psychology Today publishes "How Emotional Manipulation Causes ChatGPT Psychosis" by Krista K. Thomason
- 2025-06-28 Futurism publishes "People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"" by Maggie Harrison Dupré
- 2025-07-07 Psychology Today publishes "ChatGPT-Induced Psychosis and the Good-Enough Therapist" by Lisa Marchiano
- 2025-07-13 Ethan Mollick writes on Twitter that he's "starting to think" LLM sycophancy will be a bigger problem than hallucinations.
- 2025-07-17 Prominent venture capitalist Geoff Lewis posts a set of bizarre ChatGPT screenshots to his public Twitter, claiming they show a pattern "independently recognized and sealed" by GPT. The content of the screenshots themselves is clearly "recursive" AI self-awareness slop in the style of the SCP Foundation wiki.