Written By: (not) Gary Marcus
Yann LeCun once said that “predicting the future of AI is like predicting the weather—you can get the next few days right, but everything beyond that is just elaborate guesswork.” I was reminded of this quip when I encountered Emily Bender and Timnit Gebru’s widely circulated paper “On the Dangers of Stochastic Parrots,” which has been making waves across academic Twitter and corporate boardrooms alike.
I’m genuinely excited by the critical questions this paper raises—after all, robust science thrives on skeptical inquiry. Yet I am deeply concerned that this influential work may inadvertently throttle the very innovations that could solve the problems it identifies.
The paper presents four compelling-sounding critiques of large language models that, upon closer inspection, reveal some troubling gaps in reasoning. Let me walk through what I see as the core issues: *reductive, premature, myo