
@buwilliams
Last active May 2, 2025 13:27
X.com debate about the impact of vibe coding on coders, vibe code reliability, and predictions about the future of human coders.

Over the past two days I had an interesting conversation with Jamie and Lucas about vibe coding and the future of human coding. It covers the impact of vibe coding on coders, the reliability of vibe-coded software, and predictions about the future of human coders. It also shows how people can disagree yet discuss the topic confidently and open-mindedly.

In addition to these fruitful conversations, several people wrote nasty things to me. Messages of hate. While their methods were deplorable, I still learned from them, and I address some of their concerns in the last thread listed here.

Jamie Voynow (@jamievoynow):

Link

Confession: I regret vibe coding so much over the past year or so and am actively trying to depend less on LLMs for writing code. I can't help but feel like my focus, debugging skills, and syntax muscle memory are worse off. Trading low-level abilities for speed and product sense is good in the short term, but it is not sustainable over a long career in technology.

Buddy Williams (@BuddyIterate):

Link

Maybe it helps to give your experience a name and context. I think what you are experiencing is technophobia. It refers to the fear or distrust of new technologies and their societal impacts. Examples include the printing press, the Industrial Revolution, the telephone, and the internet.

People felt the same way about books too.

The idea that books could impair memory dates back to ancient times, most famously articulated by Socrates in Plato’s Phaedrus (c. 370 BCE). Socrates argued that writing would weaken memory by encouraging reliance on external records rather than internal recollection. He likened books to a crutch that would make people “forgetful” and less capable of retaining knowledge independently.

This concern resurfaced with the advent of the printing press in the 15th century. In 1492, Johannes Trithemius, a German abbot, wrote In Praise of Scribes, lamenting that printed books would undermine the mental discipline of monks who memorized texts through manual copying. He feared reliance on books would erode their capacity for deep recollection.

Jamie Voynow (@jamievoynow):

Link

This is a great example, but I want to be clear that I'm not 100% anti-LLM; rather, I simply want to continue learning how to improve as a software engineer (writing code by hand), whereas I feel some of my skill development is stagnating.

Buddy Williams (@BuddyIterate):

Link

Skill stagnation is an immediate and obvious side-effect of tool usage. It's totally reasonable to pursue being good at coding, but may I suggest a different motivation? Perhaps because you enjoy it? What kind of world do you want to live in? A world where we must code or a world where it is optional? Extrapolate this out, and there are many activities that some people wish were optional. Is it possible that too much of your identity has been captured by "developer"? There is a certain amount of social and economic value in being an intellectual, and one title that culturally signals intellect is "software developer". Historically, an elite club. I disparage elitism and social hierarchy as they are zero-sum games. Obviously, I don't know your motivations.

Lucas Baker (@lucasbaker):

Link

The most obvious reply is that every system inevitably fails at some point. Having well-honed skills and relevant knowledge means you are not dependent on the system to fix urgent problems or even to continue day-to-day work (in the event the system or tool you use is unavailable). Skill atrophy is a real concern in a world where we're increasingly reliant on digital systems to perform vital tasks. What happens when these systems fail and require intervention in order to fix them? What happens if there is no one left who knows how to do so? Convenience and efficiency are great - no one disputes that - but trading them for agency and skill exacts a high price in the long term.

Buddy Williams (@BuddyIterate):

Link

Interesting. Every system does seem to fail, except perhaps the system of existence itself, if such a thing can be defined this way. What I find interesting about your point is that it combines a philosophical belief ("every system inevitably fails") with a temporal belief ("not dependent...fix...problems"), which assumes these systems will fail in such a way that dependence is undesirable. I wonder, so what? How many such systems is our world built on? JIT (just-in-time) manufacturing comes immediately to mind, but there are many others, such as power, financial, and even biological systems. Humanity continues to flourish in the face of these faulty systems. So it seems that fault tolerance is a feature of these systems, since they work better than not having them. To lose confidence in them is to cast a vote against the human project of overcoming scarcity. So, while I take the point that failure is likely, I disagree with the philosophical view that we should all learn how to farm (extrapolated from the "not be dependent" point) and with your conclusion of a "high price in the long term"; I believe it is just the opposite.

Lucas Baker (@lucasbaker):

Link

While I can appreciate your perspective, I think you've drawn a false equivalence. My concern is specifically about skill atrophy in software engineering due to reliance on LLMs (the loss of the ability to debug or code manually when tools fail). Comparing this to societal systems like power grids or JIT manufacturing is a huge oversimplification. While I acknowledge that parallels exist, those systems have built-in fault tolerance. There's no backup for an individual's lost skills. Personally, I don't reject technology or progress. Instead, I advocate for a balance to ensure that we as humans retain the ability and the freedom to fix problems when systems inevitably fail.

Buddy Williams (@BuddyIterate):

Link

If I’ve drawn a false equivalence, show me. The balance you mentioned sounds like a preference to me, not a necessary consequence of faulty systems. I don’t see how you draw a sharp line between farming, elevator operation, sewing, or coding. We could go more ancient, to truly dead skills, and ask how they are fundamentally different. They are all means to produce a desired outcome. Manual code authorship is a means to provide instructions for computers. There are other means, and with AI there are increasingly abundant and accessible ones (i.e., plain language). Code is not an end; it is a means. Once we forge better means that preserve or enhance the ends, we move on. There are, and will likely continue to be, coders, but they likely will not be necessary in any tangible way.

Lucas Baker (@lucasbaker):

Link

Apologies in advance for the length of this, but I want to give your question more than a cursory response.

The equivalence you draw between manual coding and historically obsolete skills like farming, elevator operation, or hand sewing glosses over some key distinctions that make coding fundamentally different in today's context. Let me try to explain my thinking with some grounded reasoning.

First, coding isn't just a "means to an end" in the same way operating a manual elevator or sewing by hand became outdated. Those skills were replaced by automation that fully removed the need for human intervention - elevators became automatic, and sewing machines scaled production. Coding, however, underpins the very systems that enable AI and LLMs to function. When you mention that 25% of Y Combinator's W25 startups have 95% AI-generated codebases, that does demonstrate AI’s power, but it also highlights a critical dependency: those AI systems were built and are maintained by engineers who understand code deeply. If a critical bug arises in an AI-generated codebase - for example, a security flaw, which studies have shown can be introduced by AI - you need skilled people who can debug and fix it manually. This isn't hypothetical; AI-generated code has been known to cause outages or require heavy debugging. The skill of manual coding isn't obsolete - it is quite literally the foundation on which the technology you argue will replace it is built.

Second, the stakes and systemic role of coding differ vastly from the examples you provide. Skills like farming or sewing became less critical for most individuals because industrial systems (like factory farming and mass production) took over, and those systems have fault tolerance built in - think of supply chains with redundancies. But as I mentioned in my previous post, there's no equivalent "backup" for an individual engineer's lost skills when an AI tool fails. If an AI generates flawed code and no one on the team can fix it because they've fully outsourced their coding ability, the system grinds to a halt. This isn't like a farmer relying on grocery stores - coding is more akin to the power grid you referenced earlier, but with a key difference: power grids have dedicated engineers who maintain them, not just end-users who press a button. If we all become end-users of AI coding tools without understanding the "grid" of software, we risk a systemic failure with no one left to intervene.

Finally, I don't think the historical analogy fully holds because coding has a unique role in enabling adaptability and innovation. Skills like using a slide rule became obsolete because calculators provided a superior, self-contained solution. But AI isn't a complete replacement for coding - it is a tool that augments it. Over-relying on AI without understanding the code can stall skill development, which is exactly my concern. Coding doesn't just involve producing instructions for computers; it involves problem-solving, debugging, and creating new paradigms - skills that remain essential even as AI advances. The "better means" you describe (e.g., plain language coding) still require a foundation of technical knowledge to evaluate and refine the output.

I'm not arguing for a rejection of AI - I agree it's transformative. But the balance I advocate for isn't a mere preference; it's a pragmatic necessity to ensure we retain the agency to fix, innovate, and adapt when these tools inevitably fall short. Coding isn't a "dead skill" like Morse code. It's the scaffolding of our digital world, and, as I see it, we dismantle it at our peril.

Again, I apologize for the length of this, but I didn't just want to gloss over my response. Too often on this platform, discussions end up getting lost in answers that prioritize brevity over deeper thinking.

Buddy Williams (@BuddyIterate):

Link

You’ve made great points that deserved my consideration. I will continue to marinate on this discussion. My goal isn’t to win, it’s to learn. After hours of writing and self-critiquing with o3, I can see a few places where my mental model has shifted. Big thanks for pushing me.

My initial approach used reasoning by analogy since it’s easier. This time, I’ll compare our beliefs. I decided to use the format: (1) Your View, (2) My View, (3) Comparison, and (4) Takeaways.

Your View (steel-manned)

  1. AI-generated code is still buggy, with defect rates far above aerospace-grade tolerances.
  2. Human coding skill atrophy falls faster than AI reliability rises.
  3. Software isn’t sewing; it’s the substrate [an underlying substance or layer] for power grids, finance, and healthcare.
  4. Even if AI writes code, humans still have to sign off on safety and compliance.

Your conclusion: Keep strong human coders until AI’s mean-time-to-repair (MTTR) beats the best on-call teams and keeps beating them for an extended period.

My View

  1. It is logically possible for an AI to solve any coding task. If a task can be written as a finite spec, it’s computable (Church-Turing). A strong enough Turing machine could create a correct program.
  2. It is likely that such an AI system will be built. I’m about 70% confident that it will happen. Every frontier lab has “AI coding researcher” on the roadmap, and their track record (AlphaCode-2, Devin) says they will get there.
  3. How fast? METR’s “long-task” curve is doubling about every 7 months. Projecting that forward (even without a recursive speed-up) puts us roughly here (year, rough % of coding hours done by humans): 2025, ~80%; 2027, ~50%; 2029, ~15%; 2031, ~10% (see the sketch after this list).
  4. Code is not a substrate. Code is an information medium. It’s already possible to translate plain language into high-level code and then into machine code (Claude Code). The translation step to high-level code is not necessary.
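
To make the extrapolation concrete, here is a minimal Python sketch of what a steady 7-month doubling compounds to. The month offsets are illustrative assumptions, and the human-share percentages are the rough guesses from point 3, not outputs of the doubling math.

```python
# Minimal sketch: compounding a "doubling about every 7 months" capability curve.
# The human-share percentages are rough guesses, not derived from the doubling math.

DOUBLING_MONTHS = 7

def capability_multiple(months_from_now: float) -> float:
    """Capability multiple relative to today, assuming a steady 7-month doubling."""
    return 2 ** (months_from_now / DOUBLING_MONTHS)

# (year, assumed months from mid-2025, rough % of coding hours still done by humans)
projection = [
    (2025, 0, 80),
    (2027, 24, 50),
    (2029, 48, 15),
    (2031, 72, 10),
]

for year, months, human_share in projection:
    print(f"{year}: ~{capability_multiple(months):,.0f}x today's task horizon, "
          f"~{human_share}% of coding hours still human (guess)")
```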

My conclusion: Coders are mission-critical today. Coders become a nice-to-have somewhere between 2027 and 2031. Coders become practically obsolete after that.

Where we align

  • Today’s AI code still ships too many defects.

Where we split

  • I see code as a medium for information; you see code as a substrate.
  • I believe AI coding automation is logically possible and probable; you believe AI is transformative.
  • I think the error curve drops fast (7-month halvings); you’re betting it’s slower.
  • I see humans at less than 10% share after 2030; you see an evergreen core team.

My takeaways

  1. I have a new commitment to present others’ arguments in a “steelman” form.
  2. You’ve changed my mind about the current necessity of coders. They will be needed for several years, even as the need declines.
  3. Coders will become practically obsolete (not absolutely) in 2 to 5 years.
  4. I do not believe code is a substrate. Therefore, the original analogy to previous technologies holds.

Please let me know if I misunderstood or misrepresented you!

Lucas Baker (@lucasbaker):

Link

Thank you for your thoughtful response. I enjoyed reading it and you raised some very good points. I still fundamentally disagree with the premise that AI can fully replace human coders without consequence. AI, like any system, is inherently fallible. No system, not even the universe itself, escapes entropy and eventual failure. If AI becomes as critical to humanity as you suggest, performing every task from coding to survival functions, its inevitable failures will demand human agency to intervene, fix errors, and operate in its absence. Coding must remain a practiced skill, not a relic, because unused skills atrophy, leaving us vulnerable to a future where we're passive spectators, wholly dependent on AI's whims. I'm genuinely trying not to be alarmist, but I am advocating for balance: we must resist the seductive convenience of AI to preserve our agency, ensuring a future where we, our children, and their children can choose to code, philosophize, or pursue any path - living fully, not merely existing, in a world where AI augments rather than masters us.

Buddy Williams (@BuddyIterate):

Link

I enjoyed our discussion! Please send me your reading/watching recommendations. Here are two recommendations of my own: (1) Recent research in physics may suggest the possibility of escaping heat death. It's interesting stuff, and maybe the universe you live in is not what you thought! https://youtube.com/watch?v=XhB3qH_TFds (2) Deep Utopia by Nick Bostrom, which addresses the challenges of living in a solved world; it deals with purpose and agency. https://amazon.com/Deep-Utopia-Meaning-Solved-World/dp/B0DK22GVWY

Buddy Williams (@BuddyIterate):

Link

Some people had a strong negative reaction to my post. In fact, I've never received so much hate. I'm sure this hate is insignificant compared to what others receive, but for me it was a lot.

I want to talk about where I went wrong and what I'm doing about it.

The strong reaction from some seems to stem from the word "technophobia." I believe these people see "-phobia" and assume I [insensitively] believe @jamievoynow is weak or inferior in some way.

To set the record straight, I do not think Jamie is weak or inferior. In this situation, I don't like the word and I didn't like it when I found it. It was just the closest word I could find to describe a set of ideas I wanted to discuss.

Here's the definition I used: "a fear or distrust of new technologies and their societal impacts"

Here is what I meant: "distrust of new technologies...[due to]...societal impacts"

I omitted the word "fear," which I believe was the essential emotional trigger for some people. I've also added causation by using [due to]. This shows that the distrust is a result of societal impacts.

It turns out that there is a better word: technoskepticism! It means "a stance of skepticism, doubt, or questioning about the positive impacts and potential outcomes of technology."

And why should Jamie have no skepticism? It's not at all certain that AI will turn out well. It's completely understandable why some people are skeptical.

I believe in my message, but "technoskepticism" is a better word choice.

Sorry Jamie!
