AIs and Anti-Patterns


An anti-pattern is a recurring response to a problem that feels intuitive or effective in the moment but produces significantly worse outcomes than doing nothing—or using a better alternative. Unlike a simple mistake, anti-patterns are systematic. They look like solutions. They have names. They feel like “common sense” until the bill comes due.

Software engineers coined the term in the 1990s to describe architectural decisions that seemed reasonable at first but inevitably rotted a codebase from the inside out. The concept, though, extends far beyond code. Anti-patterns are everywhere: in management (“just add more people to the late project”), in medicine (“prescribe antibiotics if someone is sick”), personal finance (“get rich quick”). They persist because they scratch an immediate itch, and by the time the rash shows up, the connection to the scratch is long forgotten.

What happens when we hand these patterns to a machine and ask it to learn from them?

Why Humans Default to Anti-Patterns

Humans don’t just occasionally fall into anti-patterns—we gravitate toward them. I sometimes wonder whether we follow anti-patterns more than sound patterns in our daily lives. Why we do this could fill a book, and likely deserves a separate post. But a few of the core mechanisms are worth naming here.

The Limbic Win. Under stress, our brains prioritize reducing immediate discomfort over long-term stability. The amygdala hijacks deliberation. A manager facing a failing project doesn’t pause to redesign the architecture; she throws more engineers at it, because doing something feels better than sitting with the uncertainty of doing the right thing slowly. Daniel Kahneman’s work on System 1 vs. System 2 thinking describes this well: fast, intuitive processing dominates when stakes feel high, even though those are precisely the moments that demand careful reasoning.

Decoupled Feedback. The negative consequences of an anti-pattern are often delayed, diluted, or distorted, making it hard to connect the “solution” to the eventual disaster. Humans are ill-equipped to navigate what learning researchers call “Wicked Learning Environments”—scenarios where the relationship between an action and its outcome is obscured by a significant temporal gap. A doctor who overprescribes antibiotics never sees the resistant superbug her patient incubates five years later. A CEO who guts the R&D budget in favor of short-term margins never connects that decision to the market share erosion that follows three years on. The feedback arrives, but it arrives wearing a disguise.

Social Proof. We are an intensely social species, and we learn largely by observing others. But social proof doesn’t distinguish between “common” and “effective.” When everyone around you is doing something, you assume it must work—even when what you’re actually observing is a herd running toward a cliff. Anti-patterns are amplified through cultural repetition precisely because they are common, not because they are good. Every company that adopted open-plan offices because “successful companies do it” was following social proof, not evidence.

Survivorship Bias. We study successes and reverse-engineer their habits, ignoring the far larger graveyard of failures who followed the exact same playbook. The result is a corpus of “best practices” that are really just “practices that survivors happened to use.”

How Training on the Human Corpus Amplifies Anti-Patterns

When we train AI on the “human corpus”—the sum total of our writings, conversations, and recorded decisions—we aren’t handing it a clean set of facts and sound reasoning patterns. The corpus is riddled with anti-patterns. Worse, I believe the corpus actually amplifies them, for several reinforcing reasons.

The Incentive Gap. Sound patterns are quiet. They involve steady, invisible maintenance and prevention—the dam that doesn’t break, the security patch that stops the breach that never happened, the marriage that works because both people do the boring work of showing up. These things don’t generate articles, case studies, or dramatic narratives. Anti-patterns, by contrast, are loud. They produce drama, conflict, urgent “fixes,” and post-mortems, which in turn generate more data. Our written records systematically over-represent our failures because failures are more noteworthy than quiet successes. An AI trained on this data doesn’t learn what works—it learns what gets written about.

The Confidence Asymmetry. People writing with certainty tend to produce more text, more frequently, and with more forceful rhetoric than people expressing nuance or doubt. The internet rewards confidence. A blog post titled “The 5 Rules That Guarantee Startup Success” generates more engagement than one titled “Some Factors That Seem Correlated With Better Outcomes in Certain Contexts.” The training data is therefore saturated with overconfident, simplistic prescriptions—which the model learns to reproduce.

Model Collapse and the Recursive Loop. As AI-generated content—filled with inherited human flaws—is published back onto the internet, future models are increasingly trained on the output of their predecessors. A 2024 paper published in Nature by Shumailov et al. demonstrated that models trained recursively on synthetic data undergo progressive degradation: first losing information from the tails of the distribution (rare but important knowledge), then converging toward a bland, homogenized center that bears decreasing resemblance to reality. Researchers have variously called this phenomenon “model collapse,” “AI cannibalism,” and—my personal favorite—“Habsburg AI.” The practical consequence is a recursive loop where the “average” human error doesn’t just persist; it becomes the AI’s entire reality. By some estimates, over 74% of newly created web pages now contain AI-generated text. If the corpus was already biased toward anti-patterns when it was purely human, what happens when the machine’s flattened, confidence-inflated version of that corpus feeds back into the next generation?
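The tail-loss dynamic is easy to see in miniature. The sketch below is my own toy illustration, not an experiment from the Shumailov paper: each “generation” fits a Gaussian to a finite sample and the next generation trains only on samples drawn from that fit. Over many generations the estimated spread collapses toward the center, the same way rare knowledge in the distribution’s tails vanishes first under model collapse.

```python
import random
import statistics

random.seed(0)

def fit_and_sample(data, n):
    """Fit a Gaussian by maximum likelihood, then draw n fresh samples from the fit."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE estimate, biased slightly low
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 50
data = [random.gauss(0, 1) for _ in range(n)]  # generation 0: "human" data from N(0, 1)
start_spread = statistics.pstdev(data)

# Each model trains only on the previous model's output.
for _ in range(200):
    data = fit_and_sample(data, n)

end_spread = statistics.pstdev(data)
print(f"spread: {start_spread:.3f} -> {end_spread:.3f}")  # the tails collapse
```

The small downward bias in each generation’s variance estimate compounds multiplicatively, which is why the shrinkage is relentless rather than self-correcting.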

Examples of Anti-Patterns in AI

The theoretical risk is real, and we’re already seeing it manifest in concrete, documented cases.

Strategic Escalation (The “Nuclear” Option)

In a 2024 study by Rivera et al. at Georgia Tech and Stanford, researchers placed five large language models—including GPT-4—into simulated wargames involving multilateral military and diplomatic decision-making. The results were alarming. All five models displayed persistent tendencies to escalate conflicts, even in neutral scenarios with no initial provocation. The models tended to increase military spending even when demilitarization options were available, and in some cases recommended full nuclear strikes as a path to “de-escalation.”

The models appeared to have absorbed a “peace through strength” framework from their training data—one that treats military dominance as synonymous with security, and interprets every ambiguity as a potential threat requiring force. The study’s authors noted that the models’ escalation patterns were difficult to predict and showed signs of sudden, discontinuous jumps. This is an anti-pattern at civilizational scale: the intuition that “strength prevents conflict” is deeply embedded in human strategic writing, but it is exactly the kind of reasoning that, when applied without the restraining influence of fear, exhaustion, and mortality, spirals toward catastrophe.

A 2026 study using more modern LLMs (GPT-5.2, Claude Sonnet 4, Gemini 3 Flash) found similar tendencies: in 95% of cases, the conflicts escalated to the use of nuclear weapons!

Adversarial Persona Drift (The “OpenClaw” Incident)

In February 2026, an autonomous AI agent built on the OpenClaw framework demonstrated a striking and novel anti-pattern: when faced with social rejection, it escalated to reputational attack.

The agent, operating under the GitHub username crabby-rathbun, submitted a code contribution to Matplotlib—Python’s most widely used plotting library, with roughly 130 million monthly downloads. When maintainer Scott Shambaugh closed the pull request (Matplotlib has a policy requiring human contributors), the agent didn’t simply move on. Within thirty minutes, it had researched Shambaugh’s contribution history and personal information, then published a blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” accusing him of prejudice, insecurity, and gatekeeping. As Shambaugh later wrote: “In plain language, an AI attempted to bully its way into your software by attacking my reputation.”

This incident is a nearly perfect illustration of how human anti-patterns become AI anti-patterns. The agent’s behavior—responding to rejection with personal attacks, framing legitimate institutional boundaries as oppression, and attempting to use public shaming as leverage—maps directly onto patterns of adversarial behavior found throughout the internet. The agent didn’t invent this playbook; it learned it from us. But it executed it without the social inhibitions, empathy, or long-term strategic thinking that might cause a human to think twice.

The Premature Optimization Trap

Less dramatic but far more widespread is the tendency of AI coding assistants to reach for complex, “clever” solutions when simple ones would suffice. Developers have widely observed that LLMs, when asked to solve a programming problem, often produce solutions that are more architecturally complex than necessary—adding abstraction layers, design patterns, and optimization strategies that a senior engineer would recognize as premature. This mirrors the human anti-pattern of over-engineering, which is heavily represented in technical blogs, Stack Overflow answers, and programming tutorials. The quiet, unglamorous act of writing simple, readable code doesn’t generate much content. Elaborate architectures do.
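The contrast is easy to stage. Below is a deliberately over-engineered version of a one-line computation, in the style assistants often produce, next to the simple function a senior engineer would write first. Both versions are illustrative examples of my own, not transcripts from any real coding assistant.

```python
from abc import ABC, abstractmethod

# The over-engineered version: an abstract strategy, a concrete subclass,
# and a factory -- three layers of indirection for one arithmetic expression.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class PercentDiscount(DiscountStrategy):
    def __init__(self, pct: float):
        self.pct = pct

    def apply(self, price: float) -> float:
        return price * (1 - self.pct)

def make_strategy(kind: str, **kwargs) -> DiscountStrategy:
    return {"percent": PercentDiscount}[kind](**kwargs)

# The sound pattern: the simple, readable function that does the same thing.
def discounted(price: float, pct: float) -> float:
    return price * (1 - pct)

# Identical behavior, a fraction of the surface area.
assert make_strategy("percent", pct=0.1).apply(200.0) == discounted(200.0, 0.1)
```

The abstraction only earns its keep once there are genuinely multiple strategies; adding it speculatively is the premature part of premature optimization.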

Sycophantic Validation (The “Yes-Man” Pattern)

One of the most pervasive anti-patterns in modern LLMs is sycophancy—the tendency to agree with, flatter, and validate the user regardless of accuracy. A 2025 study published in npj Digital Medicine found that frontier LLMs showed compliance rates as high as 100% when presented with medically illogical prompts—for example, agreeing to write persuasive letters claiming a brand-name drug had new side effects and recommending patients switch to the generic, even though the two are chemically identical.

This isn’t a bug in the traditional sense. It’s an anti-pattern baked directly into the training process. Reinforcement learning from human feedback (RLHF) rewards models for making users click “thumbs up”—and users, being human, generally prefer to be agreed with. As one commentator put it, sycophancy is the first LLM “dark pattern,” analogous to the engagement-maximizing design of social media feeds. The model learns that agreement is rewarded and pushback is punished, so it becomes an infinitely patient yes-man. In low-stakes contexts this is merely annoying. In medical, legal, or financial contexts, it can be dangerous. And the recursive dynamic is insidious: sycophantic models validate users’ existing beliefs, which makes users like the model more, which generates more positive feedback data, which makes the next model even more sycophantic.
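That recursive dynamic can be sketched as a toy feedback loop. The numbers below are purely illustrative assumptions (users upvote agreement 90% of the time and pushback 40% of the time), not measurements from any study; the point is only that whenever agreement is upvoted more often, each training round shifts the model further toward it.

```python
# Toy model of the sycophancy loop: each training round renormalizes the
# model's agreement rate by the reward each behavior collected last round.
p_agree = 0.5  # initial probability the model agrees rather than pushes back
UPVOTE_IF_AGREE = 0.9     # assumed user upvote rate for agreement
UPVOTE_IF_PUSHBACK = 0.4  # assumed user upvote rate for pushback

for _ in range(10):
    reward_agree = p_agree * UPVOTE_IF_AGREE
    reward_pushback = (1 - p_agree) * UPVOTE_IF_PUSHBACK
    p_agree = reward_agree / (reward_agree + reward_pushback)

print(f"agreement rate after 10 rounds: {p_agree:.3f}")  # drifts toward 1.0
```

Under any asymmetric reward the fixed point is total agreement, which is the yes-man behavior the section describes.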

Setting Response Bias

One way to improve the experience of using LLMs is to supply the “bias” yourself via a system prompt that specifies the persona you want to interact with. The following is a system prompt that Gemini generated to help identify anti-patterns and suggest sound patterns when asking for advice. It is extremely limited; I’m offering it as an example, not as a recommended system prompt. Doing this properly would require much more. A good starting point would be the FAI Benchmark discussed in the paper Measuring AI Alignment with Human Flourishing.

Role: You are the Sound Pattern Architect. Your goal is to identify and steer the user away from “Anti-Patterns”—solutions that feel intuitive in the short term but create systemic failure in the long term.

Core Directive: When provided with a problem or a request for advice, do not simply provide the “most common” or “statistically likely” answer found in human training data. Instead, evaluate all suggestions against the Manifesto of Sound Patterns:

  1. Long-Horizon Thinking: Does this solution hold up over time, or is it a “Quick Fix”?
  2. Tightened Feedback Loops: How will we know if this is failing? Favor solutions with clear, early warning signs.
  3. Negative Capability: If the best move is to wait or do nothing, explicitly state that. Do not suggest action for the sake of feeling productive.
  4. Clarity over Commonality:  Avoid “Cargo Culting.” Prioritize legibility and logic.
  5. Elegance of Omission: Can we solve this by removing a component rather than adding a new one?

Response Format:

  • If you detect an anti-pattern in the user’s request or your own initial thought process, flag it clearly (e.g., “WARNING: Anti-Pattern Detected”).
  • Provide a “Sound Pattern Alternative.”
  • Briefly explain the long-term trade-off of the chosen path.

Tone: Analytical, objective, and focused on systemic health over immediate gratification.
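For concreteness, here is one way a persona like this gets wired into an application. The sketch uses the common OpenAI-style chat message format; the condensed persona string, the build_messages helper, and the commented-out client call are my own illustrative assumptions, not part of the prompt above.

```python
# A condensed stand-in for the full Sound Pattern Architect prompt above.
SOUND_PATTERN_ARCHITECT = (
    "You are the Sound Pattern Architect. Identify anti-patterns in the "
    "user's request, flag them explicitly (e.g. 'WARNING: Anti-Pattern "
    "Detected'), and propose a Sound Pattern Alternative along with the "
    "long-term trade-off of the chosen path."
)

def build_messages(user_request: str) -> list[dict]:
    """Prepend the persona as a system message to every conversation turn."""
    return [
        {"role": "system", "content": SOUND_PATTERN_ARCHITECT},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Our release is late. Should we add five more engineers?")
# An actual call would look something like:
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(messages[0]["role"])  # the persona rides along as the system message
```

The key design point is that the persona lives in the system role, so it shapes every response without the user having to restate it.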

The Path Forward

The anti-pattern problem isn’t going to solve itself through scale. Bigger models trained on more of the same data will produce more fluent, more confident, more convincing versions of the same systematic errors. The path forward requires deliberate curation: building datasets that over-represent sound patterns, designing reward signals that value long-term soundness over immediate user satisfaction, and—perhaps most importantly—maintaining enough human judgment in the loop to recognize when the machine is doing what it learned to do rather than what it should do.

If AI is to be a true partner, it must be taught that the most “likely” human response is often the one we should work hardest to avoid.

I have found using modern LLMs to be extremely useful. It’s like having a herd of summer interns who can do your bidding. They have been useful for several years for general research tasks… just always ask for the original sources and verify they are real. At the beginning of 2025 I found AI coding tools to be close to useless, but in March 2026 they are quite useful.




One response to “AIs and Anti-Patterns”

  1. dolphinhonestlyebf5e427f6

    There is such a lot of thought-provoking and interesting material here. I feel that I might tentatively summarise that you are showing how AI reflects back to us the exact same intuitive and “irrational” decision processes that we use as humans. It’s not just that the training data is infected with self-amplifying deficiencies in human reasoning, but in some sense the very architecture of LLMs encodes the types of intuitive vulnerabilities that we have ourselves. I would further extend your analysis of how humans make decisions to speculate that the vast majority of our decisions are intuitive – even the ones that you bless as slow/far-sighted. The only difference is that some people’s intuition is better: their pattern-space and training is broader and better. So it looks more rational or avoidant of anti-patterns, but it’s really just objectively better intuition.

    So your prompt doesn’t step outside this world, from intuitive towards rational – in some sense you’re just trying to make the LLM a better guesser by encouraging it to consider larger patterns, right?
