I was on Elyse Holladay's podcast On Theme earlier this week. We were talking through naming, governance, and what AI readiness actually demands of a design system team, and one thread in that conversation deserves more room than a podcast allows.

Dan Mall's "three times is a pattern" is one of those ideas that spread through the design systems community because it solves a real problem. The logic is clean: if one team needs a component, wait. If two teams need it, note it. If three teams independently arrive at the same need, you probably have something worth abstracting into the system. Three is meaningful precisely because it represents evidence of a genuine common problem – the pattern has emerged from independent decisions made under different constraints, by teams who weren't coordinating with each other. That's the condition that makes abstraction safe. Fewer than three instances and you risk premature generalisation, encoding something too specific to hold up as shared infrastructure.

The rule works because it assumes independence. Three teams arriving at the same solution from different directions signals something real about the problem space. The pattern is discovered rather than manufactured. And for most of design systems' history, that assumption held. Contribution processes were built around human proposers with legible intent, working in separate product contexts, making separate decisions. The rule didn't need to account for a shared generative source because there wasn't one.

AI tools change that specific condition, without anyone deciding to change it.

When teams use code generation tools to build product interfaces, those tools aren't making design decisions. They're reproducing patterns from their training data, defaulting to whatever combinations appeared most frequently across the codebases they were trained on. A 2026 paper on design homogenisation in vibe coding documented exactly this defaulting behaviour, and found that deadline pressure makes teams more likely to ship the suggestion than examine it. The path of least resistance isn't to question whether the suggestion fits the product. It's to ship it.

So when three teams start consistently combining the same header component, data table, and filter row, your contribution process might reasonably flag that as a pattern candidate. Three teams, same structure, independent codebases. But if all three arrived at that layout because the same AI tool suggested it, and all three accepted it because it looked reasonable, you don't have convergent evidence of a good design. You have an echo of a training data distribution.

The frequency is real. The signal isn't.

This matters more than it might initially seem, because the evaluation criteria most contribution processes rely on are built around human provenance. When a person proposes a pattern for promotion, you can ask what problem they were solving. You get context about the constraints they were working under, the alternatives they considered, what the pattern had to do to earn its place. The proposal comes with reasoning attached. Even a brief Slack message explaining "we kept needing this and it wasn't in the system" tells you something about intentionality.

When an AI surfaces a pattern through repeated suggestion and repeated acceptance, that reasoning isn't available. Nobody made a deliberate choice about whether this was the right solution for this problem. The decision was distributed across many small moments of not pushing back, which is a different thing entirely from many small moments of considered agreement.

There's a compounding dimension to this too. The self-consuming loop – where AI-generated outputs gradually populate codebases, and future AI suggestions are shaped by those same codebases – is well-documented at the model training level. Research published in Nature has shown that training generative models on their own outputs, rather than on diverse human-generated data, degrades both quality and diversity over successive generations. The direct parallel to design systems isn't exact – teams aren't retraining foundation models on their codebase outputs – but the directional dynamic holds at a smaller scale. Patterns that get accepted accumulate in the codebase. Future suggestions from the same tools, operating in the same context, will be informed by what's already there. The echo gets louder the longer it goes unchallenged.
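To make the direction of that dynamic concrete, here's a toy sketch in TypeScript. This is nothing like the Nature study's actual methodology; the distribution, bias factor, and generation count are all invented purely to show the shape of the effect: a population of pattern variants repeatedly resampled with a mild bias toward its own most frequent members, the way accepted suggestions accumulate and then inform future suggestions.

```typescript
// Toy illustration of the compounding echo. Not the Nature study's method;
// every number here is invented to show the direction of the effect.

// Shannon entropy in bits: a rough proxy for pattern diversity.
function entropy(p: number[]): number {
  return -p.reduce((sum, x) => (x > 0 ? sum + x * Math.log2(x) : sum), 0);
}

// One "generation": amplify already-frequent patterns, then renormalise.
// An exponent above 1 models the bias toward shipping the common suggestion.
function generation(p: number[], bias = 1.3): number[] {
  const amplified = p.map((x) => Math.pow(x, bias));
  const total = amplified.reduce((sum, x) => sum + x, 0);
  return amplified.map((x) => x / total);
}

// Ten pattern variants, one only slightly more common than the rest.
let dist = [0.14, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.06];
for (let gen = 0; gen <= 10; gen++) {
  console.log(
    `gen ${gen}: ${entropy(dist).toFixed(3)} bits of diversity, ` +
      `top pattern at ${(dist[0] * 100).toFixed(1)}%`
  );
  dist = generation(dist);
}
```

The slight head start compounds: diversity drains a little each generation, no single step looks alarming, and that's precisely how the echo gets louder without anyone choosing it.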

None of this means the three times rule is wrong. It was a good heuristic for the conditions it was designed for, and those conditions have changed in a specific way. The rule assumed human proposers with legible intent. Adapting it means adding a question that never used to need asking: how did these instances come to exist, and do the teams who accepted them understand what they were agreeing to?

That's a harder question to answer than counting instances, and most design system teams don't currently have the observability to answer it without doing additional work. You'd need to know whether teams using a pattern are satisfied with the outcomes, or just accepted it without question. Whether it's appearing across genuinely different product contexts or within a narrow surface area where the same tool gets used most. Whether the pattern holds up under accessibility or performance constraints that the AI didn't surface unprompted.
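If a team did want to build that observability into a contribution pipeline, the shape might look something like the sketch below. Everything here is hypothetical – the type names, the provenance labels, the thresholds – and it's a sketch of the idea, not an existing tool. The point it illustrates is that instance count becomes one input among several rather than the deciding signal.

```typescript
// Hypothetical shape for a provenance-aware promotion check. Names,
// labels, and thresholds are invented for illustration.

type Provenance =
  | "human-authored"          // a person built it deliberately
  | "ai-suggested-reviewed"   // an AI suggested it, a person examined it
  | "ai-suggested-accepted";  // an AI suggested it, nobody pushed back

interface PatternInstance {
  team: string;
  productContext: string;     // the surface or product area it ships in
  provenance: Provenance;
  a11yReviewed: boolean;      // checked against accessibility constraints?
}

function promotionSignal(instances: PatternInstance[]) {
  const teams = new Set(instances.map((i) => i.team));
  const contexts = new Set(instances.map((i) => i.productContext));
  // Unexamined acceptances count toward frequency, not toward signal.
  const considered = instances.filter(
    (i) => i.provenance !== "ai-suggested-accepted"
  );
  return {
    meetsThreeTimes: teams.size >= 3,
    independentContexts: contexts.size >= 3,
    consideredInstances: considered.length,
    needsA11yReview: instances.some((i) => !i.a11yReviewed),
    // Frequency only promotes when it's backed by deliberate adoption.
    recommendPromotion:
      teams.size >= 3 && contexts.size >= 3 && considered.length >= 3,
  };
}
```

The three-times threshold survives intact in that sketch; what changes is that it's no longer sufficient on its own.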

I said in the podcast that I don't have a fully resolved answer here, and I still don't. But the teams that figure out how to distinguish AI-generated frequency from genuine signal will be the ones where pattern promotion stays meaningful. The alternative is a contribution process that ends up rubber-stamping statistical defaults as design intent, and a system that gradually converges toward whatever an LLM would have suggested anyway, regardless of what the product actually needs.

That's a loss, and it happens without anyone choosing it.


If you haven't already, you can listen to the podcast by clicking the link below:

Design system quality has a business case now (and it’s ... AI?), with Murphy Trueman
Murphy Trueman joins me to dig into something most design system teams already know but haven’t wanted to say out loud: your design system isn’t ready for AI. We discuss what it actually means to treat your design system like a semantic API, how to think about governance, and why the fixes AI demands are ones we probably should have made years ago. Plus, what it means to allow more roles (and LLMs) to build with the system, and what teams can do right now to get their house in order.

Thanks for reading! If you enjoyed this article, subscribing is the best way to keep up with new posts. And if it was useful, passing it on to someone who'd find it relevant is always appreciated.

You can find me on LinkedIn, X, and Bluesky.

If there's something you'd like to see me write about, reply and let me know, or reach out directly via social media.

Free tool · Murphy Trueman

Is your design system ready for AI? AI agents are already consuming design systems. Find out if yours is structured to be understood by them.

Take the free assessment →