AI doesn't need your design system to be perfect. It needs it to be honest.
Why documenting your mess matters more than fixing it
I’ve been auditing design systems for AI readiness over the past year, and I keep noticing the same pattern. Teams assume they need to fix everything before AI tools can help them. They think the mess disqualifies them. It doesn’t.
The systems that work best with AI aren’t the cleanest ones. They’re the ones that know where their problems are.
A team that documents “our naming is inconsistent in these three places” gives AI something to work with. A team that thinks their naming is consistent but actually has seventeen variations of the same pattern gives AI a minefield. Both systems are messy, but only one is honest about it.
That honesty matters more than I expected.
Why honesty beats polish
When I wrote about design system entropy, I described how AI surfaces every shortcut and inconsistency in your system. It doesn’t smooth over gaps the way humans do. It sees btn_primary and button[variant=primary] as two unrelated patterns, because structurally, they are.
What I’ve learned since is that this isn’t necessarily a problem. The problem starts when you don’t know about it.
AI can work with inconsistency if you tell it the inconsistency exists. You can add context in your prompts. You can document the exceptions. You can flag the areas where your system contradicts itself. AI can adjust.
What AI can’t do is compensate for hidden mess. It can’t guess that the detached Modal was supposed to be the same as the component in your library. It can’t infer that spacing.large in Figma and space-lg in code mean the same thing. It doesn’t have access to the conversation you had six months ago when someone decided to name things differently.
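The fix for that last case is an explicit mapping. Here’s a minimal sketch of one, as plain data a tool or a prompt can consume. Every name in it is illustrative, not from any real system:

```python
# Minimal sketch: an explicit alias map recording which design-tool
# token names and code token names refer to the same thing.
# All names here are illustrative, not from any real system.

TOKEN_ALIASES = {
    "spacing.large": "space-lg",    # Figma name -> code name
    "spacing.medium": "space-md",
    "color.border.critical": "color-border-critical",
}

def to_code_token(figma_name: str) -> str:
    """Resolve a design-tool token name to its code equivalent.

    Raises KeyError for unmapped names, so hidden mismatches
    surface immediately instead of being silently guessed at.
    """
    return TOKEN_ALIASES[figma_name]
```

The deliberate choice here is the loud failure: an unmapped name throws rather than falling back to a guess, which is exactly the behaviour you want from anything standing in for institutional knowledge.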
The question isn’t whether your system has problems; every system has them. The question is whether you know what they are.
The new hire test
There’s a simple way to think about this. Imagine a senior developer joins your team tomorrow. They’re talented, they work fast, and they’re eager to ship. But they’ve never seen your system before.
How much could they figure out just by reading what you’ve documented?
If your component names communicate purpose, they’ll understand what OnboardingStep is for. If your names are things like CardBase and BoxAlt, they’ll spend their first week asking questions in Slack. If your tokens use semantic naming like color-border-critical, they’ll apply them correctly. If your tokens are cherry-500 and ocean-200, they’ll guess wrong and nobody will catch it until QA.
AI is that new hire. Talented, fast, zero context.
I explored this in Your next design system user is an agent, where I argued that design systems are already APIs. The components are endpoints. The props are parameters. The question is whether your API is well-documented or whether it relies on institutional knowledge that lives in people’s heads.
The honest answer for most teams is somewhere in between. Parts of the system are clear. Parts rely on context that never got written down. Knowing which is which matters.
Messy but explicit beats polished but implicit
I worked with a team last year whose design system was, by conventional standards, a disaster. Inconsistent naming. Token gaps everywhere. Components that had drifted so far from source that the library was almost decorative.
But they knew it. They had a spreadsheet tracking every inconsistency. They’d documented which components were safe to use and which ones would cause problems. They’d flagged the areas where Figma and code had diverged.
When they started using AI tools, they fed that documentation into the context. “Our button naming is inconsistent. Here are the three variations. Treat them as equivalent”. “These tokens map to each other despite different names”. “Ignore any Modal that isn’t in the /components/core folder”.
It worked. Not perfectly, but well enough to be useful. The AI stopped generating duplicate code. It started referencing the right components. Output that used to need heavy editing started arriving closer to usable.
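Mechanically, what that team did can be sketched as a few lines of code: keep the documented caveats in one place and prepend them to every request. The caveat text below is illustrative, assembled from the examples above rather than taken from any real system:

```python
# Sketch: prepend documented system caveats to every AI prompt so the
# model works with known inconsistencies instead of guessing at them.
# The caveats are illustrative, echoing the examples in the article.

SYSTEM_CAVEATS = [
    "Button naming is inconsistent: btn_primary, Button, and "
    "button[variant=primary] are equivalent.",
    "spacing.large (Figma) and space-lg (code) are the same token.",
    "Ignore any Modal that isn't in the /components/core folder.",
]

def build_prompt(task: str) -> str:
    """Combine the documented caveats with the actual request."""
    context = "\n".join(f"- {c}" for c in SYSTEM_CAVEATS)
    return (
        "Known inconsistencies in our design system:\n"
        f"{context}\n\nTask: {task}"
    )
```

Whether the caveats live in a Python list, a markdown file, or a spreadsheet matters far less than the fact that they exist and travel with every request.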
Compare that to teams I’ve seen with visually pristine systems that fall apart under AI because nobody documented the five implicit rules that make everything hang together. The polish was a facade. The system looked consistent but behaved inconsistently the moment anyone outside the inner circle tried to use it.
Explicit mess beats implicit order. Every time.
What honesty actually looks like
Honesty in a design system means documentation, not confession.
Start with source of truth. Is Figma canonical? Is code? When they conflict, which one wins? If you can’t answer that question instantly, neither can AI.
Track your exceptions. Every system has components that break the pattern. Every system has one-offs that seemed reasonable at the time. Document them. Name them. Flag them as exceptions rather than letting them hide among the rules.
Be clear about your debt. Technical debt in a design system is fine. Unacknowledged technical debt is a trap. If your tokens are a mess, say so. If your component hierarchy is tangled, map the tangles. The documentation doesn’t need to be pretty. It needs to be accurate.
Know your gaps. Which areas are solid? Which areas would you warn a new team member about? That warning is valuable information. Write it down somewhere.
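An exceptions list doesn’t need tooling to be useful, but making it machine-readable costs little. A rough sketch, with hypothetical component names and fields:

```python
# Sketch: a machine-readable exceptions registry. The fields and the
# entry are hypothetical; the point is that exceptions are named and
# flagged rather than left hiding among the rules.

from dataclasses import dataclass

@dataclass
class KnownException:
    component: str    # which part of the system breaks the pattern
    rule_broken: str  # the convention it violates
    guidance: str     # what a new hire (or an AI) should do instead

EXCEPTIONS = [
    KnownException(
        component="LegacyModal",
        rule_broken="components live in /components/core",
        guidance="Use Modal from /components/core instead.",
    ),
]

def warnings_for(component: str) -> list[str]:
    """Return documented guidance for a component, if any exists."""
    return [e.guidance for e in EXCEPTIONS if e.component == component]
```

The same registry can feed a lint rule, a doc page, and an AI prompt, which is the whole argument for writing the exception down once, somewhere structured.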
In JSON demystified, I wrote about design tokens as contracts between design and development. The same principle applies here. Your documentation is a contract with AI. The clearer the terms, the better the output.
You might be closer than you think
Something counterintuitive happens when teams get honest about their systems. They often discover they’re more ready than they thought.
When you stop comparing yourself to an imaginary perfect system and start assessing what you actually have, the picture shifts. Maybe your naming is inconsistent, but your component architecture is solid. Maybe your tokens are a mess, but your Figma-to-code linking is tight. Maybe you’ve got drift in one area and excellent discipline in another.
Perfection isn’t the bar. Awareness is.
I’ve talked to teams who assumed they were years away from benefiting from AI tools, only to realise that fixing two or three specific gaps would unlock real value. I’ve talked to teams who thought they were ahead and discovered that their confidence was built on assumptions nobody had validated.
Both groups arrived at the same place through the same exercise: looking honestly at what was actually there.
Finding out where you stand
I built an assessment tool because I kept having this same conversation manually. Team after team asking “are we ready?” and me asking “ready for what?” and eventually landing on “let’s just figure out what’s actually true about your system”.
The assessment covers fifteen factors across four areas: documentation, component architecture, design-to-code connection, and workflow integration. It doesn’t ask whether your system is good or bad. It asks whether you know what you have.
Some of the questions are technical. Do your tokens follow DTCG format? Is your accessibility information machine-readable? Do you have component metadata exposed as structured data?
Some are more about awareness. Do you know which files are canonical? Do you have explicit exception handling? Can you describe where your system is weakest?
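For anyone unfamiliar with the first of those technical checks: the DTCG draft specification gives tokens a predictable shape, with `$value` and `$type` on each leaf. A small sketch, with a made-up token and value:

```python
# Sketch: a token in the shape the DTCG draft specification uses
# ($value, $type, optional $description). The token name and value
# here are made up for illustration.

tokens = {
    "color": {
        "border": {
            "critical": {
                "$value": "#b3261e",
                "$type": "color",
                "$description": "Border colour for error states.",
            }
        }
    }
}

def is_token(node) -> bool:
    """A leaf counts as a token if it carries a $value (per the draft)."""
    return isinstance(node, dict) and "$value" in node
```

That predictability is the payoff: a tool (or an AI) can walk the tree and know, without guessing, which nodes are tokens and which are just groups.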
The output is a map showing where you’re solid, where you’re not, and what would make the biggest difference if you addressed it.
If you want to try it, head to designsystemsforai.com. It should take about ten minutes. Once done, you’ll get your results immediately, and if you enter your email, I’ll send specific recommendations based on where you landed.
Honesty is a practice, not a destination
When I wrote about rebuilding a 180SX, I talked about how every rebuild starts with confronting what’s actually broken, not what you hoped was true. You find rust where you expected solid metal. You find bodge jobs from previous owners who had more ambition than skill. You discover dependencies you didn’t know existed.
Design systems work the same way. The job isn’t getting to “done”. The job is staying honest about where you are.
AI doesn’t need your system to be finished. It needs you to be clear about what you’re working with.
The good news is that clarity costs nothing. You can start today, without changing a single component. Just look at your system and write down what’s actually there. The gaps. The inconsistencies. The things you’d warn a new hire about.
That document, rough as it might be, is where AI readiness actually begins.
Thanks for reading! Subscribe for free to receive new posts directly in your inbox.
This article is also available on Medium, where I share more posts like this. If you’re active there, feel free to follow me for updates.
I’d love to stay connected – join the conversation on X and Bluesky, or connect with me on LinkedIn to talk design systems, digital products, and everything in between. If you found this useful, sharing it with your team or network would mean a lot.