Figma and LLMs
Your design system lives in Figma, your LLM reads it directly, and your developers stop guessing what color value brand-primary actually is. One source of truth!
I believed in the pitch enough to spend a few months trying to make it real.
The start
It started with frustration. Our design system and our codebase were slowly diverging. Tokens getting renamed in Figma but not in code. Components built in Angular that no longer matched the Figma specs. A growing tax on every design handoff.
The idea is that instead of describing your design system to an AI in a prompt, the AI can just... read it. Pull a component's variants. Look up a token value. Traverse a frame and understand its structure.
I'm in
The first hurdle was purely technical. Getting Claude Code talking to the Figma MCP server turned into a longer session than expected: the server was running and the HTTP connection worked, but the SSE transport kept failing.
Eventually it resolved. And once it did, the scope of what was possible got genuinely interesting.
My bet
Can you build an agentic workflow that fetches elements from a design system and creates clickable prototypes with 100% design fidelity and no middle layer?
The pipeline I sketched out had four agents: a token agent that reads colors, spacing, and typography from Figma variables; a component agent that maps Figma components to their code counterparts; a layout agent that parses frames and translates structure into React; and an assembly agent that wires it together into something navigable.
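The four-agent split can be sketched as plain interfaces. Everything here is hypothetical scaffolding for the data flow; none of these names come from Figma's API or the MCP SDK:

```typescript
// Hypothetical shapes for the four-agent pipeline; names are illustrative.
interface DesignToken { name: string; value: string }

interface TokenAgent { readTokens(fileId: string): Promise<DesignToken[]> }
interface ComponentAgent { mapComponent(figmaName: string): string | undefined }
interface LayoutAgent { frameToReact(frameId: string): Promise<string> }
interface AssemblyAgent { assemble(pages: string[]): string }

// A stub token agent standing in for a real Figma variables query.
const tokenAgent: TokenAgent = {
  async readTokens(_fileId) {
    return [{ name: "color-brand-primary", value: "#1a56db" }]
  },
}

tokenAgent.readTokens("demo-file").then((tokens) => {
  console.log(tokens[0].name) // canonical token name, not a guess
})
```

The point of the split is that each agent has one narrow question to answer, which keeps the context each one needs small.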
The token layer turned out to be the easiest part and the most immediately valuable. Agents get canonical values instead of hallucinated approximations. One small thing, but it fixes a class of errors immediately.
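A minimal sketch of what the token layer buys you: flattening Figma-style variables into CSS custom properties so the values an agent emits are canonical. The input shape is a simplified stand-in, not Figma's real variables schema:

```typescript
// Simplified stand-in for a Figma variables payload.
type FigmaVariable = { name: string; value: string }

// Turn "color/brand/primary" into "--color-brand-primary: #1a56db;".
function toCssCustomProperties(vars: FigmaVariable[]): string {
  return vars
    .map((v) => `  --${v.name.replace(/\//g, "-").toLowerCase()}: ${v.value};`)
    .join("\n")
}

const css = toCssCustomProperties([
  { name: "color/brand/primary", value: "#1a56db" },
  { name: "spacing/md", value: "16px" },
])
console.log(`:root {\n${css}\n}`)
```

An agent that reads from this output can only produce values that exist in the design system, which is exactly the failure mode it removes.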
The component layer is where it gets harder. For this to work, agents need to know not just that a component exists, but what props it takes, what variants it has, and how it's meant to be used. That context doesn't live in Figma alone; it lives in documentation and in the accumulated decisions of the team.
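One way to picture the missing context: a registry entry that merges what Figma knows (the component name) with what only the team knows (props, variants, usage rules). All field names here are hypothetical:

```typescript
// A registry entry combining Figma metadata with team knowledge.
interface ComponentEntry {
  figmaName: string
  codeName: string
  props: Record<string, string[]> // prop name -> allowed values
  usage: string // comes from team docs, not from Figma
}

const registry: ComponentEntry[] = [
  {
    figmaName: "Button",
    codeName: "AppButton",
    props: { variant: ["primary", "secondary", "ghost"], size: ["sm", "md", "lg"] },
    usage: "One primary button per view; use ghost for tertiary actions.",
  },
]

function lookup(figmaName: string): ComponentEntry | undefined {
  return registry.find((e) => e.figmaName === figmaName)
}
```

The `usage` field is the part no Figma query can give you, and it's the part an agent most often gets wrong without it.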
AI isn't magic
The deeper you go, the more it becomes clear that building for AI integration is about building a design system that's actually coherent.
Semantic naming matters! An agent can work with color-action-primary. It cannot do anything useful with Blue 500. Auto layout isn't just for responsive design; it describes intent that an agent can read. Component naming, variant naming, file structure: all of it becomes relevant when an LLM is the one trying to navigate it.
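The Blue 500 problem in miniature: a semantic token resolves through an alias layer the agent can follow, while a raw palette name carries no intent and resolves to nothing. The names and values here are illustrative:

```typescript
// Raw palette: values with no intent attached.
const palette: Record<string, string> = { "blue-500": "#1a56db" }
// Semantic layer: intent mapped onto the palette.
const semantic: Record<string, string> = { "color-action-primary": "blue-500" }

function resolve(token: string): string | undefined {
  const alias = semantic[token]
  return alias ? palette[alias] : undefined
}

console.log(resolve("color-action-primary")) // -> "#1a56db"
console.log(resolve("Blue 500")) // -> undefined: no intent, no mapping
```

The agent asking for "the primary action color" can follow the first path; there is no question it can ask that leads it to the second.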
Fundamentally: a skill is documentation an agent reads once per session. An MCP server is a live source of truth an agent queries on demand. For a product that's actively evolving, the difference is huge! You can write the most detailed prompt imaginable describing your design system, and it will be out of date by next sprint. An MCP server that reads from your Figma file is never out of date (unless you decide it's out of date).
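The staleness difference fits in a few lines. The snapshot below stands in for a prompt or skill written once, and `fetchToken` is a hypothetical stand-in for a live Figma query; neither is a real API:

```typescript
// Stand-in for the live Figma file.
const figmaFile = new Map([["color-brand-primary", "#1a56db"]])

// A "skill": a snapshot written once. It goes stale.
const skillSnapshot = Object.fromEntries(figmaFile)

// An MCP-style tool: re-reads the source on every call.
function fetchToken(name: string): string | undefined {
  return figmaFile.get(name)
}

figmaFile.set("color-brand-primary", "#1e40af") // design team retunes the value
console.log(skillSnapshot["color-brand-primary"]) // "#1a56db" -- stale
console.log(fetchToken("color-brand-primary")) // "#1e40af" -- current
```

That one stale read is the whole argument for a live source of truth.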
Honestly, what I ended up with was more useful than a working prototype generator: a clearer picture of what it actually takes to make a design system machine readable. Semantic tokens, meaningful component naming, Claude wiring Figma components to code, and a governance model that keeps all of it honest as the system evolves.
The goal of Figma as a single source of truth is achievable. But the path there runs through decisions your design team makes every day, long before any LLM gets involved.