From Design Systems to Design Protocols: Design Context Protocol (DCP)

I've been diving deep into the world of MCP (Model Context Protocol) servers recently: the backend infrastructure that's making applications AI-ready. Every major company seems to be building its own flavor of MCP server, structuring data, exposing APIs, and giving LLMs the context they need to act intelligently within a product.

This backend movement is laying the technical foundation for a new generation of AI-native experiences. It’s setting things up for scale, flexibility, and long-term adaptability.


But as a designer, the real question isn't just:

How do we leverage that?

The deeper question is:

How do we rethink interaction design in this new paradigm?


This isn't about using AI as a tool. It's about applying first-principles thinking to design in an AI-native world.

If the backend has a Model Context Protocol, maybe what the front-end needs is a Design Context Protocol.


Design Context Protocol: More Than Just Components

Design systems today focus heavily on how things look — spacing, type scales, color tokens, component libraries. But in an AI-first product, that’s not enough.


Every atomic element in a system needs richer context. It’s not just about what a button looks like — it’s about when to use it, why it exists, and what kind of feedback or confirmation it should trigger. A font style should carry embedded meaning: bold might mean high urgency, while muted gray might imply a secondary action.
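
For example, a type token could ship with its meaning attached. Here's a minimal sketch; dcpMeaning, urgency, and designOpinion are hypothetical field names, not an existing standard:

// Illustrative only: tokens that carry their "why", not just their values.
export const typeTokens = {
  // Bold type signals high urgency, so the token says so explicitly.
  headingBold: {
    fontWeight: 700,
    dcpMeaning: {
      urgency: "high",
      designOpinion: "Reserve for moments that demand immediate attention."
    }
  },
  // Muted gray implies a secondary action that should not compete for focus.
  mutedGray: {
    color: "#6B7280",
    dcpMeaning: {
      urgency: "low",
      designOpinion: "Use for supporting content and secondary actions."
    }
  }
};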

As designers, we think deeply about when to slow a user down, when to nudge them forward, and how to communicate intent with clarity.


A Design Context Protocol gives us a way to codify those opinions and share them with machines.

Embedding Opinionated Design Context

Imagine a delete button. Should it confirm before deleting? Should it use a red color? Should it be the most prominent button on the screen? As designers, we know the answer is yes — and now we can teach the AI that.

// A delete button that carries its design opinion with it, as machine-readable context.
export const DestructiveActionButton = ({ children, ...props }) => {
  return (
    <button
      aria-label="Delete item"
      data-dcp='{
        "intent": "destructive",
        "confirmationRequired": true,
        "visualPriority": "high",
        "designOpinion": "Always ask for confirmation. Use red tone. This action is irreversible."
      }'
      {...props}
    >
      {children}
    </button>
  );
};

This example shows how we can start encoding not just the look of a component, but its purpose and philosophy. You're embedding your design intent directly into the UI code.
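
On the consuming side, any agent or runtime can read that attribute back and honor the opinion before acting. Here's a minimal sketch, assuming a browser environment; readDesignContext, actOnButton, and askUser are illustrative names, not part of any real API:

// The shape of the embedded context from the component above.
type DesignContext = {
  intent?: string;
  confirmationRequired?: boolean;
  visualPriority?: string;
  designOpinion?: string;
};

// Parse the data-dcp attribute back into structured design context.
function readDesignContext(el: HTMLElement): DesignContext | null {
  const raw = el.getAttribute("data-dcp");
  return raw ? (JSON.parse(raw) as DesignContext) : null;
}

// Before acting on a button, check its embedded opinion and obey it.
async function actOnButton(
  button: HTMLButtonElement,
  askUser: (message: string) => Promise<boolean>
): Promise<void> {
  const dcp = readDesignContext(button);
  if (dcp?.intent === "destructive" && dcp.confirmationRequired) {
    const confirmed = await askUser(dcp.designOpinion ?? "Are you sure?");
    if (!confirmed) return; // the designer said: never delete silently
  }
  button.click();
}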

Real-World Example: Handling Refund Frustration with Empathy

Scenario: A user just searched "refund" and is clicking around in frustration. Instead of showing a cold help article, the AI responds with empathy, clarity, and a direct action.

{
  "component": "SupportMessageCard",
  "useCase": "refund-frustration",
  "emotionalState": "frustrated",
  "designContext": {
    "tone": "empathetic",
    "colorTheme": "brand.calmBlue",
    "icon": "emoji-sparkle",
    "fontWeight": "medium"
  },
  "content": {
    "headline": "We're here to help you with that refund",
    "summary": "We understand this can be frustrating. Here's a quick summary of your options.",
    "action": {
      "label": "Start refund process",
      "type": "primary",
      "onClick": "handleRefundStart()"
    },
    "reference": {
      "label": "Read our refund policy",
      "link": "https://yourbrand.com/help/refunds"
    }
  },
  "aiMetadata": {
    "detectEmotion": true,
    "humanEscalation": true,
    "confidenceThreshold": 0.7
  }
}

This interaction feels caring and proactive because the AI has the context it needs to recognize the emotion, choose the right tone, and offer a helpful resolution directly in the moment.
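
Under the hood, the aiMetadata block is what tells the AI when not to improvise. Here's a small sketch of how those fields might be honored; the function name and return values are illustrative assumptions:

// Names mirror the aiMetadata fields in the payload above.
type AiMetadata = {
  detectEmotion: boolean;
  humanEscalation: boolean;
  confidenceThreshold: number;
};

// If emotion detection is on but confidence is below the designer-set
// threshold, hand off to a human rather than risk the wrong tone.
function routeSupportResponse(
  emotionConfidence: number,
  meta: AiMetadata
): "show-card" | "escalate" {
  if (meta.detectEmotion && meta.humanEscalation && emotionConfidence < meta.confidenceThreshold) {
    return "escalate";
  }
  return "show-card";
}

// 0.55 < 0.7, so this case escalates instead of guessing.
routeSupportResponse(0.55, { detectEmotion: true, humanEscalation: true, confidenceThreshold: 0.7 });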

Logical Context: Acting on Low AI Confidence

Scenario: The AI filled in some data during signup, but isn't confident about it. Instead of pushing it through, it asks for a human to double-check.

{
  "component": "AIConfidenceBanner",
  "useCase": "data-auto-fill",
  "logicalContext": {
    "aiConfidence": 0.42,
    "threshold": 0.6,
    "action": "suggestHumanReview"
  },
  "designContext": {
    "tone": "neutral",
    "colorTheme": "brand.warningYellow",
    "icon": "alert-circle",
    "fontWeight": "medium"
  },
  "content": {
    "headline": "We’re not 100% sure about this one",
    "summary": "The AI filled this data automatically, but the confidence is below the safe threshold. You may want to double-check.",
    "action": {
      "label": "Review & confirm",
      "type": "primary",
      "onClick": "handleManualReview()"
    }
  },
  "aiMetadata": {
    "logConfidenceScore": true,
    "allowOverride": true
  }
}

Designers should define when AI should step back, not just when it should step forward. DCP allows that decision-making to be codified and visualized.
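
As a sketch, the step-back rule in logicalContext above reduces to a single designer-owned comparison; shouldSuggestHumanReview is an illustrative name:

// One comparison decides whether the AI commits the value or defers.
function shouldSuggestHumanReview(aiConfidence: number, threshold: number): boolean {
  return aiConfidence < threshold;
}

// 0.42 < 0.6, so the AIConfidenceBanner is rendered instead of auto-committing.
shouldSuggestHumanReview(0.42, 0.6); // true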

AI Design Decision Tree: Teaching AI How to Choose Components

Think of this as an AI router for your design system. Given a situation, it picks the right UI, tone, and fallback plan. (A sketch of the matching logic follows the two examples below.)

Onboarding Drop-off

Scenario: A user has been stuck on a form for 45 seconds. Offer help in context instead of letting them drop.

{
  "id": "onboarding-dropoff",
  "match": {
    "scenario": "onboarding",
    "step": "form-completion",
    "inactivitySeconds": { "gte": 45 },
    "emotionalState": "neutral"
  },
  "action": {
    "component": "HelpTooltipCard",
    "tone": "encouraging",
    "colorTheme": "brand.infoBlue",
    "content": {
      "headline": "Need a hand?",
      "summary": "Looks like you’ve been here for a bit — want us to fill this out with example data?",
      "action": {
        "label": "Fill with examples",
        "onClick": "handleFillWithExamples()"
      },
      "reference": {
        "label": "Skip and do this later",
        "link": "https://yourproduct.com/docs/onboarding-options"
      }
    }
  }
}

Smart Upsell

Scenario: A user on the Starter plan tries to access advanced reporting multiple times. Let’s suggest an upgrade politely, with context.

{
  "id": "upsell-smart-trigger",
  "match": {
    "scenario": "feature-locked",
    "featureId": "advanced-reporting",
    "userTier": "starter",
    "usageFrequency": { "gte": 3 }
  },
  "action": {
    "component": "SmartUpsellCard",
    "tone": "helpful",
    "colorTheme": "brand.premiumGold",
    "offer": {
      "type": "trial-extension",
      "label": "Try Pro free for 14 days",
      "conditions": {
        "notUpgradedBefore": true
      }
    },
    "content": {
      "headline": "You’re ready for more powerful reports",
      "summary": "You’ve explored advanced reports a few times — this feature is available on the Pro plan.",
      "action": {
        "label": "Upgrade to Pro",
        "onClick": "handleUpgrade()"
      },
      "reference": {
        "label": "See all plan features",
        "link": "https://yourproduct.com/plans"
      }
    }
  }
}
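
To make the router concrete, here's a minimal sketch of how a matcher could evaluate these rules; the types and function names are illustrative, and gte is the only operator the examples need:

// A rule matches when every condition in "match" holds for the situation.
type Condition = string | number | { gte: number };
type Rule = {
  id: string;
  match: Record<string, Condition>;
  action: { component: string };
};

function matchesCondition(value: unknown, cond: Condition): boolean {
  if (typeof cond === "object") {
    return typeof value === "number" && value >= cond.gte;
  }
  return value === cond;
}

// Return the action of the first rule the situation satisfies, or null.
function route(situation: Record<string, unknown>, rules: Rule[]): Rule["action"] | null {
  const rule = rules.find((r) =>
    Object.entries(r.match).every(([key, cond]) => matchesCondition(situation[key], cond))
  );
  return rule ? rule.action : null;
}

// The onboarding drop-off rule fires for this situation:
route(
  { scenario: "onboarding", step: "form-completion", inactivitySeconds: 52, emotionalState: "neutral" },
  [
    {
      id: "onboarding-dropoff",
      match: {
        scenario: "onboarding",
        step: "form-completion",
        inactivitySeconds: { gte: 45 },
        emotionalState: "neutral"
      },
      action: { component: "HelpTooltipCard" }
    }
  ]
); // -> { component: "HelpTooltipCard" }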

Designers at the Helm of the AI Shift

This next technological shift isn't just about AI writing code or generating UIs. It's about crafting experiences that are intelligent, intentional, and trustworthy.


And designers are in the driver’s seat.

Why? Because we bring something AI can't learn from scraping the web: Taste. Judgment. Opinion.

We know when something feels right. We know when friction is necessary. We know when a moment needs pause, or when it should feel instant. These aren’t decisions you make with data alone — they’re decisions rooted in human understanding.


As the backend world builds context protocols like MCP to structure logic for machines, the design world needs its counterpart — a Design Context Protocol (DCP) that encodes experience, intent, and philosophy.

It’s not about feeding the AI more data.
It’s about teaching it what matters.