Composability contracts for AI workflows

Composability contracts are machine-readable JSON specifications that define boundaries and rules for AI agent behavior. They make implicit rules explicit, preventing agents from drifting into chaos while still allowing bounded creativity.

Core structure

Every contract has five elements:

  • Locked invariants - what can never change (e.g., only 3 button variants allowed)
  • Allowed variations - bounded creativity within constraints
  • Composition rules - how pieces fit together (e.g., max 1 primary button per screen)
  • Forbidden patterns - explicit no-nos with reasons
  • Validation logic - machine-checkable assertions

Example: Button contract

{
  "componentType": "Button",
  "contract": {
    "locked": {
      "borderRadius": "8px",
      "fontFamily": "Inter",
      "fontSize": "14px"
    },
    "allowedVariants": {
      "intent": {
        "type": "enum",
        "values": ["primary", "secondary", "danger"]
      },
      "size": {
        "type": "enum",
        "values": ["small", "medium", "large"]
      }
    },
    "forbidden": {
      "customColors": "Use intent instead",
      "newVariants": "Only primary, secondary, danger allowed"
    },
    "compositionRules": {
      "primaryButtonsPerScreen": {
        "max": 1,
        "reason": "Multiple primary actions confuse users"
      }
    }
  }
}

This prevents AI from inventing 47 button variants. Valid: <Button intent="primary">. Rejected: <Button color="#FF69B4"> or two primary buttons on one screen.
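A minimal enforcement sketch in Python, assuming the contract is loaded into a dict that mirrors the JSON above (the function names `validate_button` and `validate_screen` are illustrative, not from the source):

```python
# Hedged sketch: enforce the Button contract's locked values, allowed
# variants, and the max-1-primary composition rule.

BUTTON_CONTRACT = {
    "locked": {"borderRadius": "8px", "fontFamily": "Inter", "fontSize": "14px"},
    "allowedVariants": {
        "intent": ["primary", "secondary", "danger"],
        "size": ["small", "medium", "large"],
    },
}

def validate_button(props: dict) -> list[str]:
    """Return violations for one button; an empty list means contract-valid."""
    errors = []
    for key, value in props.items():
        if key in BUTTON_CONTRACT["locked"]:
            if value != BUTTON_CONTRACT["locked"][key]:
                errors.append(f"{key} is locked to {BUTTON_CONTRACT['locked'][key]}")
        elif key in BUTTON_CONTRACT["allowedVariants"]:
            if value not in BUTTON_CONTRACT["allowedVariants"][key]:
                errors.append(f"{key}={value!r} is not an allowed variant")
        else:
            errors.append(f"{key} is not part of the contract (forbidden)")
    return errors

def validate_screen(buttons: list[dict]) -> list[str]:
    """Composition rule: at most one primary button per screen."""
    primaries = sum(1 for b in buttons if b.get("intent") == "primary")
    if primaries > 1:
        return ["Multiple primary actions confuse users (max 1 per screen)"]
    return []
```

With this, intent="primary" produces no errors, while color="#FF69B4" or a second primary button on the screen produces a named violation.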

Example: GTD Task Processing contract

{
  "contract": {
    "locked": {
      "contexts": ["@computer", "@phone", "@errands", "@home", "@office", "@waiting"],
      "priorities": ["critical", "high", "normal", "low"]
    },
    "compositionRules": {
      "projectNesting": {
        "maxDepth": 2,
        "reason": "GTD keeps projects flat for clarity"
      },
      "dueDates": {
        "allowedFor": "time-specific commitments only",
        "validation": "Must have external consequence if missed"
      }
    },
    "processingRules": {
      "twoMinuteRule": {
        "condition": "estimatedTime < 2min",
        "action": "do_immediately",
        "noTaskCreated": true
      }
    }
  }
}

Prevents: custom contexts like @urgent-calls, fake due dates on “brainstorm ideas”, deeply nested project hierarchies.
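The same pattern as a Python sketch, assuming tasks carry an estimated time in minutes and a project nesting depth (that numeric representation is an assumption, not part of the source contract):

```python
# Hedged sketch of GTD contract enforcement: locked contexts, flat
# projects, and the two-minute processing rule.

LOCKED_CONTEXTS = ["@computer", "@phone", "@errands", "@home", "@office", "@waiting"]
MAX_PROJECT_DEPTH = 2

def process_item(context: str, estimated_minutes: float, project_depth: int) -> str:
    """Run contract checks first, then the two-minute rule."""
    if context not in LOCKED_CONTEXTS:
        return f"rejected: {context} is not a locked context"
    if project_depth > MAX_PROJECT_DEPTH:
        return "rejected: GTD keeps projects flat for clarity"
    if estimated_minutes < 2:
        return "do_immediately"  # twoMinuteRule: no task is created
    return "task_created"
```

A custom context like @urgent-calls or a project nested three levels deep is rejected before any task exists, which is exactly the point: the agent cannot drift past the contract.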

The key insight

“UI contracts exist whether you define them or not” - if not explicitly defined, they emerge accidentally through framework defaults, the last shipper’s decisions, or whoever shouts loudest in design review. Making them explicit and machine-readable replaces those accidental boundaries with hard, checkable ones.

Beyond UI: agentic workflows

The pattern extends to any agentic workflow:

  • GTD Task Processing - locked contexts (@computer, @phone), max project nesting depth, the 2-minute rule, no fake due dates
  • GTD Weekly Review - enforced step order, prevents skipping uncomfortable steps, inbox-to-zero validation
  • Email Triage - 5 folders only, no custom folder creation, defined response templates
  • Meeting Scheduling - protected deep work blocks, max meetings per day, buffer times
  • Research Synthesis - source quality hierarchy, required sections, confidence scoring
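Each of these reduces to the same check. Taking Email Triage as an example (the five folder names below are hypothetical, invented for illustration; the source only says "5 folders only"):

```python
# Hedged sketch: an Email Triage contract with a fixed folder set and
# no custom folder creation. Folder names are illustrative assumptions.

TRIAGE_FOLDERS = ["action", "waiting", "reference", "someday", "archive"]

def file_email(folder: str) -> str:
    """File into a contract folder, or reject the attempt."""
    if folder not in TRIAGE_FOLDERS:
        return f"rejected: {folder} - no custom folder creation allowed"
    return f"filed: {folder}"
```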

Integration with skills, commands, hooks

Contracts become the foundation layer that validates everything:

USER INTERFACE (Commands, Natural Language)
           ↓
    COMMAND LAYER (/capture, /process, /review)
           ↓
     SKILL LAYER (capabilities)
           ↓
     HOOK LAYER (event triggers)
           ↓
   CONTRACT LAYER (rules, boundaries, constraints)

Every layer is validated against the contracts before it executes: skills are bound by contracts, commands compose skills, hooks trigger skills and commands, and every composition is checked for validity.
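One way to wire this up, sketched in Python: wrap each skill so the contract layer always runs first (`with_contract`, the capture skill, and its contract are all illustrative names, not from the source):

```python
# Hedged sketch: contract-first layering. A skill only executes if its
# payload passes the contract check; violations surface as clear errors.

from typing import Callable

def with_contract(validate: Callable[[dict], list[str]],
                  skill: Callable[[dict], str]) -> Callable[[dict], str]:
    """Return the skill guarded by the contract layer."""
    def guarded(payload: dict) -> str:
        errors = validate(payload)
        if errors:
            raise ValueError("; ".join(errors))  # debuggable, named violation
        return skill(payload)
    return guarded

# Example: a /capture command composed from a capture skill
def capture_skill(payload: dict) -> str:
    return f"captured: {payload['text']}"

def capture_contract(payload: dict) -> list[str]:
    return [] if payload.get("text") else ["capture requires non-empty text"]

capture_command = with_contract(capture_contract, capture_skill)
```

The composition itself carries the guarantee: no path through the command reaches the skill without passing the contract.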

Why this matters

  • Composability - contracts ensure all compositions are valid
  • Consistency - one source of truth, no drift between manual and automated actions
  • Evolvability - change contract → entire system adapts
  • Debuggability - clear error messages: “Note has 1 link, minimum required: 3”
  • Trust in automation - agents can’t violate boundaries, enabling more aggressive automation

This feels like a missing piece in agentic systems - like OpenAPI for agent behavior.