How to assemble prompts, skills, and agents from the catalog into production-grade workflows — including IO contracts, handoff patterns, and error handling.
Every product in the catalog is a machine-readable file (Markdown, JSON, or YAML) with a defined structure. Products come in five types — prompts, skills, agents, utilities, and docs — and each type plays a distinct role in a workflow stack.
Every skill and agent product ships with an IO contract — a JSON schema specifying the exact inputs it expects and the outputs it produces. Before connecting two products, verify their contracts are compatible.
```json
{
  "input_schema": {
    "required": ["context", "task"],
    "optional": ["tone", "max_tokens", "examples"],
    "types": {
      "context": "string",
      "task": "string",
      "tone": "enum:professional|casual|technical",
      "max_tokens": "integer"
    }
  }
}
```

Replace {{placeholder}} tokens in prompt files before passing them to your LLM. All tokens are listed in the product's manifest section.
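A minimal sketch of that token replacement in Python. The `render_prompt` helper is illustrative, not part of the catalog; it fails loudly on any placeholder left unfilled rather than sending a broken prompt downstream:

```python
import re

def render_prompt(template: str, values: dict) -> str:
    """Replace {{placeholder}} tokens; raise if any token has no value."""
    def substitute(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing value for placeholder: {key}")
        return str(values[key])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = render_prompt(
    "Summarize {{context}} in a {{tone}} tone.",
    {"context": "Q3 sales notes", "tone": "professional"},
)
# prompt == "Summarize Q3 sales notes in a professional tone."
```

Failing fast here surfaces a missing manifest value at assembly time instead of as a malformed prompt at inference time.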
Use a system prompt to establish context and persona, then invoke skills for discrete sub-tasks within the same session.
```
system_prompt.md → establishes role + constraints
        ↓ call
/skill-name {input} → discrete tool-use unit
        ↓ output
next_prompt.md → uses skill output as context
```

A top-level agent manifest routes sub-tasks to specialized skills. The agent decides which skill to call based on task type.
```
agent-manifest.md
├── task: "research" → research-skill.md
├── task: "draft"    → writer-skill.md
├── task: "review"   → qa-skill.md
└── task: "publish"  → publish-skill.md
```
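One way to sketch that routing in Python. The dispatch table mirrors the manifest tree above; the lambda bodies are stand-ins for real skill invocations:

```python
# Routing table mirroring the agent manifest: task type → skill handler.
# Handlers here are illustrative placeholders for actual skill calls.
SKILL_ROUTES = {
    "research": lambda payload: f"research-skill: {payload}",
    "draft":    lambda payload: f"writer-skill: {payload}",
    "review":   lambda payload: f"qa-skill: {payload}",
    "publish":  lambda payload: f"publish-skill: {payload}",
}

def route(task: str, payload: str) -> str:
    """Dispatch a sub-task to its skill; unknown tasks fail explicitly."""
    if task not in SKILL_ROUTES:
        raise ValueError(f"no skill registered for task: {task}")
    return SKILL_ROUTES[task](payload)
```

Raising on unknown task types (instead of guessing a default skill) is what keeps the HALLUCINATED_TOOL failure mode visible rather than silent.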
Chain multiple system prompts in sequence, passing the output of each as the input context of the next. Use a utility schema to validate each handoff.
```
step-1-intake.md → structured JSON output
        ↓ validate with io-schema.json
step-2-analysis.md → uses JSON as context
        ↓ validate
step-3-output.md → final formatted response
```
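The chain above can be sketched as a small Python driver, assuming each step returns a dict. The step functions here are illustrative stand-ins for actual prompt invocations, and the required-field lists play the role of io-schema.json:

```python
def run_chain(steps, context: dict) -> dict:
    """Run steps in sequence; each step's output is validated against a
    required-field list before it becomes the next step's input."""
    for fn, required in steps:
        context = fn(context)
        missing = [k for k in required if k not in context]
        if missing:
            raise ValueError(f"handoff after {fn.__name__} missing: {missing}")
    return context

# Stand-ins for step-1-intake.md and step-2-analysis.md:
def intake(ctx):
    return {"status": "ok", "facts": ["raw notes parsed"]}

def analysis(ctx):
    return {"status": "ok", "summary": f"{len(ctx['facts'])} fact(s) analyzed"}

result = run_chain(
    [(intake, ["status", "facts"]), (analysis, ["status", "summary"])],
    {"raw": "intake text"},
)
```

Validating at every handoff turns a silent SCHEMA_MISMATCH mid-pipeline into an immediate, named failure.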
When passing output between components, use structured handoffs to avoid context drift.
Always request JSON output from intermediate steps. Parse and validate before passing downstream. Include a "status" field so downstream components know whether to proceed or escalate.
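A sketch of that parse-and-validate step in Python. The "ok"/"escalate" status values are an illustrative convention, not a catalog requirement:

```python
import json

def parse_handoff(raw: str) -> dict:
    """Parse an intermediate step's JSON output and act on its status
    field before passing the payload downstream."""
    payload = json.loads(raw)  # raises ValueError on malformed JSON
    status = payload.get("status")
    if status == "ok":
        return payload
    if status == "escalate":
        raise RuntimeError(f"escalated: {payload.get('reason', 'unspecified')}")
    raise ValueError(f"unknown handoff status: {status!r}")
```

Downstream components then only ever see payloads that parsed cleanly and self-reported success.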
Large pipelines can overflow the context window. Pass only the essential structured output from each step, not the full conversation history. Each component should be self-contained.
Each component in a chain should stay within its defined role. Use the system prompt to hard-constrain behavior. If a downstream component needs the upstream role, pass it as a context string, not as a new system prompt.
End every prompt with an explicit output-format instruction, e.g. "Respond with a JSON object matching {field: type, ...}. No commentary, no markdown wrapping."
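Even with that instruction, models sometimes wrap JSON in a code fence anyway; a defensive parser on the receiving side (illustrative, not part of the catalog) can strip the wrapping before validation:

```python
import json
import re

def parse_model_json(text: str) -> dict:
    """Parse model output as JSON, tolerating an accidental ```json fence."""
    text = text.strip()
    fenced = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    return json.loads(text)
```

This keeps a cosmetic formatting slip from registering as a SCHEMA_MISMATCH failure.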
SCHEMA_MISMATCH: Output from upstream component does not match downstream input schema. Fix: add a validation step between components or relax the downstream schema.
CONTEXT_OVERFLOW: Combined context length exceeds the model limit. Fix: trim history to essential structured data, or run a summarization prompt before the next step.
HALLUCINATED_TOOL: Model invokes a skill that does not exist or uses wrong syntax. Fix: explicitly list available skills in the system prompt and use strict tool-calling mode.
RETRY_LOOP: Component fails and retries indefinitely. Fix: set explicit retry limits (typically 2–3) with exponential backoff, then escalate to human-in-the-loop or fallback response.
DISCLOSURE_MISSING: Trading product invoked without risk disclosure acknowledgement. Fix: gate all trading skill invocations behind a disclosure check — see the Trading Risk Playbook.
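The RETRY_LOOP fix above can be sketched as a small Python wrapper (names are illustrative): bounded retries with exponential backoff, then an escalation payload instead of an endless loop:

```python
import time

def call_with_retries(fn, max_retries: int = 3, base_delay: float = 1.0):
    """Call a component, retrying on failure with exponential backoff
    (base_delay, 2*base_delay, 4*base_delay, ...). After the last
    attempt, return an escalation payload rather than retrying forever."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                return {"status": "escalate", "reason": "retry limit reached"}
            time.sleep(base_delay * (2 ** attempt))
```

The escalation payload reuses the "status" convention from the structured-handoff section, so a downstream component or human-in-the-loop step can pick it up without special casing.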