How to install and invoke SKILL.md modules, handle tool routing, validate IO, and wire error retry patterns into your agent stack.
A SKILL.md file is a self-contained, callable unit of AI capability. Skills are the composable building blocks of agent workflows: a single agent might call 5–10 skills in sequence or in parallel, depending on the task. Each skill file defines its metadata, invocation syntax, input and output schemas, error conditions, and system prompt:
---
skill_id: AA-SKL-001
name: skill-name
version: 1.0.0
compatibility: claude-code, cursor, openai-functions, langchain-tools
---
# SKILL: skill-name
## Invocation
```
/skill-name {required_param} [optional_param]
```
## Input Schema
| Parameter | Type | Required | Description |
|-----------|--------|----------|--------------------|
| required_param | string | yes | Description of what this is |
| optional_param | string | no | Defaults to X if omitted |
## Output Schema
```json
{
"status": "success | error | retry",
"result": "...",
"metadata": { "confidence": 0.0–1.0, "tokens_used": int }
}
```
## Error Conditions
- MISSING_PARAM: required_param not provided
- INVALID_TYPE: param type mismatch
- TIMEOUT: execution exceeded 30s
## System Prompt
[The actual skill instructions for the LLM]

To install skills globally, copy the `.md` files into Claude Code's skills directory:

```bash
# Single skill
cp your-skill.md ~/.claude/skills/

# Multiple skills from a bundle
cp bundle-folder/*.md ~/.claude/skills/

# Verify installation
ls ~/.claude/skills/
```
Skills are auto-discovered by Claude Code on startup. Restart the session after adding new skills.
```bash
# Install to project scope (takes precedence over global)
mkdir -p .claude/skills/
cp your-skill.md .claude/skills/

# Skills in .claude/skills/ are project-scoped
# Only available when Claude Code runs from this directory
```
```
# Basic call
/skill-name "your input"

# With named params
/skill-name context="your context" task="specific task"
```
```
# Output of one skill as input to next
result = /skill-a "input"
/skill-b context={result.output} task="next step"
```

```
# Route to different skills based on task type
if task_type == "research":
    /research-skill {input}
elif task_type == "draft":
    /writer-skill {input}
```

Always validate skill outputs before passing them to the next step. Each skill ships with a JSON schema you can use for programmatic validation.
```python
import json

import jsonschema

output_schema = {
    "type": "object",
    "required": ["status", "result"],
    "properties": {
        "status": {"type": "string", "enum": ["success", "error", "retry"]},
        "result": {"type": "string"},
        "metadata": {"type": "object"}
    }
}

skill_output = json.loads(raw_output)
jsonschema.validate(skill_output, output_schema)  # raises ValidationError if invalid

if skill_output["status"] == "error":
    handle_error(skill_output)
elif skill_output["status"] == "retry":
    retry_with_backoff(skill_call, max_retries=2)
else:
    pass_to_next_step(skill_output["result"])
```

Transient failures such as network timeouts, rate limits, or brief model unavailability are safe to retry. Retry up to 3 times with exponential backoff (1s, 2s, 4s) and log each attempt.
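A minimal sketch of such a backoff helper, assuming `skill_call` is a zero-argument callable that raises on transient failure and returns the parsed skill output on success (the function name echoes the `retry_with_backoff` used above, but this implementation is illustrative, not part of any skill runtime):

```python
import logging
import time


def retry_with_backoff(skill_call, max_retries=3, base_delay=1.0):
    """Retry a flaky skill call with exponential backoff (1s, 2s, 4s)."""
    for attempt in range(max_retries + 1):
        try:
            return skill_call()
        except Exception as exc:
            if attempt == max_retries:
                raise  # retries exhausted; surface the last error
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            logging.warning("attempt %d failed (%s); retrying in %.0fs",
                            attempt + 1, exc, delay)
            time.sleep(delay)
```

Only retry errors you believe are transient; as noted below, schema failures should be fixed upstream rather than retried.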
Schema validation failures mean the skill output doesn't match the expected schema. Do not retry blindly; identify which field is wrong and fix the upstream input or prompt.
Missing-skill errors occur when the model tries to invoke a skill that isn't installed. List all available skills in your system prompt, and add a fallback instruction: "If a skill is not available, describe what you would do instead."
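One way to keep that listing in sync with what is actually installed is to generate it from the skills directory itself. A sketch, assuming skills live as `*.md` files in the directory shown earlier; `skill_manifest` is a hypothetical helper, not an existing API:

```python
from pathlib import Path


def skill_manifest(skills_dir="~/.claude/skills"):
    """Build a system-prompt fragment listing every installed skill."""
    # One `/name` entry per installed skill file.
    names = sorted(p.stem for p in Path(skills_dir).expanduser().glob("*.md"))
    listing = "\n".join(f"- /{name}" for name in names)
    return (
        "Available skills:\n"
        f"{listing}\n"
        "If a skill is not available, describe what you would do instead."
    )
```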
A failure in step 3 should not silently corrupt steps 4–10. Design pipelines with checkpoints that verify state before proceeding. Use a status field in every handoff payload.
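The checkpoint idea can be sketched as a thin pipeline runner. `run_pipeline` and the step callables are hypothetical names; each step is assumed to accept the previous result and return a handoff payload shaped like the output schema above:

```python
def run_pipeline(steps, initial_input):
    """Run skill steps in sequence, verifying state at each checkpoint."""
    payload = initial_input
    for i, step in enumerate(steps, start=1):
        output = step(payload)
        # Checkpoint: verify the status field before handing off, so a
        # failure in step 3 cannot silently corrupt the steps after it.
        if output.get("status") != "success":
            raise RuntimeError(f"pipeline halted at step {i}: {output}")
        payload = output["result"]
    return payload
```

A real runner would route `error` and `retry` statuses into the handling shown earlier instead of halting unconditionally.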