Quick Integration

Get up and running in under 30 minutes. Copy-paste integration patterns for every supported platform.


Step 1 — Download and open your file

After purchase, download your .md file from the success page or your account. Open it in any text editor.

Every file has this structure at the top:

---
product_id: AA-XXX
name: Product Name
category: prompt | skill | agent | utility | doc
version: 1.0.0
compatibility: openai, anthropic, cursor, langchain
---

# Product Name

## System Prompt / Skill Definition / ...

Find and replace all {{placeholder}} tokens before using. Every required token is listed in the manifest section of the file.
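A quick way to confirm you caught every token before sending the prompt anywhere. This is a sketch; the token names and values below are made up for illustration:

```python
import re

# Stand-in for your .md file content; the token names are hypothetical.
text = "Summarize {{document}} for {{audience}}."

# List every {{placeholder}} still present in the file.
tokens = sorted(set(re.findall(r"\{\{(\w+)\}\}", text)))
print(tokens)  # ['audience', 'document']

# Fill them all in.
values = {"document": "the Q3 report", "audience": "executives"}
filled = re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], text)
print(filled)  # Summarize the Q3 report for executives.
```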

Anthropic — Claude API

Paste the file content as your system prompt. Works with all Claude models.

import anthropic
import pathlib

client = anthropic.Anthropic()
system_prompt = pathlib.Path("your-product.md").read_text()

message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=4096,
    system=system_prompt,
    messages=[{"role": "user", "content": "Your task here"}]
)
print(message.content[0].text)

For skill modules, use the tool_use feature. Pass the skill's IO schema as a tool definition and call it by name.
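For example, a skill whose manifest defines a single text input might be exposed like this. The tool name and schema here are hypothetical placeholders; copy the real IO schema from the skill file's manifest:

```python
# Hypothetical skill exposed as a Claude tool -- the name and
# input_schema should come from the skill file's manifest section.
skill_tool = {
    "name": "summarize_report",
    "description": "Summarize a report following the skill's instructions.",
    "input_schema": {
        "type": "object",
        "properties": {"report_text": {"type": "string"}},
        "required": ["report_text"],
    },
}

# Pass it on the same messages.create() call shown above:
#   client.messages.create(..., tools=[skill_tool])
# then check for message.stop_reason == "tool_use" in the response.
print(skill_tool["name"])
```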

Claude Code & Cursor — Skills

Skill (.md) files install directly into your IDE agent environment.

Claude Code
# Install
cp your-skill.md ~/.claude/skills/skill-name.md

# Invoke in any Claude Code session
/skill-name {your input here}
Cursor
# Add to .cursor/rules/ or reference in Cursor Rules
cp your-skill.md .cursor/rules/skill-name.md

# Or paste directly as a Cursor Rule in Settings → Rules

OpenAI — API & Assistants

Chat Completions API
from openai import OpenAI
import pathlib

client = OpenAI()
system_prompt = pathlib.Path("your-product.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Your task here"}
    ]
)
Assistants API
# Create an assistant with your product as instructions
assistant = client.beta.assistants.create(
    name="My Agent",
    instructions=system_prompt,  # your .md file content
    model="gpt-4o",
    tools=[{"type": "code_interpreter"}]
)

LangChain & LangGraph

from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
import pathlib

system_prompt = pathlib.Path("your-product.md").read_text()

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}")
])

model = ChatAnthropic(model="claude-opus-4-6")
chain = prompt | model

result = chain.invoke({"input": "Your task here"})

For LangGraph agents, assign the product as the system message for a specific node. Skill modules map naturally to LangGraph tool nodes.

Direct use — ChatGPT, Claude.ai

For no-code use: open the .md file, copy everything below the closing frontmatter divider (---), and paste it into a new conversation as the first message, or into a custom instructions / project instructions block.
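If you'd rather script that copy step, stripping the frontmatter is a single split. A sketch on an inline sample; in practice, read the text from your downloaded file:

```python
# Stand-in for the downloaded file; normally:
#   text = pathlib.Path("your-product.md").read_text()
text = """---
product_id: AA-XXX
name: Product Name
---

# Product Name

## System Prompt
Your instructions here.
"""

# Everything after the second --- is the usable content.
body = text.split("---", 2)[2].strip() if text.startswith("---") else text.strip()
print(body.splitlines()[0])  # "# Product Name"
```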

Next steps