Get up and running in under 30 minutes. Copy-paste integration patterns for every supported platform.
After purchase, download your .md file from the success page or your account. Open it in any text editor.
Every file has this structure at the top:
```markdown
---
product_id: AA-XXX
name: Product Name
category: prompt | skill | agent | utility | doc
version: 1.0.0
compatibility: openai, anthropic, cursor, langchain
---

# Product Name

## System Prompt / Skill Definition / ...
```
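If you script your setup, the manifest fields can be read with a few lines of standard-library Python. This is a minimal sketch, not an official parser; the field names follow the layout shown above:

```python
import re

def read_manifest(text: str) -> dict:
    """Parse the key: value fields between the first two --- lines."""
    match = re.match(r"---\s*\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}  # no frontmatter block found
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

sample = "---\nproduct_id: AA-XXX\nversion: 1.0.0\n---\n# Product Name\n"
print(read_manifest(sample)["product_id"])  # AA-XXX
```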
Find and replace all {{placeholder}} tokens before using. Every required token is listed in the manifest section of the file.
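Before wiring a file into an API call, it is worth verifying that no tokens were missed. A minimal check with the standard library (the `{{placeholder}}` syntax is taken from above; the example strings are illustrative):

```python
import re

def unfilled_placeholders(text: str) -> list[str]:
    """Return any {{placeholder}} tokens still present in the text."""
    return re.findall(r"\{\{\s*([\w.-]+)\s*\}\}", text)

draft = "You are a helpful assistant for {{company_name}}."
filled = "You are a helpful assistant for Acme Corp."
print(unfilled_placeholders(draft))   # ['company_name']
print(unfilled_placeholders(filled))  # []
```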
Paste the file content as your system prompt. Works with all Claude models.
```python
import pathlib

import anthropic

client = anthropic.Anthropic()
system_prompt = pathlib.Path("your-product.md").read_text()

message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=4096,
    system=system_prompt,
    messages=[{"role": "user", "content": "Your task here"}],
)
print(message.content[0].text)
```

For skill modules, use the tool_use feature. Pass the skill's IO schema as a tool definition and call it by name.
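As a sketch of that tool_use pattern: translate the skill's manifest into an Anthropic-style tool definition (name, description, and a JSON Schema under `input_schema`). The skill name and IO schema below are hypothetical; a real skill file ships its own:

```python
# Hypothetical skill metadata; a real skill file defines its own name and schema.
skill_name = "summarize-report"
skill_io_schema = {
    "type": "object",
    "properties": {
        "document": {"type": "string", "description": "Raw text to summarize"},
        "max_words": {"type": "integer", "description": "Length cap for the summary"},
    },
    "required": ["document"],
}

# Anthropic tool definitions pair a name/description with a JSON Schema input.
tool_definition = {
    "name": skill_name,
    "description": "Summarize a report according to the skill's instructions.",
    "input_schema": skill_io_schema,
}

# Passed as tools=[tool_definition] in client.messages.create(...),
# the model can then invoke the skill by name.
print(tool_definition["name"])  # summarize-report
```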
Skill (.md) files install directly into your IDE agent environment.
```shell
# Install
cp your-skill.md ~/.claude/skills/skill-name.md

# Invoke in any Claude Code session
/skill-name {your input here}
```

For Cursor:

```shell
# Add to .cursor/rules/ or reference in Cursor Rules
cp your-skill.md .cursor/rules/skill-name.md
# Or paste directly as a Cursor Rule in Settings → Rules
```
```python
import pathlib

from openai import OpenAI

client = OpenAI()
system_prompt = pathlib.Path("your-product.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Your task here"},
    ],
)
print(response.choices[0].message.content)
```

```python
# Create an assistant with your product as instructions
assistant = client.beta.assistants.create(
    name="My Agent",
    instructions=system_prompt,  # your .md file content
    model="gpt-4o",
    tools=[{"type": "code_interpreter"}],
)
```

```python
import pathlib

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

system_prompt = pathlib.Path("your-product.md").read_text()

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}"),
])
model = ChatAnthropic(model="claude-opus-4-6")
chain = prompt | model

result = chain.invoke({"input": "Your task here"})
print(result.content)
```

For LangGraph agents, assign the product as the system message for a specific node. Skill modules map naturally to LangGraph tool nodes.
For no-code use: open the .md file, copy everything below the closing frontmatter divider (the second `---`), and paste it into a new conversation as the first message, or into a custom instructions / project instructions block.
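That copy step can also be scripted. A small sketch, assuming the file starts with a well-formed `---` frontmatter block:

```python
def body_below_frontmatter(text: str) -> str:
    """Return everything after the closing --- of the frontmatter block."""
    parts = text.split("---", 2)  # ['', frontmatter, body] for a well-formed file
    return parts[2].strip() if len(parts) == 3 else text

sample = "---\nproduct_id: AA-XXX\n---\n# Product Name\nPrompt body here."
print(body_below_frontmatter(sample))
```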