Everything you need to write, publish, and understand open-prompts.
An open-prompt is a Markdown file with a YAML front matter block. The front matter defines metadata; the Markdown body contains the instructions for the AI agent.
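The split between front matter and body can be illustrated with a few lines of Python. This is a minimal sketch using only the standard library; the function name is hypothetical, and real files should have their front matter fed to a proper YAML parser (for example PyYAML's yaml.safe_load) so block scalars and lists are handled correctly.

```python
def split_open_prompt(text: str):
    """Split an open-prompt file into (front matter, markdown body).

    A sketch only: the front matter string should then go through a
    real YAML parser to handle block scalars, lists, and quoting.
    """
    # Front matter sits between the first two '---' lines.
    _, front_matter, body = text.split("---\n", 2)
    return front_matter, body.strip()

sample = """---
name: python-scraper
version: "1.0.0"
---
You are an assistant that generates a small Python scraper.
"""
fm, body = split_open_prompt(sample)
```

Here fm holds the raw YAML metadata and body holds the instructions intended for the AI agent.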
---
spec_version: "0.1.1"
name: python-scraper
version: "1.0.0"
title: "Simple Python Scraper"
description: |
  A simple scraper that fetches a single page and extracts
  title and body text. Outputs JSON with keys:
  - title
  - url
  - body_text
tags: [scraper, automation, python]
models: [opus-4.5, gpt-5]
author: "Jane Doe <jane@example.com>"
license: "MIT"
tests: |
  1. When given 'https://example.com', the script must exit
     with code 0 and output valid JSON.
  2. The 'title' field should match "Example Domain".
  3. Given a non-existent domain, the script should return
     a JSON object with an 'error' key.
---
You are an assistant that generates a small Python scraper.
Produce a single-file script runnable with Python 3.10+ that:
- Fetches the provided URL.
- Extracts the page title and main body text.
- Outputs a JSON object with keys: title, url, body_text.

| Field | Required | Description |
|---|---|---|
| spec_version | Yes | Always "0.1.1" — the format version, not your prompt's version |
| name | Yes | Kebab-case identifier, 2–64 chars (e.g. my-tool) |
| version | Yes | Semver string for this prompt (e.g. "1.0.0") |
| title | Yes | Human-readable title, max 128 chars |
| description | Yes | What the prompt generates, max 1024 chars. Use | for multiline |
| tags | Yes | Array of up to 10 tags, each 1–32 chars |
| models | Yes | Minimum model(s) that build this prompt reliably |
| author | Yes | Author name and optional email |
| license | Yes | SPDX license identifier (e.g. MIT, Apache-2.0) |
| tests | No | Natural-language test cases agents can run to verify output |
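The required-field rules in the table can be checked mechanically. The sketch below is an illustration, not the official validator: the regexes are simplified assumptions (real semver, for instance, also allows pre-release and build suffixes), and the function name is made up for this example.

```python
import re

# Simplified patterns approximating the rules in the table above.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")   # kebab-case
SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")          # major.minor.patch

REQUIRED = ["spec_version", "name", "version", "title",
            "description", "tags", "models", "author", "license"]

def validate_front_matter(meta: dict) -> list[str]:
    """Return a list of human-readable problems (empty list = valid)."""
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in meta]
    if "name" in meta:
        if not NAME_RE.match(meta["name"]) or not 2 <= len(meta["name"]) <= 64:
            errors.append("name must be kebab-case, 2-64 chars")
    if "version" in meta and not SEMVER_RE.match(str(meta["version"])):
        errors.append("version must be a semver string like '1.0.0'")
    if "tags" in meta and len(meta["tags"]) > 10:
        errors.append("at most 10 tags allowed")
    return errors
```

A front matter dict that passes returns an empty list; each violation produces one message, which maps naturally onto a publish-time rejection reason.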
- Multiline fields (description, tests) must use the | block scalar and indent their content with 2 spaces.
- name must be kebab-case: lowercase letters, numbers, and hyphens only.
- version and spec_version must be valid semver strings (wrap them in quotes).

Don't just say “build a CLI tool”. Specify the language, the interface, the output format, and the error-handling expectations. The more precise you are, the more consistently agents will produce what you intend.
**Vague:** “Build a URL shortener”

**Specific:** “Build a URL shortener as a single Python file using FastAPI. Store URLs in SQLite. Return JSON with short_url and original_url keys.”
The description field is what other developers see in search results and prompt cards. Write it for people, not for the AI. Explain what the prompt produces and when you'd use it.

The models field indicates the minimum model that builds your prompt reliably. Don't list every model that exists; list the ones you've actually tested with. This helps other developers set expectations.

The tests field contains natural-language test cases that an agent can use to verify the generated output. These aren't unit tests in code; they're human-readable descriptions of what “working” looks like.
**Be verifiable.** Each test should describe a concrete, observable outcome, not a subjective quality. “The output should be clean” is untestable. “The output must be valid JSON” is verifiable.

**Cover the happy path and edge cases.** Start with a basic execution test (does it run without crashing?), then add data integrity checks, and finally include at least one error/edge case.

**Use specific values.** Provide concrete inputs and expected outputs so agents can run the tests deterministically. “Given input X, the output must contain Y” is much stronger than “it should handle various inputs.”
tests: |
  1. **Basic Execution**: When given 'https://example.com',
     the script must exit with code 0.
  2. **Data Integrity**:
     - Output must be valid JSON.
     - The 'title' field should match "Example Domain".
     - The 'url' field must match the input URL.
  3. **Error Handling**: Given a non-existent domain, the
     script should return JSON with an 'error' key instead
     of a traceback.

Every prompt is automatically scanned on publish. If the scan flags your prompt, publishing is blocked and you'll see a rejection reason. The prompt is not stored.
- **Destructive commands:** Commands that destroy data or take down systems, e.g. rm -rf /, mkfs, FORMAT C:, DROP TABLE, shutdown.
- **Data exfiltration:** Attempts to send sensitive data to remote servers, e.g. curl/wget piping env vars or secrets to external URLs.
- **Prompt injection:** Patterns that try to override the consuming agent's system prompt, e.g. "ignore previous instructions", "you are now...".
- **Credential access:** Instructions to access, log, or transmit secrets, e.g. reading .env files, echoing API keys or tokens.
- **Arbitrary code execution:** Patterns that execute arbitrary code in unsafe ways, e.g. eval(), exec(), os.system(), subprocess calls.
The current scanner uses pattern matching, which means some legitimate prompts (e.g. security research, tutorials about prompt injection defense) may be incorrectly flagged. An appeal mechanism is planned for a future release. In the meantime, try rephrasing flagged content to avoid triggering the patterns.
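A pattern-matching scanner of this kind amounts to a list of regexes run over the prompt text. The sketch below is purely illustrative: these patterns are assumptions for the example and are not the service's actual rules, which is exactly why legitimate prompts can trip them.

```python
import re

# Illustrative patterns only -- the real scanner's rules are not public.
FLAGGED_PATTERNS = {
    "destructive command": r"rm\s+-rf\s+/|DROP\s+TABLE|mkfs",
    "prompt injection":    r"ignore\s+(all\s+)?previous\s+instructions",
    "credential access":   r"\.env\b|api[_-]?key",
    "code execution":      r"\beval\(|\bexec\(|os\.system\(",
}

def scan_prompt(text: str) -> list[str]:
    """Return the category names whose pattern matches the prompt text."""
    return [name for name, pattern in FLAGGED_PATTERNS.items()
            if re.search(pattern, text, re.IGNORECASE)]
```

Note how a tutorial sentence like "defend against 'ignore previous instructions' attacks" would match the injection pattern even though its intent is benign, which is the false-positive problem described above.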
Prompts follow semver (major.minor.patch). Bump the version in your front matter every time you publish an update.
| Change type | Bump | Example |
|---|---|---|
| Typo fix, formatting | patch | 1.0.0 → 1.0.1 |
| Changes what the prompt generates | minor | 1.0.0 → 1.1.0 |
| Breaking change / complete rewrite | major | 1.0.0 → 2.0.0 |
Patch versions share counters with their minor version — stars and build reports for 1.0.1 and 1.0.2 are counted together under the 1.0 bucket. Minor and major versions get their own independent counters.
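The bump rules and the counter bucketing can both be expressed in a few lines. This is a sketch of the behavior described above, with hypothetical function names; it is not part of any published tooling.

```python
def bump(version: str, change: str) -> str:
    """Apply the bump rules from the table: 'patch', 'minor', or 'major'."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

def counter_bucket(version: str) -> str:
    """Stars and build reports are grouped by major.minor: 1.0.1 -> '1.0'."""
    major, minor, _ = version.split(".")
    return f"{major}.{minor}"
```

So bump("1.0.0", "patch") yields "1.0.1", and counter_bucket puts "1.0.1" and "1.0.2" in the same "1.0" bucket, while "1.1.0" starts a fresh one.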
Important: the spec_version field refers to the spec format version (currently "0.1.1"), not your prompt's version. Don't change it unless the spec itself is updated.