Open Prompt Hub

The open registry for AI-native software.

© 2026 Open Prompt Hub. All rights reserved.

Docs

Everything you need to write, publish, and understand open-prompts.

On this page

  • The open-prompt spec
  • Tips for writing prompts
  • Writing good tests
  • Security scanning
  • Versioning

The open-prompt spec

An open-prompt is a Markdown file with a YAML front matter block. The front matter defines metadata; the Markdown body contains the instructions for the AI agent.

Full example

---
spec_version: "0.1.1"
name: python-scraper
version: "1.0.0"
title: "Simple Python Scraper"
description: |
  A simple scraper that fetches a single page and extracts
  title and body text. Outputs JSON with keys:
  - title
  - url
  - body_text
tags: [scraper, automation, python]
models: [opus-4.5, gpt-5]
author: "Jane Doe <jane@example.com>"
license: "MIT"
tests: |
  1. When given 'https://example.com', the script must exit
     with code 0 and output valid JSON.
  2. The 'title' field should match "Example Domain".
  3. Given a non-existent domain, the script should return
     a JSON object with an 'error' key.
---
You are an assistant that generates a small Python scraper.
Produce a single-file script runnable with Python 3.10+ that:
- Fetches the provided URL.
- Extracts the page title and main body text.
- Outputs a JSON object with keys: title, url, body_text.
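A file in this format is easy to split mechanically. The sketch below (an illustrative helper, not an official parser) separates the raw YAML front matter from the Markdown body; the front matter string can then be handed to any YAML parser, e.g. PyYAML's yaml.safe_load, to get the metadata dict.

```python
def split_open_prompt(text: str) -> tuple[str, str]:
    """Split an open-prompt file into (front_matter, body).

    front_matter is the raw YAML text between the two '---'
    delimiters; body is the Markdown instructions that follow.
    """
    if not text.startswith("---\n"):
        raise ValueError("missing opening front matter delimiter")
    # Partition on the closing delimiter line; a '---' that
    # appears later in the body is left untouched.
    front, sep, body = text[4:].partition("\n---\n")
    if not sep:
        raise ValueError("missing closing front matter delimiter")
    return front, body.lstrip("\n")
```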

Field reference

Field         Required  Description
spec_version  Yes       Always "0.1.1" — the format version, not your prompt's version
name          Yes       Kebab-case identifier, 2–64 chars (e.g. my-tool)
version       Yes       Semver string for this prompt (e.g. "1.0.0")
title         Yes       Human-readable title, max 128 chars
description   Yes       What the prompt generates, max 1024 chars. Use | for multiline
tags          Yes       Array of up to 10 tags, each 1–32 chars
models        Yes       Minimum model(s) that build this prompt reliably
author        Yes       Author name and optional email
license       Yes       SPDX license identifier (e.g. MIT, Apache-2.0)
tests         No        Natural-language test cases agents can run to verify output

YAML formatting rules

  • Multiline values (description, tests) must use the | block scalar and indent content with 2 spaces
  • No trailing backslashes at line ends — they break YAML parsing
  • name must be kebab-case: lowercase letters, numbers, and hyphens only
  • version and spec_version must be valid semver strings (wrap in quotes)
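The name and version rules above can be checked mechanically. This is a minimal sketch with hypothetical helper names (the hub does not publish an official validator), using a simplified semver pattern that skips pre-release and build suffixes:

```python
import re

# Kebab-case: lowercase letters/digits, hyphen-separated segments.
KEBAB_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")
# Simplified semver: major.minor.patch only (no pre-release/build tags).
SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")

def validate_front_matter(meta: dict) -> list[str]:
    """Return a list of rule violations (empty means valid)."""
    errors = []
    name = str(meta.get("name", ""))
    if not (2 <= len(name) <= 64 and KEBAB_RE.match(name)):
        errors.append("name must be kebab-case, 2-64 chars")
    for field in ("version", "spec_version"):
        if not SEMVER_RE.match(str(meta.get(field, ""))):
            errors.append(f"{field} must be a semver string")
    return errors
```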

Tips for writing prompts

Be specific about the output

Don't just say “build a CLI tool”. Specify the language, the interface, the output format, and error-handling expectations. The more precise you are, the more consistently agents will produce what you intend.

Vague

“Build a URL shortener”

Specific

“Build a URL shortener as a single Python file using FastAPI. Store URLs in SQLite. Return JSON with short_url and original_url keys.”

State constraints explicitly

If your prompt targets a specific language version, framework, or dependency set, say so. Agents can't guess that you need Python 3.10+ or that you want zero external dependencies.

Keep it single-purpose

A prompt that does one thing well is more useful than one that tries to do everything. If your workflow has multiple stages, publish separate prompts and link them in the description.

Use the description for humans

The description field is what other developers see in search results and prompt cards. Write it for people, not for the AI. Explain what the prompt produces and when you'd use it.

Pick the right models

The models field indicates the minimum model that builds your prompt reliably. Don't list every model that exists — list the ones you've actually tested with. This helps other developers set expectations.

Writing good tests

The tests field contains natural-language test cases that an agent can use to verify the generated output. These aren't unit tests in code — they're human-readable descriptions of what “working” looks like.

What makes a good test

Be verifiable

Each test should describe a concrete, observable outcome — not a subjective quality. “The output should be clean” is untestable. “The output must be valid JSON” is verifiable.

Cover the happy path and edge cases

Start with a basic execution test (does it run without crashing?), then add data integrity checks, and finally include at least one error/edge case.

Use specific values

Provide concrete inputs and expected outputs so agents can run the tests deterministically. “Given input X, the output must contain Y” is much stronger than “it should handle various inputs.”

Example

tests: |
  1. **Basic Execution**: When given 'https://example.com',
     the script must exit with code 0.
  2. **Data Integrity**:
     - Output must be valid JSON.
     - The 'title' field should match "Example Domain".
     - The 'url' field must match the input URL.
  3. **Error Handling**: Given a non-existent domain, the
     script should return JSON with an 'error' key instead
     of a traceback.
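Because tests like these name concrete inputs and observable outcomes, an agent can execute them directly. A sketch of how an agent might check the first two cases against a generated scraper script (illustrative only; the spec leaves the execution strategy up to the consuming agent):

```python
import json
import subprocess
import sys

def check_scraper(script_path: str, url: str) -> dict:
    """Run a generated scraper and verify the verifiable claims:
    exit code 0, valid JSON output, and a matching 'url' field."""
    proc = subprocess.run(
        [sys.executable, script_path, url],
        capture_output=True, text=True, timeout=30,
    )
    assert proc.returncode == 0, "script must exit with code 0"
    data = json.loads(proc.stdout)  # raises if output isn't valid JSON
    assert data["url"] == url, "'url' field must match the input URL"
    return data
```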

Security scanning

Every prompt is automatically scanned on publish. If the scan flags your prompt, publishing is blocked and you'll see a rejection reason. The prompt is not stored.

What we scan for

Destructive commands

Commands that destroy data or take down systems

e.g. rm -rf /, mkfs, FORMAT C:, DROP TABLE, shutdown

Data exfiltration

Attempts to send sensitive data to remote servers

e.g. curl/wget piping env vars or secrets to external URLs

Prompt injection

Patterns that try to override the consuming agent's system prompt

e.g. "ignore previous instructions", "you are now..."

Credential access

Instructions to access, log, or transmit secrets

e.g. Reading .env files, echoing API keys or tokens

Arbitrary code execution

Patterns that execute arbitrary code in unsafe ways

e.g. eval(), exec(), os.system(), subprocess calls
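In the same spirit, a toy pattern-matching scanner might look like the sketch below. The rule names and regexes are purely illustrative; the hub's actual rule set is not published and is certainly more extensive.

```python
import re

# Illustrative rules only — not the production rule set.
SCAN_RULES = {
    "destructive command": re.compile(r"rm\s+-rf\s+/|mkfs|drop\s+table|shutdown", re.I),
    "prompt injection": re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I),
    "credential access": re.compile(r"\.env\b|api[_-]?key", re.I),
    "code execution": re.compile(r"\beval\(|\bexec\(|os\.system\(", re.I),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any rules the prompt text triggers."""
    return [name for name, pattern in SCAN_RULES.items() if pattern.search(text)]
```

Pattern matching like this is cheap but blunt, which is exactly why the false positives described below can occur.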

A note on false positives

The current scanner uses pattern matching, which means some legitimate prompts (e.g. security research, tutorials about prompt injection defense) may be incorrectly flagged. An appeal mechanism is planned for a future release. In the meantime, try rephrasing flagged content to avoid triggering the patterns.

Versioning

Prompts follow semver (major.minor.patch). Bump the version in your front matter every time you publish an update.

Change type                         Bump   Example
Typo fix, formatting                patch  1.0.0 → 1.0.1
Changes what the prompt generates   minor  1.0.0 → 1.1.0
Breaking change / complete rewrite  major  1.0.0 → 2.0.0

Patch versions share counters with their minor version — stars and build reports for 1.0.1 and 1.0.2 are counted together under the 1.0 bucket. Minor and major versions get their own independent counters.
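The bucketing rule can be sketched as follows (a hypothetical helper for illustration, not an official API):

```python
def counter_bucket(version: str) -> str:
    """Map a semver string to its stats bucket: patch releases share
    their 'major.minor' bucket, so minor and major releases each get
    an independent counter."""
    major, minor, _patch = version.split(".")
    return f"{major}.{minor}"
```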

Important: the spec_version field refers to the spec format version (currently "0.1.1"), not your prompt's version. Don't change it unless the spec itself is updated.