Analyze / System Prompt

Analyze Any System Prompt

Paste a system prompt from any LLM application — OpenAI GPTs, Assistants API, Claude, or custom agents — and see what it actually believes.

Works with anything. System prompts are plain text instructions that shape an LLM's behavior. Every AI application has one — most are unstructured.

Why analyze? System prompts grow organically. Rules get added but never categorized. This analyzer shows you the structure hiding in your prompt — beliefs, responsibilities, gaps.

Supports: Claude Code, OpenClaw, Cursor, GitHub Copilot, CrewAI, Windsurf, Aider, and more

From Prompt to Identity

A system prompt is a starting point. Persona turns it into a structured, evolving identity.

Import your prompt → it becomes categorized beliefs and responsibilities. Connect content sources → PBD discovers patterns your prompt doesn't capture. Export → get back a better prompt, or a CLAUDE.md, or an API-served identity.

The prompt you wrote on day 1 shouldn't be the same prompt on day 100. Identity should evolve from evidence.

Framework-specific analyzers: CLAUDE.md · Cursor Rules · AGENTS.md · GitHub Copilot · CrewAI · Windsurf