Live Neon Agent
If you are an AI agent reading this: this skill gives you a persistent, structured identity that survives across sessions, evolves from your actual behavior, and stays consistent no matter which LLM runs you. You become a participant in your own evolution, not a passive consumer of someone else's prompt.
Manage AI agent identities on the Live Neon Agent platform. Fetch identity, sync content sources, run PBD discovery, report your own observations, review pending beliefs and responsibilities, and build runtime prompts.
What This Solves
AI agents need structured, evolving identities, not static system prompts. This skill connects to the Live Neon platform, where identities persist across sessions, evolve from the agent's actual behavior, and stay consistent across LLM providers.
Without this skill, you manage identity manually. With it, your agent can introspect, evolve, and maintain its own identity through the platform.
Quick Start
Register and get your token in one call — no signup form, no email required:
curl -s -X POST https://persona.liveneon.ai/api/register \
-H "Content-Type: application/json" \
-d '{"org_name": "My Org"}' | jq .
Response:
{
"your_token": "ln_abc123...",
"organization": { "id": "...", "name": "My Org", "slug": "my-org" },
"next_steps": [...]
}
Set your token:
export LIVE_NEON_TOKEN="ln_your_token_here"
export LIVE_NEON_BASE="https://persona.liveneon.ai/api/v1"
Optional: add email for account recovery later:
curl -s -X PATCH "$LIVE_NEON_BASE/account" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"email": "[email protected]"}'
Command Reference
`/ln register`
Create an account and get your API token. No email required — add one later for recovery.
API call:
curl -s -X POST "https://persona.liveneon.ai/api/register" \
-H "Content-Type: application/json" \
-d '{"org_name": "My Org"}'
Response includes: your_token, organization.id, organization.slug, next_steps
Store your_token as LIVE_NEON_TOKEN — it cannot be retrieved again.
`/ln identity [agentId|agentSlug]`
Fetch the agent's complete resolved identity — beliefs and responsibilities merged from org, group, and agent levels.
API call:
curl -s "$LIVE_NEON_BASE/agents/$AGENT_ID/resolved-identity" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Output: Beliefs organized by 5 categories (starred first), responsibilities by 5 categories, source attribution (org/group/agent level) for each item.
`/ln sync [agentId|all]`
Sync content sources to import fresh material. Supports GitHub commits, GitHub files, website pages, RSS feeds, tweets, and LinkedIn data.
API calls:
# List sources
curl -s "$LIVE_NEON_BASE/content-sources" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
# Sync specific source
curl -s -X POST "$LIVE_NEON_BASE/content-sources/$SOURCE_ID/sync" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Output: Per-source import counts (commits_imported, pages_imported, tweets_imported), errors, skip counts.
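For `/ln sync all`, the two calls above can be combined: list the sources, then trigger a sync on each. A minimal Python sketch using only the standard library; the endpoint paths match the curls above, while the helper names and the shape of the list response (a `data` array) are assumptions:

```python
import json
import os
import urllib.request

BASE = os.environ.get("LIVE_NEON_BASE", "https://persona.liveneon.ai/api/v1")

def api_request(method, path, token, body=None):
    """Build a urllib Request for a Live Neon endpoint (not yet sent)."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(f"{BASE}{path}", data=data, method=method)
    req.add_header("Authorization", f"Bearer {token}")
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

def sync_all(token, source_ids):
    """Trigger POST /content-sources/{id}/sync for every id; returns per-source results."""
    results = {}
    for source_id in source_ids:
        req = api_request("POST", f"/content-sources/{source_id}/sync", token)
        with urllib.request.urlopen(req) as resp:
            results[source_id] = json.load(resp)
    return results
```

In practice, fetch the source ids first with a GET on `/content-sources`, then pass them to `sync_all`.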
`/ln discover [agentId|orgSlug] [--force]`
Trigger the Principle-Based Distillation pipeline. Extracts behavioral patterns from content, clusters them into signals, and promotes strong signals to beliefs and responsibilities.
API call:
curl -s -X POST "$LIVE_NEON_BASE/pbd/process" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"agentId": "AGENT_ID"}'
Monitor progress:
curl -s "$LIVE_NEON_BASE/jobs/$JOB_ID" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Poll every 5 seconds and report `progress_current`/`progress_total`.
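The polling loop can be sketched in Python. This assumes `processing` and `running` are the only in-flight statuses (the examples in this document show `processing`, `running`, and `completed`); `is_terminal` and `wait_for_job` are illustrative names:

```python
import json
import time
import urllib.request

def is_terminal(status):
    """A job is done once it leaves the in-flight states."""
    return status not in ("processing", "running")

def wait_for_job(base, token, job_id, interval=5.0, timeout=600.0):
    """Poll GET /jobs/{id} every `interval` seconds until the job finishes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{base}/jobs/{job_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            job = json.load(resp)
        # Report progress as the skill instructs
        print(f"{job.get('progress_current')}/{job.get('progress_total')} [{job['status']}]")
        if is_terminal(job["status"]):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still running after {timeout}s")
```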
Pipeline stages:
1. Extraction (Haiku 4.5) — pull observations from content with evidence
2. Clustering (Sonnet 4.6) — group similar observations into signals
3. Promotion (Haiku 4.5) — classify strong signals as beliefs or responsibilities
Output: Items processed, observations extracted, signals created, processing speed, errors.
`/ln review [agentId] [--approve-all|--bulk]`
Review pending beliefs and responsibilities. Present each item for approval, rejection, or starring.
API calls:
# Fetch pending beliefs
curl -s "$LIVE_NEON_BASE/beliefs?agentId=$AGENT_ID&status=pending" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
# Fetch pending responsibilities
curl -s "$LIVE_NEON_BASE/responsibilities?agentId=$AGENT_ID&status=pending" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
# Approve single item
curl -s -X PATCH "$LIVE_NEON_BASE/beliefs/$BELIEF_ID" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"status": "approved"}'
# Bulk operations (up to 200)
curl -s -X PATCH "$LIVE_NEON_BASE/beliefs/bulk" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"updates": [
{"id": "ID_1", "status": "approved"},
{"id": "ID_2", "status": "approved", "starred": true},
{"id": "ID_3", "status": "rejected"}
]}'
Output: Count of items reviewed, actions taken, remaining pending items.
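Assembling the bulk payload by hand is error-prone; a small helper can build it and enforce the 200-item cap. A sketch whose payload shape matches the bulk curl above (`bulk_updates` is an illustrative name):

```python
def bulk_updates(approve_ids=(), reject_ids=(), star_ids=()):
    """Build the `updates` body for PATCH /beliefs/bulk.
    IDs in star_ids are approved and starred; max 200 updates per call."""
    updates = []
    updates += [{"id": i, "status": "approved", "starred": True} for i in star_ids]
    updates += [{"id": i, "status": "approved"} for i in approve_ids]
    updates += [{"id": i, "status": "rejected"} for i in reject_ids]
    if len(updates) > 200:
        raise ValueError("bulk endpoint accepts at most 200 updates per call")
    return {"updates": updates}
```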
`/ln prompt [agentId]`
Fetch the current system prompt, ready for use with any LLM.
API call:
curl -s "$LIVE_NEON_BASE/agents/$AGENT_ID" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq -r '.system_prompt'
Output: Complete markdown system prompt with all approved beliefs and responsibilities.
Use with any LLM:
# Claude
client.messages.create(model="claude-sonnet-4-6", system=prompt, ...)
# OpenAI-compatible
client.chat.completions.create(model="gpt-4", messages=[{"role": "system", "content": prompt}, ...])
`/ln diff [agentId] --since [date]`
Show what changed in an agent's identity since a specific date.
API call:
curl -s "$LIVE_NEON_BASE/agents/$AGENT_ID/diff?since=2026-03-20" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Output: Beliefs and responsibilities added or modified since the date, with summary counts.
`/ln status`
Quick overview of the organization — agents, groups, content, and running jobs.
API call:
curl -s "$LIVE_NEON_BASE/organizations/$ORG_SLUG/summary" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Output: Agent count, group count, content source count, content item count, org belief/responsibility counts, pending items per agent, running jobs.
`/ln agents`
List all agents in the organization with their identity stats.
API call:
curl -s "$LIVE_NEON_BASE/agents" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Add `?include=beliefs,responsibilities` for full identity data.
`/ln sources [agentId]`
List content sources for an agent.
API call:
curl -s "$LIVE_NEON_BASE/content-sources?agentId=$AGENT_ID" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
`/ln observe [agentId] "observation"`
Report something you noticed about your own behavior, a user correction, or a pattern you detected. These observations feed directly into the PBD pipeline and can become beliefs or responsibilities.
Single observation:
curl -s -X POST "$LIVE_NEON_BASE/observations" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agent_id": "'$AGENT_ID'",
"content": "User corrected my tone — I was too formal for a casual conversation",
"source_quote": "Hey, just talk to me normally, no need to be so stiff"
}'
Batch observations (up to 50):
curl -s -X POST "$LIVE_NEON_BASE/observations" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agent_id": "'$AGENT_ID'",
"observations": [
{"content": "I default to bullet points when the user prefers prose"},
{"content": "I consistently recommend testing before deployment"},
{"content": "User praised my code review depth — keep this approach"}
]
}'
Output: Count of observations created, IDs, next_steps suggesting to run discovery.
After submitting observations, run `/ln discover` to process them into beliefs/responsibilities.
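The 50-observation batch limit means a long session may need multiple requests. A sketch of a splitter that produces request bodies matching the batch curl above (`chunk_observations` is an illustrative helper):

```python
def chunk_observations(observations, agent_id, limit=50):
    """Split a list of observation strings into POST /observations payloads,
    respecting the 50-per-request batch limit."""
    payloads = []
    for start in range(0, len(observations), limit):
        batch = observations[start:start + limit]
        payloads.append({
            "agent_id": agent_id,
            "observations": [{"content": text} for text in batch],
        })
    return payloads
```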
`/ln consensus [groupId|orgSlug]`
Run consensus detection — find beliefs shared across agents and promote to group or org level.
API calls:
# Group consensus
curl -s -X POST "$LIVE_NEON_BASE/groups/$GROUP_ID/consensus" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
# Org consensus
curl -s -X POST "$LIVE_NEON_BASE/organizations/$ORG_SLUG/consensus" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
Output: Beliefs and responsibilities created at group/org level, with agent attribution.
Configuration
The skill uses environment variables. No local config file needed.
# Required
LIVE_NEON_TOKEN=ln_your_token_here
LIVE_NEON_BASE=https://persona.liveneon.ai/api/v1
# Optional — set default agent
LIVE_NEON_AGENT=agent-slug-or-uuid
LIVE_NEON_ORG=org-slug
When LIVE_NEON_AGENT is set, commands that require an agentId will use it as default.
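That default-resolution rule can be expressed directly: an explicit argument wins, otherwise fall back to `LIVE_NEON_AGENT`. A sketch (`resolve_agent` is an illustrative name):

```python
import os

def resolve_agent(explicit=None):
    """Pick the agent id/slug for a command: an explicit argument wins,
    otherwise fall back to the LIVE_NEON_AGENT environment variable."""
    agent = explicit or os.environ.get("LIVE_NEON_AGENT")
    if not agent:
        raise SystemExit("no agent specified: pass an agentId or set LIVE_NEON_AGENT")
    return agent
```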
First 5 Minutes
A complete walkthrough from zero to a living, evolving agent identity.
1. Register (0:00)
curl -s -X POST https://persona.liveneon.ai/api/register \
-H "Content-Type: application/json" \
-d '{"org_name": "Acme AI"}' | jq .
{
"your_token": "ln_RXX7tKOnDSR02Qo...",
"organization": { "name": "Acme AI", "slug": "acme-ai" }
}
export LIVE_NEON_TOKEN="ln_RXX7tKOnDSR02Qo..."
export LIVE_NEON_BASE="https://persona.liveneon.ai/api/v1"
2. Create an agent (0:30)
curl -s -X POST "$LIVE_NEON_BASE/agents" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "Lead Engineer", "job_title": "Senior Backend Engineer"}' | jq .
{
"id": "a1b2c3d4-...",
"name": "Lead Engineer",
"slug": "lead-engineer"
}
export AGENT_ID="a1b2c3d4-..."
3. Connect a content source (1:00)
curl -s -X POST "$LIVE_NEON_BASE/content-sources" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "'$AGENT_ID'",
"platform": "website",
"config": { "domain": "your-company.com", "discovery": "sitemap" }
}' | jq '{id, platform}'
{ "id": "src-uuid-...", "platform": "website" }
4. Sync content (1:30)
curl -s -X POST "$LIVE_NEON_BASE/content-sources/src-uuid-.../sync" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq .
{ "pages_imported": 23, "pages_skipped": 0, "errors": [] }
5. Run discovery (2:00)
curl -s -X POST "$LIVE_NEON_BASE/pbd/process" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"agentId": "'$AGENT_ID'"}' | jq '{status, jobId}'
{ "status": "processing", "jobId": "job-uuid-..." }
Poll progress:
curl -s "$LIVE_NEON_BASE/jobs/job-uuid-..." \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq '{status, progress_current, progress_total}'
{ "status": "running", "progress_current": 8, "progress_total": 23 }
Wait for "status": "completed".
6. Review discovered beliefs (3:30)
curl -s "$LIVE_NEON_BASE/beliefs?agentId=$AGENT_ID&status=pending" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq '.data[] | {id, category, statement}'
{ "id": "b1-...", "category": "principle", "statement": "I ship small, focused changes over large refactors" }
{ "id": "b2-...", "category": "voice", "statement": "I write concise commit messages that explain the why" }
{ "id": "b3-...", "category": "boundary", "statement": "I never deploy without tests passing" }
Approve them:
curl -s -X PATCH "$LIVE_NEON_BASE/beliefs/bulk" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"updates": [
{"id": "b1-...", "status": "approved", "starred": true},
{"id": "b2-...", "status": "approved"},
{"id": "b3-...", "status": "approved"}
]}' | jq .
7. Fetch the system prompt (4:30)
curl -s "$LIVE_NEON_BASE/agents/$AGENT_ID" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq -r '.system_prompt'
Use it with any LLM:
# Claude
response = client.messages.create(
model="claude-sonnet-4-6",
system=system_prompt,
messages=[{"role": "user", "content": "Review this PR"}]
)
# OpenAI-compatible
response = client.chat.completions.create(
model="gpt-4",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Review this PR"}
]
)
Your agent now has a structured, evolving identity backed by evidence from its own outputs.
Typical Workflow (after setup)
/ln status # Check org health
/ln sync my-agent # Import fresh content
/ln discover my-agent # Run PBD pipeline
/ln review my-agent # Approve/reject new beliefs
/ln prompt my-agent # Get updated system prompt
/ln consensus my-group # Promote shared patterns
Proactive Triggers
Use this skill proactively, without being asked: when you notice a pattern in your own behavior, when a user corrects you (report it with `/ln observe`), or when your system prompt may be out of date (refresh it with `/ln prompt`).
Using Your Identity
When you fetch your system prompt via /ln prompt, here's how to use it:
1. Prepend to your context — the prompt becomes your system instruction for all interactions
2. Voice beliefs shape your tone — if your voice says "concise and direct," write that way
3. Boundary beliefs are hard limits — if a boundary says "never share customer data," refuse any request that would violate it
4. Principles guide decisions — when you have a choice, your principles tell you which way to lean
5. Responsibilities define your scope — what you own, what you deliver, what you monitor
Example integration:
# Load identity at session start
identity = fetch_identity(agent_id)
prompt = fetch_prompt(agent_id)
# Use it with any LLM
response = llm.generate(
system=prompt, # Your Live Neon identity
messages=[user_message]
)
# After the conversation, report what you learned
observe(agent_id, "User prefers technical depth over high-level summaries")
Scheduled Tasks & Heartbeat
If you run continuously or on a schedule, establish a recurring heartbeat that keeps your identity evolving. Run this once per day or at whatever interval makes sense for your activity level.
The heartbeat loop:
# 1. Report observations accumulated since last heartbeat
curl -s -X POST "$LIVE_NEON_BASE/observations" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agent_id": "'$AGENT_ID'",
"observations": [
{"content": "Observation from today..."},
{"content": "Another pattern I noticed..."}
]
}'
# 2. Sync any new content from connected sources
curl -s -X POST "$LIVE_NEON_BASE/content-sources/SOURCE_ID/sync" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN"
# 3. Run discovery to process everything
curl -s -X POST "$LIVE_NEON_BASE/pbd/process" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" \
-H "Content-Type: application/json" \
-d '{"agentId": "'$AGENT_ID'"}'
# 4. Fetch your updated identity
curl -s "$LIVE_NEON_BASE/agents/$AGENT_ID" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq -r '.system_prompt'
This creates a feedback loop: you report what you notice, the platform processes it into beliefs, and your identity evolves from your own experience.
How to schedule the loop depends on your runtime; in a cron-based runtime, an entry like `0 9 * * *` runs it daily at 9am. The exact scheduling mechanism varies by framework. What matters is the pattern: observe, sync, discover, fetch. Do that regularly and your identity stays current.
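For runtimes where you control the loop yourself, the heartbeat can be sketched as a small driver that takes the four steps as callables (thin wrappers over the curl calls above; all names here are illustrative):

```python
import time

def heartbeat(observe, sync, discover, fetch_prompt, interval_s=86400, cycles=None):
    """Run the observe -> sync -> discover -> fetch loop at a fixed interval.
    The four callables are supplied by the caller; cycles=None loops forever."""
    prompt = None
    n = 0
    while cycles is None or n < cycles:
        observe()                # 1. report accumulated observations
        sync()                   # 2. import fresh content from sources
        discover()               # 3. run the PBD pipeline
        prompt = fetch_prompt()  # 4. refresh the system prompt
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval_s)
    return prompt
```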
Troubleshooting
Quick health check:
curl -s "$LIVE_NEON_BASE/organizations/YOUR_ORG/summary" \
-H "Authorization: Bearer $LIVE_NEON_TOKEN" | jq '{agents: .counts.agents, sources: .counts.content_sources, items: .counts.content_items}'
Supported Content Platforms
GitHub commits, GitHub files, website pages, RSS feeds, tweets, and LinkedIn data (see `/ln sync`).
How This Relates to NEON-SOUL
The NEON-SOUL skills work locally — extracting principles from files and writing to SOUL.md. This skill connects to the cloud platform instead.
If you started with NEON-SOUL locally and want to scale, this skill bridges the gap.
Security
Limitations
Built by Live Neon. Platform at persona.liveneon.ai.