NSP:// · v1.0 · April 2026 · CONFIDENTIAL
Narrative State Protocol
The cognitive infrastructure layer that makes autonomous AI agents
reliable enough for production deployment.
SEED ROUND $15M–$30M · PRE-REVENUE · PRE-MONEY $250M–$400M · 5 PRODUCTION RUNTIMES
275K+
lines of infrastructure
5,142+
automated tests
$346B
combined TAM by 2030
01 / 14
Executive Summary
"Think before you act."

NSP is the protocol layer that makes autonomous AI agents reliable enough for production — enforcing structured belief management, confidence tracking, and cognitive gating before any action executes.

What NSP is: Cognitive infrastructure sitting between memory/retrieval and agent execution. Adds structured belief management and verification that no other production system provides.

What NSP is not: Not an LLM wrapper. Not an agent framework. Not a memory store. NSP occupies the uncontested cognitive state layer of the agentic AI stack.

Three Technical Papers
Paper 1 — Published
"Think Before You Act"
doi.org/10.5281/zenodo.19334052 ↗
Paper 2 — Published
"Programmable Emergence"
doi.org/10.5281/zenodo.19392279 ↗
Paper 3 — Before May 2026
"Why Structure, Not Similarity"
$346B
Combined TAM · 5 verticals · 2030
275K+
Lines of cognitive infrastructure
5
Production runtimes validated
114.4×
Speedup on Anthropic public benchmark
Independent Validation

Third-party project mempalace #565 independently converged on the same five cognitive primitives — convergent evolution from an unrelated starting point.

02 / 14
The Problem
The AI stack has one critical gap.

Every production agent can perceive, reason, remember, and act. None of them can verify they understand before they act.

The Missing Layer

Applications · Cursor · Devin · Character.ai
Orchestration · LangChain · CrewAI · AutoGPT
⚠  Cognitive State Layer — uncontested gap
Memory / RAG · Hydra DB · Mem0 · Zep
Inference · OpenRouter · Together · Replicate
Foundation Models · OpenAI · Anthropic · Google

Cost of the Gap

AI Coding

Agents modify code they don't understand. Average prompt exceeds 20K tokens with no comprehension gate before destructive operations.

AI Companions

Prompt-engineered personality degrades as context fills. No mathematical consistency model — character breaks unpredictably.

Healthcare AI

Zero auditable reasoning trail. No structured record of what data was considered or what was uncertain — a compliance blocker.

Research Agents

No mechanism to detect and correct strategic errors. Failed approaches repeat indefinitely without cognitive oversight.

03 / 14
Market Opportunity
LLM usage grew 14× in one year.

Reasoning models now >50% of usage. The market is no longer Q&A — it is agentic workflows that demand cognitive infrastructure.

AI Coding
$26B
by 2030 · 27% CAGR
AI Companions
~$100B
by 2030 · ~25% CAGR
Healthcare AI
$188B
by 2030 · 39% CAGR
AI Education
$32B
by 2030 · 31% CAGR

Protocol-layer capture model: NSP sits beneath all verticals like Stripe under e-commerce. 1% protocol capture = $3.5B revenue opportunity by 2030.

Signal 01

Agentic AI Is the Default

Reasoning models >50% of usage. Tool invocation rising. Prompt lengths 4× longer. Agents need infrastructure that verifies they act correctly.

Signal 02

Enterprise Trust Gap

Every deployment decision involves "but can we trust it?" NSP provides auditable cognitive state management — a board-level concern in regulated industries.

Signal 03

Regulatory Tailwind

EU AI Act enforced Aug 2025. FDA AI/ML guidance. Global regulations require AI decision auditability. NSP provides compliance infrastructure by design.

04 / 14
The Solution
The Cognitive Loop.
No action without verification.

Every NSP agent runs this loop

PERCEIVE · receive input, parse context
UNDERSTAND · articulate belief via I_understand_as
VERIFY · confidence gate · assumption check
ACT · gated execution · PreToolUse hook
REFLECT · update belief state · learn from outcome

275,000+ lines of cognitive infrastructure. Not a wrapper — the full cognitive architecture: belief management, confidence calibration, CIA Layers 0–4, cross-domain knowledge graph, and a mathematical personality engine grounded in dynamical systems theory.
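As a rough illustration, the VERIFY step of the loop can be sketched as a confidence gate. Every name here (Belief, gate_action, the 0.7 threshold) is a hypothetical stand-in for illustration, not NSP's actual API:

```python
# Illustrative sketch of the cognitive loop's VERIFY step.
# All names and the threshold value are assumptions, not NSP internals.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.7  # assumed gating threshold

@dataclass
class Belief:
    statement: str     # articulated via the UNDERSTAND step
    confidence: float  # per-turn confidence score, 0..1

def gate_action(belief: Belief) -> bool:
    """VERIFY: no action executes unless the articulated
    understanding clears the confidence threshold."""
    return belief.confidence >= CONFIDENCE_THRESHOLD

belief = Belief("user wants the failing test fixed, not deleted", 0.82)
if gate_action(belief):
    print("ACT: gated execution proceeds")
else:
    print("blocked: re-enter UNDERSTAND before acting")
```

The point of the sketch is the control flow: ACT is reachable only through the gate, never directly from PERCEIVE.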

Core Primitives

Belief State Management
Structured per-agent belief store, updated every turn
Confidence Calibration
Per-turn confidence scoring with threshold gating
PreToolUse Cognitive Gate
No action executes without verification
Cusp Catastrophe Personality Engine
Dynamical systems theory — zero LLM tokens per state update
Full Belief Trace
Audit-ready record of every belief transition
Three-Layer Memory + Cross-Session Lifecycle
Full lifecycle management across sessions
05 / 14
Product Lines
Five runtimes. One protocol.

Same cognitive architecture validated across five distinct production domains — the platform thesis in action.

NSP-Coding

per-seat SaaS

Hooks into AI coding assistants via PreToolUse gate. Enforces structured understanding before every code edit. Live cognitive state dashboard. Architecture-agnostic — plugs into any tool-use-capable LLM.

NSP-Roleplay

freemium + B2B

Mathematical personality engine via cusp-catastrophe dynamics. Zero LLM tokens per state update. 2,300+ tests. Sleep/dream cycle. Multi-character sessions with independent belief states.

NSP-Learn

enterprise R&D

Self-correcting research framework. Applied to Anthropic's public benchmark — reached 1,291 cycles (114.4× speedup), passing all 8 thresholds. Result reproduced across 6 random seeds.

NSP-Edu

institutional SaaS

Tracks what students actually understand vs. what they can repeat. Persistent knowledge graph per learner. Active misconception detection. Adapts based on verified mastery state.

NSP-SimWorlds

SDK licensing

Mathematical character evolution at game-tick speed. Viable for real-time simulation with hundreds of simultaneous characters. No LLM per tick — the same math-driven state layer is one step from a World Model SDK.

06 / 14
Competitive Position
An uncontested layer.

NSP does not compete with memory systems or agent frameworks — they are complementary. A production AI agent needs all three.

Capability · Memory Systems (Mem0, Zep) · Agent Frameworks (LangChain) · NSP
Belief dynamics · – · – · Cusp catastrophe engine
Confidence tracking · – · – · Per-turn calibration
Action gating · – · – · PreToolUse verification
Personality consistency · – · – · Mathematical state machine
Auditability / belief trace · – · – · Full trace by design
Fact retrieval · Core strength · – · Not the focus
Tool orchestration · – · Core strength · Not the focus
Moat 01 · Protocol Depth

275,000+ lines across schemas, mathematical models, and lifecycle management. Requires rethinking architectural foundations to replicate.

Moat 02 · Cross-Domain Validation

Same belief dynamics validated across coding, roleplay, research, and education. Independently corroborated by mempalace #565 — convergent evolution is strong architectural validation.

Moat 03 · Mathematical Foundation

Cusp catastrophe engine grounded in dynamical systems theory. Produces principled behaviour — noise tolerance, signal detection, threshold revision.

Moat 04 · Data Flywheel

Every interaction produces structured cognitive data. Competitors without state management cannot generate this class of data — a permanent structural advantage.

07 / 14
Business Model
One wedge first. Platform second.

Coding-first revenue architecture — the natural wedge where the Claude Code integration is already in production and the buyer has AI tooling budget.

Phase 01
NSP-Coding Wedge
Enterprise plugin · Team → Enterprise tiers
Phase 02
Runtime Expansion
Companion B2B + 3rd vertical
Phase 03
Platform SDK
3rd-party runtimes + data products
Phase 04
World Model SDK
Cognitive layer for world models

NSP-Coding · Enterprise Pricing

Tier · Annual Price · Target
Team · $30K/yr · 5–25 AI engineers
Enterprise · $78K/yr · 25–200 AI engineers
Enterprise+ · $180K+/yr · 200+ engineers / platform

Priced as infrastructure, not tooling. Benchmarked against Datadog / Sentry observability — not GitHub Copilot per-seat — to protect margin and positioning.

Revenue Trajectory

Phase · Timeline · Target ARR
Seed (Pilots) · Year 1 · $0.5M
Growth (Platform) · Year 2 · $3.5M
Scale (Multi-vertical) · Year 3 · $12M
Platform (Ecosystem) · Year 5+ · $35M+

75–85% gross margin target. State management runs on zero LLM tokens. Margin improves with scale.

08 / 14
Cognitive Data Flywheel
Structured data no competitor can generate.

Every NSP interaction produces structured cognitive data with precise schema — not logs or opaque vectors.

# example: single belief_transition record
transition:
  dimension: "trust_in_advisor"
  before: 0.82
  after: 0.31
  trigger: "contradictory_evidence"
  catastrophe: true
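A hedged sketch of how a downstream consumer might read such a record: the field names follow the example above, but the flagging rule (a single-step jump larger than 0.3) is purely an illustrative assumption, not NSP's actual catastrophe criterion:

```python
# Hypothetical consumer of a belief_transition record.
# Field names match the example record; the 0.3 jump threshold is an
# assumption for illustration only.
transition = {
    "dimension": "trust_in_advisor",
    "before": 0.82,
    "after": 0.31,
    "trigger": "contradictory_evidence",
    "catastrophe": True,
}

def is_discontinuous(t: dict, jump: float = 0.3) -> bool:
    """Flag transitions whose belief value moved more than `jump` in one step."""
    return abs(t["after"] - t["before"]) > jump

# The recorded catastrophe flag agrees with the magnitude-based check here.
print(is_discontinuous(transition))  # True
```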
Belief Transition Data

When and why AI beliefs change — calibration benchmarks for new models and improved gating thresholds.

Confidence Calibration

Per-turn confidence vs. outcome records — feeds improved CIA trigger thresholds across all runtimes.

Intervention Outcomes

Which cognitive interventions succeeded or failed — drives CIA architecture improvements across all domains.

Structural cost advantage at scale: NSP-Roleplay's personality and state engines run as pure computation — zero LLM tokens per state update. In consumer-scale companion mode, this is an order-of-magnitude cost advantage vs. competitors whose state maintenance consumes tokens per operation.

09 / 14
Product Roadmap
Foundation built. Commercial launch next.
Phase 1 · Now

Foundation

5 production runtimes. 5,142+ tests. CIA Layers 0–4. Cusp catastrophe engine validated. Cross-domain knowledge graph (27 nodes, 17 bridges). 114.4× speedup on Anthropic benchmark. Two technical papers published, Paper 3 before May 2026.

Phase 2 · Year 1–2

Commercial Launch

Companion Mode MVP. NSP-Coding enterprise plugin. 3–5 design partner pilots. Professional tier launch. SDK + documentation.

Phase 3 · Year 2–3

Scale

Healthcare AI audit layer. Education platform. Data flywheel first products. Third-party ecosystem via SDK. Asia-Pacific expansion.

Phase 4 · Year 3–5

Platform Dominance

World model integration. Industry standards push. Government / regulated sectors. Complementary memory system acquisitions.

12-Month Milestones

  • Companion Mode MVP with mathematical personality engine live
  • NSP-Coding as standalone enterprise plugin
  • Paper 3 — "Why Structure, Not Similarity" — published before May 2026
  • 3–5 enterprise pilot customers signed

24-Month Milestones ($30M raise)

  • Revenue from companion + enterprise licensing
  • Healthcare AI audit layer in pilot with 2+ health systems
  • SDK enabling third-party domain runtime development
  • Cognitive data flywheel producing first data products
  • 50+ enterprise customers across verticals
10 / 14
Valuation
Evidence-anchored, not TAM-anchored.

Comparable companies at similar product maturity:

Company · Category · Valuation · Revenue · Date
Cursor (Anysphere) · AI Coding · $29.3B · $1B ARR · Nov 2025
Cognition AI (Devin) · AI Coding Agent · $14.5B · $73M ARR · Jun 2025
Hippocratic AI · Healthcare AI Agents · $3.5B · Early revenue · Nov 2025
Abridge · AI Clinical Docs · $5.3B · ~$100M ARR · Jun 2025
Character.ai · AI Companions · $1B+ (Series A)
Conservative
$150M–$250M

Product maturity comparable to Series A AI startups. The discount reflects the absence of commercial traction, not the engineering evidence.

Aggressive
$500M–$1B

Category creation premium — if investors believe NSP establishes cognitive state management as standard infrastructure like observability or auth.

11 / 14
Funding
The Ask.

Two raise scenarios with clear capital allocation and a defined path to Series A.

$15M Raise
~$500K–600K / mo · 25–30 months runway
  • Engineering 60% — $9M: 8–10 engineers across platform, companion, coding plugin, healthcare, infra
  • Go-to-Market 20% — $3M: Developer relations, enterprise pilots, 3 sales engineers
  • Research 10% — $1.5M: Papers, benchmark development, LLM inference
  • Operations 10% — $1.5M: Legal, IP protection, cloud infrastructure

18-month ARR target: $2M–4M. The number that unlocks a credible Series A at the platform valuation. $15M raise is sufficient if GTM stays disciplined around the coding beachhead.

12 / 14
Why Invest Now
The window to set the protocol standard is open.

In 12–18 months this will be an established category with competition.

01

Pre-Category Premium

NSP is defining a new infrastructure category. Early investors in Stripe, Twilio, and Databricks captured maximum value before the category was recognised. No established price anchors yet.

02

Timing Window

Agentic AI is becoming the default mode of LLM usage. The window to set the protocol standard is open right now — not in 18 months.

03

Platform Compound Effect

NSP's value increases super-linearly with domain coverage — each runtime makes all others more valuable.

04

Architectural Moat

The moat is the accumulated cognitive architecture backed by two published papers and a third-party benchmark anchor. Any single component can be reimplemented in a quarter; the full architecture cannot.

05

Companion-Mode Cost Advantage

State engines run on pure computation — the inverse of LLM-wrapper competitors whose costs scale linearly with usage.

The cognitive state layer is currently uncontested by production infrastructure. NSP is the most mature implementation in that layer today.

13 / 14
Financial Projections
36-Month Cash Flow ($15M Raise)

Phase 1: Coding beachhead (M1–18) · Phase 2: Platform expansion (M19–36) · Series A modelled at M24

(Chart: Monthly Revenue · Monthly Burn · Cumulative Cash)
Enterprise ACV · Coding
$40K–80K/yr

Team-based annual license. Benchmarked vs. observability tools, not Copilot per-seat.

B2B Licensing · Roleplay
$0.002–0.008/session

Usage-based. NSP's zero-token advantage makes this strongly ROI-positive at scale.

Gross Margin Target
75–85%

Core computation is non-LLM. Low COGS. Margin improves with scale.

14 / 14 · Summary
The cognitive state layer is uncontested.
NSP is the most mature implementation in that layer today — five production runtimes, two published papers, an independently verifiable benchmark result, and third-party architectural convergence validating the core primitives.
Research Arc
Paper 1 "Think Before You Act" zenodo.19334052 ↗
Paper 2 "Programmable Emergence" zenodo.19392279 ↗
Paper 3 "Why Structure, Not Similarity" · forthcoming before May 2026
$250M–$400M
Recommended Pre-Money
$15M–$30M
Seed Round
$346B
Combined TAM · 2030
NSP · Narrative State Protocol · Confidential · April 2026
Appendix A1
Appendix · Strategic Recommendations
Revenue Model — Recommended Architecture

Core argument: The plan tries to monetise five verticals simultaneously. That is not a revenue model — it is a wish list. Every successful infrastructure company picked one wedge first. NSP's natural wedge is NSP-Coding: the Claude Code integration exists and is in production, the customer problem is urgent and has a clear dollar cost, and the buyer already has AI tooling budget.

Phase 1 · Months 0–18 · Coding-First

Direct enterprise sales to AI engineering teams deploying AI coding agents. NSP-Coding as a standalone enterprise plugin for Claude Code plus an API layer for teams building custom agents.

Month · Action · Revenue Signal
0–3 · 3–5 design partner pilots, free with case-study obligation · $0 (high signal)
3–6 · Convert 2–3 pilots to paid, publish benchmark results · $150K–300K ARR
6–12 · 8–12 enterprise accounts via referrals + enterprise BD outreach · $600K–1.2M ARR
12–18 · Companion mode B2B beta with 2–3 licensing partners · $1.5M–3M ARR

18-month ARR target: $2M–4M. This is the number that unlocks a credible Series A at the platform valuation the pitch claims.

Phase 2 · Months 18–36 · Platform Expansion

NSP-Roleplay B2B

License personality engine to companion app developers. API integration — low-touch, high-margin. Usage-based per active character session.

NSP-Edu

Enter through 2–3 large EdTech platforms as embedded feature, not standalone product. Per-MAU licensing fee.

NSP-Learn R&D

Fixed-term contracts with pharma or materials science labs. Long sales cycles but very high ACV ($200K–500K/contract).

Data Flywheel Products

Package cognitive transition data as benchmarking API for LLM providers and enterprise AI teams.

Illustrative P&L ($M) — $15M Raise Scenario

Year · ARR · Headcount · Annual Burn · Net Cash Position
Year 1 · $0.5M · 12 · $6.0M · $9.0M
Year 2 · $3.5M · 22 · $9.5M · → raise Series A
Year 3 · $12.0M · 38 · $14.0M · Series A funded

The $15M raise is sufficient to reach Series A if GTM stays disciplined around the coding beachhead. The $30M raise is only necessary for a multi-vertical parallel push — a riskier posture at this stage.

36-Month Cash Flow Projection

$15M raise · Base case · Monthly view · All figures $K

(Chart: Monthly Revenue · Monthly Burn · Cumulative Cash)

Key Unit Economics

Enterprise ACV · Coding
$40K–80K/yr

Team-based annual license. Benchmarked vs. Datadog/Sentry observability tools, not GitHub Copilot per-seat.

B2B Licensing · Roleplay
$0.002–0.008/session

Usage-based per active character session. NSP's zero-LLM-token advantage makes this strongly ROI-positive at scale.

Gross Margin Target
75–85%

Core computation is non-LLM → low COGS. State management consumes zero tokens; LLM inference only on extraction turns. Companion-scale deployments see margin improve most strongly with scale.

Appendix A2
Appendix · Strategic Recommendations
Pricing Architecture — Recommended Structure

Core argument: The most important pricing decision is to avoid per-seat comparison to GitHub Copilot ($19/seat/mo). NSP is infrastructure, not a productivity tool. Price as infrastructure — team-based annual license benchmarked against observability tools like Datadog — to protect margin and positioning.

NSP-Coding · Enterprise Plugin · Team-Based Annual License

Tier · Target · Price · Includes
Team · 5–25 AI engineers · $2,500/mo ($30K/yr) · NSP-Coding hooks, dashboard, standard support, 90-day audit log
Enterprise · 25–200 AI engineers · $6,500/mo ($78K/yr) · Team + custom schemas, 99.5% SLA, SSO, 2-year audit log, onboarding
Enterprise+ · 200+ engineers or platform · $15K+/mo (custom) · Multi-tenant deploy, dedicated infra, compliance exports, custom integrations

Team tier ($30K/yr) sits at the low end of the typical "needs procurement" threshold (~$25–50K at most companies), enabling direct purchase by engineering leaders. Enterprise is benchmarked against Datadog, Sentry, and incident.io, all reliability/observability tools with similar value framing.

NSP-Roleplay · B2B Usage-Based Licensing

Tier · Monthly Min. · Per-Session · Target
Indie · $500 · $0.008 · <50K MAU
Growth · $2,000 · $0.005 · 50K–500K MAU
Scale · $8,000 · $0.002 · 500K+ MAU

At 100K DAU × 5 sessions/day at $0.005/session, that is ~$75K/mo in licensing fees. NSP's zero-token personality engine saves the licensee 10%+ in inference costs at this scale, making the ROI case strong.
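The session arithmetic checks out in a few lines; the figures are the deck's illustrative assumptions, not measured usage:

```python
# Illustrative licensing revenue at the assumed scale from the deck:
# 100K DAU, 5 sessions/day, $0.005 per session (Growth-tier rate).
dau = 100_000
sessions_per_day = 5
price_per_session = 0.005  # dollars

daily = dau * sessions_per_day * price_per_session
monthly = daily * 30  # 30-day month for simplicity

print(f"${daily:,.0f}/day -> ${monthly:,.0f}/mo")  # $2,500/day -> $75,000/mo
```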

NSP-Learn · R&D Fixed-Term Contracts

Contract · Price · Duration
Pilot · $50K · 3 months
Annual · $200K–500K · 12 months
Strategic · $500K+ · Multi-year

NSP-Edu · Per-Learner Licensing

Model · Rate
Per-active-learner/month · $0.50–$1.50
Platform integration fee (one-time) · $25K–75K
Annual minimum · $60K

Developer Sandbox — Replacing Open-Core Adoption Funnel

Removing open-core eliminates the primary developer adoption engine without replacing it. A trial mechanism is required to maintain bottom-up discovery.

Option A · 30-Day Free Trial

Full NSP-Coding access, no credit card required, one workspace. Standard SaaS conversion motion; infrastructure tools with genuine product value typically convert at 15–25%.

Option B · Developer Sandbox (Permanent)

NSP-Coding for personal/non-commercial projects, capped at 500 gate invocations/day. Drives developer familiarity and word-of-mouth without cannibalising the Team tier.

Appendix A3
Appendix · Strategic Recommendations
Technical Risk & Dependency — Five Risks, Three Requiring Immediate Action
Risk 01 · HIGH · Distribution-Channel Concentration via Claude Code

The NSP cognitive state layer is architecture-agnostic by construction and runs on any tool-use-capable LLM. The current distribution channel for NSP-Coding, however, is Claude Code's PreToolUse hook, which is Anthropic-specific. If Anthropic changes that hook API, restricts third-party access, or absorbs cognitive gating natively, the integration surface would need to re-target — the underlying protocol would not change, but the go-to-market for the coding wedge would slow.

Mitigation
Priority · Action · Timeline
Immediate · Document the Claude-specific integration surface vs. the vendor-neutral protocol layer · Month 1
Short-term · Ship integration adapters for OpenAI and Google tool-use APIs (no protocol rewrite required) · Month 3–6
Medium · Certify NSP-Coding on current-generation OpenAI and Google frontier models · Month 6–12
Strategic · Formal API partnership with Anthropic, converting dependency into co-development lock-in · Ongoing
Risk 02 · HIGH · Foundation Model Native Competition

Anthropic's extended thinking already gives Claude introspective capabilities. A natural extension is structured belief state output — which is exactly what NSP's extraction layer does. This risk is absent from the current plan entirely.

Mitigation Strategy
API partnership: Approach Anthropic about formal co-marketing — convert exposure into roadmap influence before they build against NSP
Audit trail moat: Historical belief transition data stored in customer environments cannot be replicated by a model upgrade — emphasise this as the irreplaceable asset
Speed: The mitigation for this risk is time-to-market, not product depth — get customers locked in before the window closes
Risk 03 · MEDIUM-HIGH · Gate Latency Overhead

The cognitive gate adds latency before every PreToolUse event, and enterprise buyers will raise it in the first sales call. No benchmarks are published yet.

Estimated Latency Breakdown
Component · Est. Latency
Belief state retrieval (YAML) · <5 ms
Confidence threshold check · <1 ms
Understanding extraction (LLM call) · 400–1,200 ms
Gate decision + state write-back · <15 ms
Total (gated turn) · ~420–1,220 ms
  • Selective gating: gate file deletions and multi-file refactors only — reduces gate invocations by 60–70%
  • Async shallow gate: run the gate in parallel for low-risk ops; roll back on failure — eliminates perceived latency for the majority of interactions
  • Cache hot understanding: inherit verified understanding of a file across subsequent turns on the same file
  • Publish benchmarks: NSP-gated vs. ungated on SWE-bench — show fewer reverts and CI failures, reframe latency as positive ROI
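The selective-gating mitigation from the first bullet can be sketched as a simple allow-list check: only destructive operations pay the gate's LLM-call latency. The operation names and the destructive set here are illustrative assumptions, not NSP's actual taxonomy:

```python
# Sketch of selective gating: only destructive or multi-file operations
# trigger the full cognitive gate; low-risk ops pass ungated.
# Operation names and the DESTRUCTIVE set are assumptions for illustration.
DESTRUCTIVE = {"delete_file", "multi_file_refactor", "drop_table"}

def needs_gate(op: str) -> bool:
    """True only for operations worth the gate's 400-1,200 ms extraction call."""
    return op in DESTRUCTIVE

ops = ["read_file", "edit_line", "delete_file", "read_file"]
gated = [op for op in ops if needs_gate(op)]
print(gated)  # ['delete_file'] -- only 1 of 4 ops pays gate latency
```

In this toy trace, gate invocations drop from 4 to 1, consistent with the 60–70% reduction the bullet claims for realistic workloads.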
Risk 04 · MEDIUM · Cross-Domain Generalisation Proof

The platform thesis rests on NSP's ability to carry the same cognitive machinery across coding, research, music, and companion domains. Two domains (coding + research) are shipped. The music domain is still in pilot, and formal generalisation proofs appear in Paper 3 before May 2026.

Mitigation: Paper 3 (Why Structure, Not Similarity) closes this risk with the geometric-immunity argument and three-domain architecture. Until it ships, investors can verify the cross-domain claim directly against the Anthropic kernel result — a neutral third-party benchmark.
Risk 05 · MEDIUM · Multi-LLM Benchmark Evidence Gap

All published benchmarks to date use Claude. This is an evidence gap, not a capability gap — NSP's cognitive state layer is architecture-agnostic by design. Enterprise customers with existing OpenAI or Google contracts will nonetheless ask for a head-to-head benchmark on their stack before committing.

Fix (low effort): Run the cognitive gating benchmark against current-generation OpenAI and Google frontier models. Publish as a supplement to Paper 2. Converts a perceived risk into a market-expansion story.

Risk Summary

Risk · Severity · Likelihood · Mitigation Complexity · Action Required
Distribution-channel concentration · High · Medium · Low · Integration adapters + Anthropic partnership
Foundation model native competition · High · Medium-High · Medium · API partnership + audit trail moat
Gate latency overhead · Medium-High · High (will be raised) · Low · Selective gating + benchmarks
Cross-domain generalisation proof · Medium · Low (nearly closed) · Low · Paper 3 + Anthropic benchmark
Multi-LLM benchmark evidence gap · Medium · Low (fixable) · Low · Run benchmarks, publish