Trust Verification Layer for AI · reality compiler · est. 2026

Compile reality for AI work.

Mejepa opens a whole new market: independent AI verification that the AI cannot grade, game, or self-attest. It turns code changes, documents, tests, audit trails, and outcomes into signed reality signals you can use to verify claims, predict failures, trigger agents, and prove what happened.

Q1–Q5 reality predictions · 13 frozen instruments · reward signals from outcomes · ed25519 audit trail
WITNESS LOG · PUBLIC EXTRACT · 2026-05-12T14:02Z · VERIFIABLE OFFLINE
Pass · Filing — 14 citations matched source · 0 hallucinations · pre-filing diligence record · sig 0x9a3f…d7c1
Abstain · Compliance summary — low conformal coverage · escalated to human reviewer · sig 0x7e21…b9f0
Out of distribution · Underwriter proposal — outside validated envelope · refused before delivery · sig 0x4c08…2a1e
Fail · AI-generated PR — tests claimed passing · tests do not exist · merge blocked · sig 0x1d77…5b3a
§00 WHY THIS IS A WHOLE NEW MARKET

The missing layer is Trust Verification.

AI adoption is forcing every organization to answer a question existing software was not built to answer: how do you know the AI output is true, safe, in-bounds, and provable later? Mejepa is the Trust Verification Layer for that problem.

Mejepa verification packet surrounded by code, legal filing, compliance evidence, audit log, and witness-chain receipts.
01 / THE GAP

AI now does real work. Most teams only have self-attestation.

Governance dashboards produce policies. Eval tools produce scores. Humans produce review notes. None of those compile the shifting state of reality into a signed prediction record that an auditor, release manager, insurer, or agent can consume.

02 / THE DIFFERENCE

Mejepa is independent of the AI being judged.

The verifier is structurally separate from the generating model. Frozen instruments read the artifact, the conformal guard refuses unsafe predictions, and the witness chain signs the result. That is why it can't be gamed by the AI.

03 / THE NEW CATEGORY

A predictive world model for AI work.

Mejepa does not just say pass or fail. It predicts what exists, what works, why it fails, what else could go wrong, and how the change impacts reality. That turns verification into an operating layer.

Based on what?

This market exists because AI outputs are now entering courts, codebases, compliance workflows, procurement, insurance, and regulated operations. The unsolved problem is not "more AI." The unsolved problem is trusted reality feedback: independent verification, durable evidence, and reward signals grounded in what actually happened.

What changes?

Organizations can stop using people as the only reality check. Engineers, lawyers, auditors, and security teams still set the bar, but Mejepa produces the first-pass signed record: what the AI claimed, what reality showed, what failed, what abstained, and what needs a human.

§0 WHAT CHANGES WHEN AI IS GROUNDED IN REALITY

Stop managing prompts. Start operating from reality signals.

A reality compiler converts messy work into deterministic readings: what changed, what exists, what passed, what failed, what drifted, and what needs a human. That gives organizations a new operating surface for AI agents, compliance, code review, procurement, and risk.

Verified reality events flowing into agent action nodes through signed audit signals.
LIVE LEAD PATH

One concrete next step.

You do not need to understand the whole system to start. You only need to know what to send: one AI-generated PR, filing, compliance workflow, product output, or audit artifact.

Send one artifact
01

"Done" becomes verifiable

When an agent says a task is complete, Mejepa checks the filesystem, tests, claims, provenance, and witness chain. The organization gets a signed record instead of a status update.

02

Failures surface before they become incidents

RealityPrediction asks what works, why it might fail, what else could go wrong, and how the patch impacts production, security, cost, compliance, and edge cases.

03

Agents react to reality, not timers

Instead of cron jobs and prompt loops, agents can activate when signed reality changes: a test flips, a claim fails, a workflow leaves its validated envelope, or an auditor-ready packet is ready.

04

Reward signals come from durable outcomes

Edits, test outcomes, reviewer feedback, and field results become training material. The system learns from reality because the target side is frozen, measured, and externally defined.

05

Audit evidence is created as work happens

Every verdict is signed, hashed, and replayable. Compliance stops being a cleanup project at the end of the quarter and becomes a byproduct of verified execution.

06

Human review stops being the bottleneck

Pass, Fail, Abstain, and Out-of-distribution verdicts route attention. Reviewers stop rereading everything and focus on the claims, diffs, citations, workflows, and outputs that reality rejected.

Deterministic outcomes Mejepa produces

For code, the system is designed around five operator questions from the source-code map: does the claim exist on disk, does the code work, why would it fail, what else could go wrong, and how does the patch impact reality.

  • Signed verdicts for AI-assisted artifacts
  • RealityPrediction records agents can consume
  • Human-review queues for risky or uncertain work
  • Offline-verifiable audit trails for compliance
  • Reward signals grounded in tests, edits, and outcomes
  • Repeatable evidence packets for buyers, auditors, insurers, and courts
§1 THE PROBLEM

AI is doing the work. Reality is not in the loop.

Every AI-assisted output sits between productivity and liability. When the auditor, judge, insurer, customer, or release manager asks "how do you know this happened and worked?" — most teams have no signed answer.

AI can draft a brief, triage a ticket, write a compliance memo, or change code. Five questions remain unanswered every single time:

Did the AI cite a real source? Did it make an unsupported claim? Did it leave its validated operating envelope? Did a human review the risky parts? Can you prove any of that later — six months later, in front of a regulator, judge, or assessor?

Mejepa answers those questions with signed verification records. Every AI-assisted output is checked against thirteen frozen instruments, guarded by a conformal boundary, and signed so it can be replayed and verified by anyone — long after the original AI tool has changed or vanished. This is the durable position: not another AI tool, but the independent trust layer around AI work.

Send the artifact. We return the packet. Fixed price. Fixed turnaround. Signed.

§2 HOW IT WORKS

Four invariants. One reality compiler.

Internally, Mejepa compiles work into reality signals: fixed readings, calibrated predictions, conformal refusal, and a signed witness chain. Externally, you get a packet humans can trust and agents can act on.

01 / READ

13 frozen instruments

Every AI output flows through fixed-shape lenses that never change between releases. The reading is always the same for the same input — a calibrated ruler, not a black box.

02 / PREDICT

The predictor

A learned model forecasts what should happen in a shared embedding space. It can be retrained. The instruments cannot. The split is what makes drift detectable.

03 / GUARD

Conformal boundary

Before any verdict leaves, a statistical geometry check asks: is this prediction inside what Mejepa learned safely? If not, abstain. The abstention itself is signed.
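The refusal step follows the standard split-conformal recipe: calibrate a threshold on held-out nonconformity scores, then abstain on anything beyond it. A minimal sketch of that idea — the quantile formula is the textbook one, not necessarily Mejepa's exact calibration, and the scores and 90% coverage target below are made up:

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile: with n calibration scores, the
    ceil((n+1)(1-alpha))-th smallest score gives >= 1-alpha coverage."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the quantile
    return sorted(cal_scores)[min(k, n) - 1]

def verdict(nonconformity, threshold):
    """Abstain when the prediction falls outside the calibrated region."""
    return "PASS" if nonconformity <= threshold else "ABSTAIN"

# Illustrative calibration scores from past verified predictions
cal = [0.12, 0.08, 0.31, 0.22, 0.15, 0.27, 0.19, 0.11, 0.25, 0.09]
tau = conformal_threshold(cal, alpha=0.1)
print(verdict(0.14, tau), verdict(0.90, tau))  # → PASS ABSTAIN
```

The point of the construction: the threshold comes from data the predictor never trains on, so coverage holds regardless of how the predictor drifts.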

04 / SIGN

Witness chain

Every verdict is signed with an ed25519 key and hashed into an append-only chain. An auditor replays it offline. The signature is the proof. The chain is the record.
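The sign-and-chain step can be sketched in a few lines — a schematic using the Python `cryptography` package, with an illustrative record layout (field names are not the shipped format):

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_verdict(chain, key, verdict):
    """SHAKE-256 hash the verdict together with the previous link,
    Ed25519-sign the digest, and append the new link."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    payload = json.dumps(verdict, sort_keys=True)
    digest = hashlib.shake_256((prev + payload).encode()).hexdigest(32)
    sig = key.sign(bytes.fromhex(digest)).hex()
    chain.append({"verdict": verdict, "digest": digest, "sig": sig})

key = Ed25519PrivateKey.generate()
chain = []
append_verdict(chain, key, {"artifact": "pr-42", "result": "PASS"})
append_verdict(chain, key, {"artifact": "memo-7", "result": "ABSTAIN"})

# Replay: recompute every digest and verify every signature offline.
pub, prev = key.public_key(), "0" * 64
for link in chain:
    payload = json.dumps(link["verdict"], sort_keys=True)
    assert hashlib.shake_256((prev + payload).encode()).hexdigest(32) == link["digest"]
    pub.verify(bytes.fromhex(link["sig"]), bytes.fromhex(link["digest"]))
    prev = link["digest"]
print("chain verified")
```

Because each digest folds in the previous link, altering any earlier verdict breaks every digest after it — that is what makes the log append-only in practice.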

Human Oversight Rule

Mejepa uses AI and agentic workflows internally — but every customer-facing verification packet receives human oversight before delivery. Our small team is a trust signal, not a constraint.

§3 WHERE TEAMS NEED SIGNED PROOF

Four packets. Quoted price. Fixed turnaround.

No platform demos. No multi-month integrations. No procurement back-and-forth. Send the artifact you would most hate to defend. We return the signed verification packet.

№ 01 · LITIGATION $750–$2,500

Legal AI Filing Verification

Litigation firms · Risk partners · General counsel

Submit one AI-assisted brief, memo, or filing. In 48 hours, Mejepa returns a signed packet showing every citation checked, every quote matched to source, every unsupported claim flagged, every section requiring human attorney review.

Stop a Mata v. Avianca
№ 02 · CYBERSECURITY & COMPLIANCE $1,500–$3,500 + retainer

Cybersecurity AI Evidence Snapshot

MSPs · MSSPs · vCISOs · CMMC & NIST consultants

In five business days, Mejepa reviews a client's AI-assisted security or compliance workflow and returns an AI usage inventory, a risk map, signed sample verification records, and a CMMC/NIST-mapped audit packet — without ripping out the existing security stack.

Attackers test code. Auditors test evidence.
№ 03 · AI VENDORS $3,500–$10,000

AI Insurability Evidence Pack

AI vendor founders · GCs · Heads of Product

Enterprise buyers want proof your AI can be trusted. In five business days, Mejepa returns an output sample review, a failure-boundary summary, an abstention and escalation policy, signed verification examples, and a procurement-ready evidence memo with underwriter-facing language.

Unblock the security questionnaire
№ 04 · ENGINEERING $500–$2,500 / audit

AI Code Done-Claim Audit

Dev agencies · AI coding shops · Teams using agents

Your coding agent says it is done. Mejepa checks reality. Send one AI-generated PR, repo change, or agent session — Mejepa returns whether the claimed change exists, whether the tests prove it, where the edge cases hide, and a signed verification record.

"Done" should mean something

Building a vCISO or GRC platform? Mejepa OEMs an AI Risk Verification Module you can embed. · Future applications: AI underwriting · financial-services model risk · healthcare post-market monitoring.

Cybersecurity & compliance wedge

Your clients are using AI. Mejepa gives you the evidence auditors will ask for.

MSPs, MSSPs, vCISOs, and CMMC consultants are adding AI to ticket triage, policy drafting, security summaries, compliance evidence, and client reporting. The new gap no one can fill: how do you know the AI didn't invent, leak, or misstate something?

Mejepa hands you the auditor-ready packet — mapped to CMMC 2.0, NIST SP 800-171, and the controls your clients already care about. Not another EDR. Not a SIEM. Not a GRC dashboard. An evidence service that sits beside your existing stack.

See the wedge
[PASS] AI ticket triage · client=acme · sig=0x9a3f
[PASS] M365 anomaly summary · client=northshore · sig=0x7e21
[ABSTAIN] Compliance doc draft · low conf · human review · sig=0x4c08
[PASS] CMMC 800-171 evidence pack · client=acme · sig=0x1d77
[PASS] vCISO advisory memo · sig=0xb0e3
[FAIL] Phishing remediation note · unsupported claim · sig=0x5f29
→ download_audit_packet_2026Q2.pdf
§4 THE TRUST MODEL

Three primitives. Replayable, refusable, signed.

Mejepa's verdicts hold up under audit because the architecture makes hand-waving impossible. The math is public. The signatures are verifiable. The chain is append-only.

PRIMITIVE / 01

Frozen instruments

Thirteen calibrated lenses that do not change between releases. Same input, same reading — every time. The foundation for any verdict that can be defended later.

PRIMITIVE / 02

Conformal guard

A statistical boundary with a published coverage rate. When the prediction drifts outside what Mejepa learned, it abstains. The abstention is signed too.

PRIMITIVE / 03

Witness chain

Every verdict is signed with an ed25519 key and hashed into an append-only chain. Six months later, an auditor replays the verdict and verifies the signature offline.

Mejepa is not AI governance software, AI safety tooling, AI monitoring, EDR, SIEM, CASB, SASE, DLP, a model wrapper, or a chatbot. It is an AI verification service that produces a packet your auditor can verify themselves, offline, in thirty seconds — without trusting us.

Provenance · Built on Teleox.ai · Published research

Mejepa is the commercial productization of Teleox.ai — an independent research framework on meaning compression. The 13-instrument panel, Gτ guard, and witness chain are open-research primitives (Derived Data Abundance + Teleological Constellation Training). No black box. Read the papers.

Zenodo DOI: 10.5281/zenodo.19977981 · Dynamic / ME-JEPA: An Audited, Domain-Portable World-Model Runtime · Royse, May 2026

MEJEPA · AI VERIFICATION SERVICE · ED25519 · WITNESSED · SIGNED · TAMPER-PROOF
§5 WHAT WE PROMISE · WHAT WE'RE BUILDING

What ships today. What's on the roadmap.

Every Mejepa packet is built on shipping, peer-reviewable primitives. The vision below is where the methodology goes — presented as direction, never sold as a feature that has not shipped.

Today · The promise
  • Full-State Verification — every claim has a corresponding RocksDB row, file SHA-256, and a separate verifier code path. The verifier doesn't share code with the writer.
  • External frozen instruments — declared by TOML domain pack. The list is the contract. Not learned. Not hidden.
  • Goodhart-immune Gτ guard — constellation centroids are frozen; the predictor cannot learn to fool its own guard.
  • Audited execution traces as a training-data class — substantially lower recursion-replacement collapse hazard than synthetic LLM output (§7 of the paper).
  • Witness chain — SHAKE-256 hashed, ed25519 segment-signed, append-only. Private key on operator hardware only.
  • Single-GPU on commodity hardware — no multi-tenant cloud, no SaaS attack surface. Reproducible from Zenodo DOI 10.5281/zenodo.19977981.
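To make the "declared by TOML domain pack" contract concrete, a hypothetical manifest might look like the following — the field names and instrument IDs here are illustrative, not the shipped schema:

```toml
# Hypothetical domain-pack manifest; field names are illustrative.
[pack]
name    = "me-jepa-code"
version = "1.0"

# Each instrument is declared, frozen, and part of the contract.
[[instrument]]
id     = "claim-exists-on-disk"
frozen = true

[[instrument]]
id     = "tests-prove-change"
frozen = true

[guard]
coverage_alpha = 0.10   # published conformal coverage rate
```

The declarative form is the point: the instrument list is readable by the buyer and the auditor before any verdict is issued.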
Tomorrow · The vision
  • Domain packs as the unit of expansion — ME-JEPA-Legal, -Security, -Insurance, -Voice, -Image, -Robotics, -Math ship as TOML manifests, not new binaries.
  • Counterfactual minimum-edit — return the smallest change that would have made the verdict Pass. The agent-developer product surface.
  • Cross-customer patch-similarity graph — anonymized recurring failure patterns: "your bug looks like 47 others, here is the fix."
  • Live underwriter / regulator API — today: static signed pack. Tomorrow: live coverage stream into carriers and AI assurance practices.
  • Verified-by-Mejepa — a procurement-side trust mark that does for AI evidence what SOC 2 did for cloud trust.
Vision items are sold as direction. The packet you buy this week is bounded by the Promise column.
§6 WHO & WHAT THIS RUNS ON

Two co-founders. One research substrate. Three productized lines.

Mejepa is built and operated by two people, on a research stack we publish openly. The methodology can be read, cited, and verified by anyone — including the auditor on the other side of the verdict.

Co-Founder № 01

Steve Abbey

Positioning · Commercialization · Claim discipline

Multi-decade serial founder. Currently also co-founder of Leapable.ai, SyntheticJuror.ai, and RealLifeAI — the parallel productizations of the Teleox research substrate. Steve owns the buyer side of Mejepa: which packets we sell, how they're priced, and what we will and will not put a signature on.

"I help lawyers and operators remove themselves from daily grinds — with patent-pending synthetic juror and scenario-simulation technologies."
LinkedIn →
Co-Founder № 02

Chris Royse

Research · Systems · Verification primitives

AI engineer and educator at Kansas State University; specializes in neuromorphic computing and cognitive architectures. Founder of Frontier Tech Strategies. Open-source builder: Pheromind (AI agent swarm framework, 380★), CodeGraph, ContextGraph, NeuralShrink. The 13 frozen instruments and conformal guard inside every Mejepa packet are Chris's research, productized.

"I bridge the gap between biological brains and silicon."
LinkedIn → GitHub →
§6.1 WHAT MEJEPA RUNS ON

The research substrate & the sister products.

Mejepa is one of three productizations of the same research line. Each product addresses a different buyer; they share the methodology, the frozen instruments, and the witness chain.

Research substrate teleox.ai →

Teleox.ai

Teleox is the open research line on meaning compression — Derived Data Abundance (DDA), Teleological Constellation Training (TCT), Context Graph, and ME-JEPA. Mejepa runs every customer artifact through the Teleox Context Graph (N=13 instruments) and uses the Teleox conformal-coverage method to decide when to abstain. The benefit for the buyer: methodology you can cite. When an underwriter, assessor, or judge asks "how does this work?" we hand them the Teleox papers, not a vendor pitch deck.

Sister product leapable.ai →

Leapable.ai

Leapable is the productized OCR Provenance pipeline from the Teleox stack, applied to creator knowledge marketplaces — every AI answer cites the exact page and paragraph it came from. Mejepa uses Leapable's hardened OCR Provenance MCP to ingest, parse, and witness the artifacts customers submit for verification (filings, PRs, evidence files, security workflows). The benefit for the buyer: every page reference in a Mejepa packet is traceable to a source line in your original artifact. There is no hallucinated citation, because the citation is a provenance receipt, not an LLM output.

Two co-founders. Three productized lines (Mejepa · Leapable · SyntheticJuror). One open research substrate (Teleox.ai). No black box.

§7 FREQUENTLY ASKED

The questions auditors, founders, and AI engines ask.

Answer-first. Statistics-grounded. Cited where citable. Written for human buyers and indexed for AI search.

Why is this a new market?

AI is now producing code, filings, compliance evidence, procurement answers, underwriting material, and operational decisions. The missing category is Trust Verification: a system outside the AI that compiles reality, predicts failure modes, signs the evidence, and creates durable reward signals. Mejepa is built for that category. It is not a prompt wrapper, governance dashboard, or eval score.

What is Mejepa?

Mejepa is the Trust Verification Layer for AI: an AI verification service and reality compiler that produces signed, tamper-proof evidence of AI-assisted work. Customers send one AI-assisted artifact — a filing, security workflow, AI product output, or code change — and receive a signed verification packet anyone can verify offline in 30 seconds. Mejepa is built on the published Teleox.ai research framework (Zenodo DOI 10.5281/zenodo.19977981) and is not an AI governance platform, AI safety tool, or model wrapper.

How does Mejepa work?

Mejepa runs every AI output through four invariants: (1) 13 frozen instruments declared by a TOML domain pack — same readings every time; (2) a learned predictive world model; (3) a conformal Gτ guard with frozen centroids that the predictor cannot fool; (4) a witness chain that SHAKE-256 hashes and ed25519-signs every verdict into an append-only log. The result is a verdict — Pass, Fail, Abstain, or Out-of-distribution — that an auditor can replay offline using a published public key.

How much does Mejepa cost?

Mejepa sells productized fixed-scope packets, not subscriptions. Legal AI Filing Verification: $750 Express (24 hours) or $2,500 Counsel-Reviewed (48 hours). Cybersecurity AI Evidence Snapshot: $1,500–$3,500 plus an optional $1,000–$5,000/month retainer (5 business days). AI Insurability Evidence Pack: $3,500–$10,000 across three tiers (5 business days). AI Code Done-Claim Audit: $500–$2,500 per audit, with turnaround from 2 hours depending on PR size.

How is Mejepa different from AI governance platforms like Credo AI or Holistic AI?

AI governance platforms produce policies, dashboards, and risk scores. Mejepa produces a signed verification packet for a single AI-assisted artifact that an auditor can verify offline without trusting Mejepa. The pure-play AI governance category was sub-$50M total revenue in 2025 (Credo AI ~$2.2M ARR, Fiddler ~$2.0M ARR, Robust Intelligence sold to Cisco at $9.3M ARR). Mejepa restructures the frame: an AI evidence service, productized, sold by the packet, built on peer-reviewable research.

Who built Mejepa?

Mejepa was built by two co-founders: Steve Abbey (multi-decade serial founder; co-founder of Leapable.ai, SyntheticJuror.ai, RealLifeAI) and Chris Royse (AI engineer and educator at Kansas State University; specializes in neuromorphic computing and cognitive architectures; author of the Dynamic / ME-JEPA research paper at Zenodo DOI 10.5281/zenodo.19977981). The technology is the commercial productization of the Teleox.ai research framework.

Can I verify a Mejepa packet myself without trusting Mejepa?

Yes. Every Mejepa packet ships with a machine-verifiable JSON witness chain signed with an ed25519 key. The Mejepa public key is published at mejepa.com/keys. Any party — opposing counsel, judge, malpractice carrier, CMMC assessor, insurance underwriter, or board auditor — can verify the packet offline in approximately 30 seconds using standard ed25519 tooling, without contacting Mejepa or trusting the vendor.

What does "signed AI output" actually mean?

A signed AI output is a cryptographically attested record that a specific AI-assisted artifact was verified by a specific methodology at a specific time. Mejepa signs every verdict with an ed25519 private key that never leaves operator-controlled hardware, hashes the verdict into a SHAKE-256 append-only chain, and publishes the verification public key. The resulting receipt is tamper-evident, replayable, and admissible as evidence of pre-deployment diligence.

One AI artifact becoming a sealed Mejepa verification packet with signed proof.
§8 GET STARTED

Send one artifact. Get signed proof.

Send the AI-assisted artifact you'd most hate to defend in front of an auditor, judge, regulator, buyer, insurer, or release manager. Mejepa returns a signed verification packet — and a path to a paid engagement.

Best first artifacts: AI-generated PR, legal filing, compliance memo, security workflow, AI product output sample, or procurement evidence request.

Mejepa · [email protected] · ED25519 · 2026