Lexis is a security-first, content-addressed programming language where programs are JSON DAGs — designed from the ground up for AI agents to write, verify, compose, and safely execute.
When people say "Python is best for AI," they mean Python has the best libraries for humans to build AI systems. But nobody has asked the real question: what language should AI agents actually write in when they need to produce reliable, secure, auditable programs?
Lexis isn't a wrapper around existing languages. It's a fundamentally new approach to computation where code is data, identity is cryptographic, and security is structural.
Programs are directed acyclic graphs serialized as JSON. No parser needed. No syntax to get wrong. Every program is a data structure an AI can read, write, and modify natively.
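The exact program schema lives in the Lexis spec; the sketch below uses made-up field names purely to illustrate the shape of the idea: a program is an ordinary JSON object, so an agent manipulates it as data rather than emitting syntax.

```python
import json

# Illustrative only -- the field names ("nodes", "op", "inputs", ...) are
# hypothetical, not the official Lexis schema. The point: a program is
# plain data, with edges expressed as references between node ids.
program = {
    "capabilities": ["PURE_COMPUTE"],   # manifest: everything else is denied
    "nodes": {
        "n1": {"op": "CONST", "value": 6},
        "n2": {"op": "CONST", "value": 7},
        "n3": {"op": "MUL", "inputs": ["n1", "n2"]},
    },
    "outputs": ["n3"],
}

# No parser, no syntax errors: an agent edits the structure directly.
program["nodes"]["n2"]["value"] = 8
print(json.dumps(program, indent=2))
```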
Every node, subgraph, and result has a cryptographic hash. Same computation always produces the same hash. This makes caching trivial, tampering detectable, and deduplication automatic.
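A minimal sketch of what content addressing buys, assuming a canonical JSON serialization (sorted keys, fixed separators) and the third-party `blake3` Python package; Lexis's actual canonicalization rules aren't reproduced here.

```python
import json
from blake3 import blake3  # pip install blake3

def content_hash(node: dict) -> str:
    # Canonical serialization: sorted keys, no stray whitespace, so the
    # same structure always yields the same bytes and the same digest.
    canonical = json.dumps(node, sort_keys=True, separators=(",", ":"))
    return blake3(canonical.encode("utf-8")).hexdigest()

a = {"op": "MUL", "inputs": ["n1", "n2"]}
b = {"inputs": ["n1", "n2"], "op": "MUL"}     # same content, different key order
assert content_hash(a) == content_hash(b)     # identical hash -> cache hit, dedup

tampered = {"op": "MUL", "inputs": ["n1", "n9"]}
assert content_hash(tampered) != content_hash(a)  # any change is detectable
```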
Programs declare exactly what capabilities they need. Everything else is denied. File access, network calls, GUI rendering — each requires explicit permission. No capability, no access.
Division by zero doesn't crash the program — it produces an ErrorValue that flows through the graph like any other data. Downstream nodes can catch it, inspect it, or let it propagate.
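An illustrative sketch of the error-as-value idea; the `ErrorValue` class and its fields below are hypothetical, but the flow matches the description: a failing node returns a value instead of raising, and downstream nodes choose to handle it or pass it on.

```python
from dataclasses import dataclass

@dataclass
class ErrorValue:
    # Hypothetical shape -- illustrates errors flowing as ordinary data.
    kind: str
    message: str
    origin_node: str

def safe_div(a, b, node_id="n_div"):
    if b == 0:
        return ErrorValue("DIV_BY_ZERO", "division by zero", node_id)
    return a / b

def downstream(value):
    # A downstream node can inspect the error, substitute a default,
    # or return it unchanged to let it keep propagating.
    if isinstance(value, ErrorValue):
        return 0.0
    return value * 2

print(downstream(safe_div(10, 2)))  # 10.0
print(downstream(safe_div(10, 0)))  # 0.0 -- error caught, no crash
```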
Reusable logic is defined as self-contained DAGs with numbered input/output ports. No naming collisions. Content-addressed for identity. Compose two subgraphs into a pipeline with a single opcode.
Built-in agent identity, trust levels, capability scoping, and provenance tracking. Multiple AI agents can collaborate on shared programs with cryptographic audit trails.
Before any code runs, Lexis validates structure, checks dependencies, and enforces security. This is what makes AI-generated code trustworthy.
"An AI model generated 16 programs from a 54-line spec document. Every program passed all 5 pipeline stages on the first attempt. No examples. No trial-and-error. 100% success rate."
Lexis isn't trying to replace your general-purpose language. It's the execution layer between AI agents and real-world actions — making sure what runs is what was intended.
An AI assistant pulls sales data, calculates regional totals, and emails a summary.
Why Lexis: The entire pipeline is a JSON DAG you can inspect before execution. Capability manifests ensure the agent can only access what you explicitly allow. If it tries to touch the filesystem when it should only be making API calls, the sandbox blocks it.
A financial firm calculates risk scores. Regulators need proof the formula hasn't changed since audit.
Why Lexis: Every computation has a BLAKE3 content hash. Same formula = same hash. If a single node changes, the hash changes. Tamper-evident computation is built into the language, not bolted on after the fact.
Claude handles analysis, GPT handles summarization, a local model handles classification — all contributing to a shared pipeline.
Why Lexis: Three-way capability intersection ensures each agent only does what it's trusted to do. Agent A reads files, Agent B makes API calls — neither can exceed their boundaries. Provenance chains track who built which piece, with cryptographic audit trails.
A data team maintains dozens of ETL transforms: clean, merge, normalize, report.
Why Lexis: Each transform is a content-addressed subgraph. Same hash = guaranteed same result. Build a catalog, compose transforms with a single opcode, and cache results automatically. When the input hasn't changed, the 3-layer cache skips re-computation entirely.
A platform lets users or third-party AI agents submit custom logic — discount rules, workflow automation, data processing.
Why Lexis: User-submitted programs run with only the capabilities you grant. A discount calculator gets PURE_COMPUTE and nothing else — it literally cannot read files or make network calls. The program is a JSON blob you can store, version, and audit. No sandboxing hacks required.
An AI generates automation scripts that need to be verified correct before running in production.
Why Lexis: The lexis_check MCP tool validates structure, checks security, AND executes in one call. The AI generates a program, the system verifies it, and only then does it run. With meta-programming, the AI can use REFLECT to inspect what it built and EVAL to test sub-programs before assembling the final pipeline.
A CI/CD pipeline or business process needs to run the exact same way every time, with proof.
Why Lexis: If the program hash and input hashes match a previous run, you know the result is identical without re-running. The execution trace shows exactly which nodes fired in which order. Useful for compliance, debugging, and audit.
Python is optimized for humans building AI. Lexis is optimized for AI building software.
| Requirement | Python | Lexis |
|---|---|---|
| Structured format AI can natively produce | Text with syntax traps | JSON DAGs — no syntax errors |
| Validate before execution | Must run to find errors | 5-stage pipeline catches issues pre-execution |
| Security by default | In-process sandboxes repeatedly escaped | Capability manifests, closed by default |
| Deterministic & cacheable | Side effects make caching unreliable | Content-addressing guarantees identical results |
| Safe composition | Naming conflicts, import collisions | Content-addressed subgraphs, no global state |
| Tamper detection | Requires external tooling | BLAKE3 hashing on every node and result |
| Multi-agent collaboration | No built-in trust or provenance | Agent identity, trust levels, audit trails |
| Self-inspection | Limited reflection capabilities | REFLECT, QUOTE, EVAL — programs inspect themselves |
Every Lexis program declares exactly what it needs. Everything else is denied. This isn't bolted-on security — it's the foundation the language is built on.
Programs declare capabilities like PURE_COMPUTE, IO_STDOUT, FS_READ, NETWORK_OUT. The verifier checks these before execution. Missing a capability? The program doesn't run.
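A sketch of that closed-by-default check under an assumed opcode-to-capability mapping; the capability names come from above, but the mapping and the verifier function are illustrative, not the real verifier.

```python
# Each opcode needs one capability from the program's manifest, or
# verification fails before anything executes. Opcode names here are
# placeholders, not the official Lexis opcode table.
REQUIRED_CAPS = {
    "MUL": "PURE_COMPUTE",
    "PRINT": "IO_STDOUT",
    "READ_FILE": "FS_READ",
    "HTTP_GET": "NETWORK_OUT",
}

def verify_capabilities(program: dict) -> list[str]:
    granted = set(program.get("capabilities", []))
    errors = []
    for node_id, node in program["nodes"].items():
        needed = REQUIRED_CAPS.get(node["op"], "PURE_COMPUTE")
        if needed not in granted:
            errors.append(f"{node_id}: opcode {node['op']} needs {needed}, not granted")
    return errors  # empty list -> allowed to run

program = {"capabilities": ["PURE_COMPUTE"],
           "nodes": {"n1": {"op": "READ_FILE", "path": "data.csv"}}}
print(verify_capabilities(program))
# ['n1: opcode READ_FILE needs FS_READ, not granted']
```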
Network-capable programs must list every domain they contact. Requests to unlisted domains are blocked. HTTPS only by default. SSRF prevention blocks private IP ranges.
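Roughly what such a check involves, sketched with the Python standard library (`urllib.parse`, `socket`, `ipaddress`); the allowlist and function name are placeholders, and a production check also has to handle redirects and DNS rebinding, which this sketch ignores.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.example.com"}  # the program's declared allowlist

def check_url(url: str) -> None:
    parts = urlparse(url)
    if parts.scheme != "https":
        raise PermissionError("HTTPS only by default")
    if parts.hostname not in ALLOWED_DOMAINS:
        raise PermissionError(f"domain not in allowlist: {parts.hostname}")
    # SSRF guard: resolve the host and reject private / loopback / link-local ranges.
    for info in socket.getaddrinfo(parts.hostname, parts.port or 443):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise PermissionError(f"blocked private address: {addr}")

try:
    check_url("https://api.example.com/v1/report")   # allowlisted domain, public address
except (PermissionError, socket.gaierror) as exc:
    print("blocked or unresolved:", exc)
```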
In multi-agent scenarios, effective capabilities = trust ceiling ∩ agent declared ∩ program manifest. All three must agree. No capability laundering possible.
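The rule reduces to plain set intersection; a small sketch with illustrative capability sets:

```python
# A capability is effective only if the host's trust ceiling, the agent's
# declaration, and the program's manifest all include it.
trust_ceiling    = {"PURE_COMPUTE", "IO_STDOUT", "FS_READ"}
agent_declared   = {"PURE_COMPUTE", "FS_READ", "NETWORK_OUT"}
program_manifest = {"PURE_COMPUTE", "FS_READ"}

effective = trust_ceiling & agent_declared & program_manifest
print(effective)  # {'PURE_COMPUTE', 'FS_READ'} -- NETWORK_OUT never makes it through
```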
Level 0: pure compute only. Level 1: add stdout. Level 2: add file read. Level 3: full capabilities including file write and networking. Escalation requires explicit grant.
Every node execution is logged with agent identity, BLAKE3 hash, and timestamp. Append-only audit trails. Provenance chains track lineage across multi-agent composition.
Meta-programming's EVAL opcode enforces a ceiling: inner programs can only use capabilities the parent has, minus META_EVAL itself. No privilege escalation. Recursion depth capped at 3.
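A sketch of that ceiling rule under the constraints stated above (inner capabilities are the parent's minus META_EVAL, nesting capped at 3); the helper and its signature are hypothetical.

```python
MAX_EVAL_DEPTH = 3  # recursion ceiling

def eval_ceiling(parent_caps: set[str], requested: set[str], depth: int) -> set[str]:
    # Hypothetical helper: the inner program can never hold META_EVAL and
    # can only receive capabilities the parent already has.
    if depth >= MAX_EVAL_DEPTH:
        raise RecursionError("EVAL nesting exceeds depth limit")
    ceiling = parent_caps - {"META_EVAL"}
    return requested & ceiling   # escalation is structurally impossible

parent = {"PURE_COMPUTE", "IO_STDOUT", "META_EVAL"}
print(eval_ceiling(parent, {"PURE_COMPUTE", "NETWORK_OUT", "META_EVAL"}, depth=1))
# {'PURE_COMPUTE'}
```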
Lexis was built incrementally across 20 development phases, each adding capabilities while maintaining full backward compatibility and zero test regressions.
Core pipeline, 28 opcodes, error-as-value system, functions as subgraphs, MAP/FILTER/REDUCE, standard library of 18 reusable subgraphs. Proved AI can generate valid programs from spec alone (100% success rate).
File I/O, stdin, runtime parameters, HTTP networking with SSRF prevention and domain allowlists. 3-layer caching system. Benchmarked against 5 local AI models: 91% success rate on 22 tasks.
Agent identity and trust levels. Wire protocol with 19 message types. Transport layer, event loop, discovery, and negotiation. Adversarial hardening with 51 security tests.
MCP server with 7 tools for IDE integration (Claude Code, Cursor, Windsurf). Enhanced error reporting with "did you mean?" suggestions. Tiered spec for progressive learning. lexis_check validates and executes in one call.
6 GUI opcodes for native Windows applications (tkinter). Browser-based DAG visualization with 3 modes: static structure, step-through trace replay, and real-time live tracing.
5 opcodes for self-bootstrapping: EMIT_NODE, BUILD_SUBGRAPH, QUOTE, REFLECT, EVAL. Programs can now construct, inspect, and execute other programs — with capability-ceilinged security preventing privilege escalation.
Lexis is for when AI agents need to write, verify, compose, and safely execute programs — and you need to trust the results.
SQL isn't a general-purpose language, but it's the right tool for declarative data queries. Lexis aims to be that for AI-generated computation — declarative, verifiable, composable, and secure by default. It isn't competing with Python for human developers; it's competing to be the tool AI agents reach for when they need to build something trustworthy.