Type-Checked Compliance: Deterministic Guardrails for Agentic Financial Systems Using Lean 4 Theorem Proving
Abstract
The Lean-Agent Protocol is a formal-verification-based AI guardrail platform that uses the Aristotle neural-symbolic model to give autonomous financial AI systems cryptographic-level certainty of regulatory compliance.
The rapid evolution of autonomous, agentic artificial intelligence within financial services has introduced an existential architectural crisis: large language models (LLMs) are probabilistic, non-deterministic systems operating in domains that demand absolute, mathematically verifiable compliance guarantees. Existing guardrail solutions -- including NVIDIA NeMo Guardrails and Guardrails AI -- rely on probabilistic classifiers and syntactic validators that are fundamentally inadequate for enforcing complex multi-variable regulatory constraints mandated by the SEC, FINRA, and OCC. This paper presents the Lean-Agent Protocol, a formal-verification-based AI guardrail platform that leverages the Aristotle neural-symbolic model developed by Harmonic AI to auto-formalize institutional policies into Lean 4 code. Every proposed agentic action is treated as a mathematical conjecture: execution is permitted if and only if the Lean 4 kernel proves that the action satisfies pre-compiled regulatory axioms. This architecture provides cryptographic-level compliance certainty at microsecond latency, directly satisfying SEC Rule 15c3-5, OCC Bulletin 2011-12, FINRA Rule 3110, and CFPB explainability mandates. A three-phase implementation roadmap from shadow verification through enterprise-scale deployment is provided.
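The core idea of "execution is permitted if and only if the kernel proves compliance" can be illustrated with a minimal Lean 4 sketch. The names and the simplified risk limit below are illustrative assumptions, not taken from the paper's actual codebase or from the text of SEC Rule 15c3-5:

```lean
-- Hypothetical sketch: a proposed agent action is a term, a regulatory
-- constraint is a decidable proposition, and the action may execute only
-- if the Lean 4 kernel accepts a proof that the constraint holds.

structure Order where
  notional : Nat  -- order size in USD (illustrative)
  leverage : Nat  -- requested leverage (illustrative)

-- A pre-compiled "regulatory axiom": a toy stand-in for a pre-trade
-- risk limit in the spirit of SEC Rule 15c3-5.
abbrev Compliant (o : Order) : Prop :=
  o.notional ≤ 1000000 ∧ o.leverage ≤ 10

-- The agent's proposed action, stated as data.
def proposed : Order := { notional := 250000, leverage := 4 }

-- The compliance conjecture. If this proof fails to type-check,
-- the guardrail blocks execution; `decide` discharges it by
-- kernel-checked evaluation of the decidable predicate.
theorem proposed_ok : Compliant proposed := by decide
```

Because `Compliant` is decidable, the proof obligation is discharged mechanically rather than by a probabilistic classifier, which is the deterministic guarantee the abstract contrasts with existing guardrail tooling.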
Community
Hello everyone, thank you for following our work on the Lean-Agent Protocol. Given the architectural challenges of deploying probabilistic large language models (LLMs) in financial domains, we aim to provide a deterministic, formal-verification-based framework for agentic AI guardrails. Our research leverages the Aristotle neural-symbolic model to auto-formalize natural language institutional policies into Lean 4 code, ensuring that every proposed agentic action is mathematically proven to be compliant before execution. We hope this establishes a standard for "Type-Checked Compliance" that offers cryptographic-level certainty at microsecond-level latency. If you are interested in formal methods for AI safety or would like to contribute to this direction, please feel free to raise an issue or explore our code and live demo here: https://github.com/arkanemystic/lean-agent-protocol.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Before the Tool Call: Deterministic Pre-Action Authorization for Autonomous AI Agents (2026)
- Agentic AI as a Cybersecurity Attack Surface: Threats, Exploits, and Defenses in Runtime Supply Chains (2026)
- Protecting Context and Prompts: Deterministic Security for Non-Deterministic AI (2026)
- Autonomous Action Runtime Management (AARM): A System Specification for Securing AI-Driven Actions at Runtime (2026)
- SentinelAgent: Intent-Verified Delegation Chains for Securing Federal Multi-Agent AI Systems (2026)
- TraceGuard: Process-Guided Firewall against Reasoning Backdoors in Large Language Models (2026)
- LOGIGEN: Logic-Driven Generation of Verifiable Agentic Tasks (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
Get this paper in your agent:
hf papers read 2604.01483
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash