
LangGraph vs Lancelot: When You Need More Than a State Machine

LangGraph is the most popular framework for building stateful AI agents. It is excellent at what it does. But if your agent will touch production infrastructure, customer PII, or financial systems, you need to ask a different question: not "how does my agent work?" but "what can my agent not do?"

March 26, 2026  ·  Myles Hamilton  ·  5 min read
Credit Where It's Due

What LangGraph Does Well

LangGraph deserves its popularity. It brought directed graph semantics to AI agent orchestration at a time when most frameworks were still chaining prompts sequentially. The core abstraction is clean: define nodes, define edges, let state flow through the graph. For many use cases, this is exactly the right model.
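The node-and-edge model can be sketched in a few lines of plain Python. This is a conceptual sketch of the abstraction, not LangGraph's actual API; the node functions, the edge table, and the `run` loop are all illustrative.

```python
# Conceptual sketch of graph-based orchestration: nodes are functions,
# edges define the order state flows through them. Not LangGraph's API.

def draft(state):
    return {**state, "draft": f"draft for: {state['task']}"}

def review(state):
    return {**state, "approved": len(state["draft"]) > 0}

nodes = {"draft": draft, "review": review}
edges = {"draft": "review", "review": None}  # None terminates the graph

def run(start, state):
    node = start
    while node is not None:
        state = nodes[node](state)
        node = edges[node]
    return state

result = run("draft", {"task": "summarize Q3 report"})
```

Real frameworks add conditional edges, cycles, and checkpointing on top of this loop, but the core idea is the same: explicit transitions, state threaded through each node.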

Its strengths are real. Graph-based state machines give developers explicit control over agent workflow, making complex multi-step reasoning tractable and debuggable. Checkpointing and state persistence let you pause, resume, and replay agent execution, which is invaluable during development. The LangChain ecosystem provides a deep integration surface with vector stores, retrievers, and tool libraries. And the community around it is large, active, and helpful.

If you are building code assistants, research tools, internal developer utilities, or any system where the blast radius of a mistake is a bad commit or a wrong answer, LangGraph is a strong choice. It solves the orchestration problem well, and orchestration is often the hardest part of building useful agents.

This is not a post about why LangGraph is bad. It is a post about what happens when orchestration is not the hardest part of your problem.

The Missing Layer

Where the Gap Appears

LangGraph tells you how an agent works. It does not tell you what an agent cannot do. This distinction is academic until your agent has access to production databases, customer records, or financial APIs. Then it becomes the only question that matters.

Here is what you will not find in LangGraph, or in most orchestration frameworks:

  • No constitutional governance. There is no equivalent to a Soul document that defines hard behavioral boundaries enforced at the system level, immune to prompt injection.
  • No risk classification pipeline. Every action is treated equally. There is no mechanism to apply proportional controls based on the potential impact of an operation.
  • Binary trust model. An agent either has access or it does not. There is no progressive trust system where autonomy is earned through demonstrated competence and revoked instantly on failure.
  • No immutable audit trail. LangGraph has trace logging through LangSmith, and it is useful for debugging. But trace logs are not structured governance receipts. They do not record the governance chain: what was checked, what was approved, what the rollback path is.
  • No local PII scrubbing. Data flows through the graph without architectural protections against PII leakage to external model providers.
  • No compliance export. There is no one-click path from agent activity to SOC 2, ISO 27001, or GDPR audit evidence.
  • No kill switches with dependency resolution. You cannot shut down a single subsystem while understanding and managing the downstream impact on other running processes.
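To make the risk classification gap concrete, a pipeline that applies proportional controls might look like the following sketch. The tier names follow the T0 through T3 scheme used later in this post; the control mapping and the toy classifier are illustrative assumptions, not a real Lancelot API.

```python
# Illustrative mapping from risk tier to proportional controls.
# Tiers and rules are assumptions for the sake of the example.
CONTROLS = {
    "T0": {"approval": "auto",  "audit": True},                         # read-only
    "T1": {"approval": "auto",  "audit": True},                         # reversible writes
    "T2": {"approval": "human", "audit": True},                         # sensitive data
    "T3": {"approval": "human", "audit": True, "rollback_plan": True},  # prod changes
}

def classify(action: str) -> str:
    """Toy classifier: map an action name to a risk tier."""
    if action.startswith("read_"):
        return "T0"
    if action.startswith("write_"):
        return "T1"
    if "pii" in action or "billing" in action:
        return "T2"
    return "T3"

tier = classify("read_metrics")
```

The point is not the classifier, which a real system would derive from richer context, but that every action passes through a step that decides how much control it deserves.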

None of these are criticisms of LangGraph's design. They are outside its scope. LangGraph is an orchestration framework. These are governance problems.

A Different Architecture

The Governance-First Approach

Lancelot does not define how agents work. It defines what they cannot do. This is a fundamental architectural difference, not a feature comparison.

The Soul is a versioned, immutable constitutional document that establishes hard behavioral boundaries. These constraints are enforced at the system level across pre-execution, runtime, and post-execution stages. The model cannot reason its way around them because the model never sees them as suggestions. They are architectural walls, not policy guidelines.
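A minimal sketch of what "architectural walls" means in practice: the boundary check lives in ordinary code outside the model's context, gating both the pre-execution and post-execution stages, so no prompt can rewrite it. All names here are hypothetical, not Lancelot's actual API.

```python
# Hypothetical system-level boundary check. The policy is enforced in
# code the model never sees, so it cannot be reasoned or injected around.
FORBIDDEN_ACTIONS = {"delete_production_db", "export_customer_pii"}

class SoulViolation(Exception):
    pass

def soul_check(action: str) -> None:
    # Raises before (or after) the action, regardless of model output.
    if action in FORBIDDEN_ACTIONS:
        raise SoulViolation(f"blocked by constitutional boundary: {action}")

def execute(action: str, fn):
    soul_check(action)   # pre-execution stage
    result = fn()        # runtime
    soul_check(action)   # post-execution re-check
    return result
```

The contrast with a system prompt is the key design choice: a prompt asks the model to refuse, while a gate like this makes the refusal unconditional.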

Every action an agent takes produces a structured receipt. The receipt records the full governance chain: the action attempted, its risk classification (T0 through T3), the Soul check result, the verification outcome, and the rollback reference. If there is no receipt, the action did not happen. Both success and failure paths are recorded. This is what makes compliance export possible: the audit trail is not reconstructed from logs after the fact. It is produced as a first-class artifact of every operation.

Trust is not a configuration setting. It is earned. The Trust Ledger tracks agent performance across operations. Fifty consecutive successes at a given tier trigger a graduation proposal. A single failure triggers instant revocation. This creates a system where autonomy is proportional to demonstrated reliability, not to an engineer's confidence at deployment time.
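The graduation and revocation rules can be sketched directly. The class and field names are hypothetical; only the thresholds (fifty consecutive successes, one failure) come from the text.

```python
# Sketch of the earned-trust rule: a streak of successes proposes a
# graduation, while a single failure revokes autonomy immediately.
GRADUATION_STREAK = 50  # threshold taken from the description above

class TrustLedger:
    def __init__(self):
        self.streak = 0
        self.graduation_proposed = False
        self.revoked = False

    def record(self, success: bool):
        if success:
            self.streak += 1
            if self.streak >= GRADUATION_STREAK:
                self.graduation_proposed = True
        else:
            # One failure: reset the streak and revoke at once.
            self.streak = 0
            self.graduation_proposed = False
            self.revoked = True

ledger = TrustLedger()
for _ in range(50):
    ledger.record(True)
```

Note the asymmetry: trust accrues slowly and is lost instantly, which is the opposite of a static permission grant.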

All twenty subsystems are individually kill-switchable with dependency resolution. You can shut down PII scrubbing, or the approval system, or the compliance exporter, and the system will tell you exactly what downstream processes will be affected before you confirm.
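Dependency resolution here amounts to a transitive walk over a dependency map before the kill switch fires. The subsystem names and the map below are illustrative, not Lancelot's real dependency graph.

```python
# Sketch of dependency-aware shutdown: before confirming a kill switch,
# list every subsystem that transitively depends on the one being killed.
# Subsystem names are illustrative.
DEPENDENTS = {
    "pii_scrubbing": ["compliance_export"],
    "approval_system": ["trust_ledger"],
    "compliance_export": [],
    "trust_ledger": [],
}

def impact_of_killing(subsystem: str) -> set[str]:
    affected, stack = set(), [subsystem]
    while stack:
        for dep in DEPENDENTS.get(stack.pop(), []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected
```

Surfacing `impact_of_killing("pii_scrubbing")` before confirmation is what turns a blind shutdown into an informed one.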

Decision Framework

When to Use Which

These frameworks solve different problems for different risk profiles. The choice depends on what your agent touches, not on which framework has more features.

Use LangGraph When

  • Building code assistants and developer tools
  • Prototyping research workflows
  • Internal tools where mistakes are recoverable
  • The blast radius of a failure is a bad output, not a compliance violation
  • You need deep LangChain ecosystem integration
  • Your team's primary challenge is orchestration complexity

Use Lancelot When

  • Agents touch customer PII or sensitive data
  • Agents interact with financial systems or billing
  • Agents modify production infrastructure
  • You need compliance evidence for SOC 2, ISO 27001, or GDPR
  • A failure could trigger regulatory, legal, or reputational consequences
  • Your team's primary challenge is governance, not orchestration

The honest answer for many teams is that they need both: an orchestration layer for agent workflow and a governance layer for production safety. Lancelot is model-agnostic and orchestration-agnostic by design. It governs what the agent can do regardless of how the agent's internal logic is structured.

See the full comparison

Lancelot's competitive matrix evaluates seven AI agent frameworks across governance, security, compliance, and architectural criteria.