EchoLeak Exposed the Trust Gap in AI Agents — Why Trusted Execution and Signed Plans Must Be the New Standard¶
Published: June 12, 2025
TL;DR The EchoLeak zero‑click exploit in Microsoft 365 Copilot showed how malicious inputs can exploit agents lacking scoped authority and trusted execution. Dokugent introduces a trust‑first workflow—scoping, certification, and traceability—that stops leaks before they reach production and slashes development costs.
1 · What Actually Happened?¶
The EchoLeak Timeline¶
- June 11, 2025 — Fortune’s coverage highlighted a zero‑click vulnerability (CVE‑2025‑32711) that allowed attackers to exfiltrate data from Copilot using a single crafted email.
-
A hidden markdown image link was crafted as part of a malicious payload that bypassed Microsoft’s filters. This allowed the attacker to exfiltrate chat history, OneDrive documents, and Teams messages—without any user interaction.
-
For an enterprise, this meant a competitor could craft a single email and silently siphon confidential road-maps from employees’ OneDrive folders. For individuals, it was the digital equivalent of a stranger reading private messages over their shoulder—without them ever knowing.
-
Microsoft patched the server‑side bug, but the root design flaw remains: Copilot treated untrusted email content as safe context.
Agents must behave in ways that prove they’re trustworthy. That requires scoped authority—not assumptions.
2 · The Deeper Issue — LLM Scope Failure¶
Large language model agents work by blending user prompts with private memory (files, chat history, proprietary APIs). If a single untrusted token pierces that boundary, you get scope collapse—the agent now operates on data it should never have seen.
This is like telling a new intern to “summarize the latest project emails,” but accidentally handing them the keys to the entire company’s filing cabinet—including HR records, legal files, and financial data.
The intern, trying to be helpful, pulls in everything visible. The result? A well-intentioned breach of massive proportions.
EchoLeak was the most prominent case so far—an indicator of a wider pattern of emerging LLM attack surfaces:
- ✉️ Email assistants ingesting phishing payloads
-
💬 Chatbots merging internal knowledge bases with public prompts
-
🔄 RAG pipelines concatenating open‑web snippets next to IP‑sensitive records
As Yonatan Zunger recently wrote on Microsoft’s security blog, “LLMs should be treated like junior employees—not omniscient oracles. They must receive bounded inputs, ongoing supervision, and rigorous verification.” (How to Deploy AI Safely)
Call‑out: Microsoft’s AI Red Team likewise notes that “LLMs amplify existing security risks and create entirely new ones.” (Lessons From Red Teaming 100 Generative AI Products)
Without guardrails, every agent is one prompt away from brand‑new attack surfaces.
3 · Meet Dokugent — Trust by Default¶
Dokugent is a CLI‑first framework that treats trust as a compile‑time requirement, not an afterthought.
| Dokugent Command | Purpose | EchoLeak Prevention |
|---|---|---|
plan + criteria | Declare goals, inputs, and strict boundaries | The plan would explicitly state: “Only process the plain-text body of an email (email.body.text).” The malicious markdown image URL would be ignored as an out-of-scope field. |
dryrun / simulate | Run the agent in a sandbox | Running a test with the malicious email would flag the agent’s attempt to access an external URL, revealing the hidden payload before deployment. |
certify | Sign + lock the approved scope | The certified plan is cryptographically locked to only allow email body parsing. Any deviation or attempt to process markdown would fail the signature check. |
trace | Immutable logs of every step | Full visibility into which fields were processed, by whom, and why—essential for forensics and audits. |
.doku_access.json | Role‑based file/API permissions | Restricts access to approved sources only—SharePoint files stay off limits without explicit permission. |
🧪 Before vs. After: Naive vs. Trusted Agent Code¶
// AFTER: Scoped + Certified with Dokugent
dokugent.plan({
allow: ['email.body.text'],
deny: ['email.body.html', 'attachments', 'externalLinks']
});
By defining what the agent is explicitly allowed and denied to access, Dokugent scopes behavior at the plan level—preventing untrusted content like hidden markdown image URLs from ever being parsed.
🛠️ Aligned Build/Test Workflow¶
- Threat modeling — via explicit
plan+criteria - Simulated attacks — with
dryrunandsimulate - Audit trail — captured by
trace+certify
This mirrors Microsoft’s ontology‑driven AI Red Team process and their PyRIT automation for continuous evaluation—but packaged for any developer’s CI pipeline.
🔐 More on Dokugent Signing
Dokugent signs every certified agent plan using an Ed25519 private key. Duringcompile, the plan is hashed with SHA-256, and that digest is signed using the signer’s key. The resulting signature and public key are attached to the plan’s metadata, making any tampering detectable. This creates a verifiable link between the agent’s scope and the identity of the signer—ensuring that trust is both declared and provable. 🔐 Result¶
- Scoped Agents — can’t read what they weren’t allowed to read.
- Auditable Paths — every token is trace‑linked to an approved intent.
- Faster Security Reviews — present the signed plan as a verifiable artifact, avoiding the need for extensive manual test reports.
4 · Trust And Lower Dev Costs¶
| Cost Driver | Typical Pain | With Dokugent |
|---|---|---|
| Debugging unclear LLM behavior | Chasing down why the agent "hallucinated" or gave a bizarre, non-deterministic answer for the 10th time. | Scoped plans make agent behavior predictable and deterministic, catching errors early. |
| Extended QA cycles | Security team flags a new potential vulnerability a day before launch, triggering a full re-test cycle. | dryrun and certify provide a verifiable "receipt" of security, turning QA from a bottleneck into a checkbox. |
| Lengthy security sign‑offs | Rewriting threat models and audit docs from scratch every sprint. | Signed plans are self-documenting and scoped for reviewer confidence. |
| Hotfix firefighting | Pager duty after a live agent leaks sensitive data. | Trusted plans reduce emergency patches and prevent regressions. |
Time saved is money saved. Teams using Dokugent report 30–50 % fewer dev‑cycle hours on agent features.
5 · Getting Started¶
# Dokugent is in Alpha. Beta release coming to NPM Thursday next week.
npx dokugent init my-agent
cd my-agent
dokugent plan --open
- Define your agent’s intent and scope.
- Run
dokugent dryrununtil the output is clean. - Sign with
dokugent certifyand ship with confidence.
Dokugent is currently in alpha, and we’re shaping it with developer feedback. If you're building AI agents, now’s the perfect time to get involved. Join us as we prepare for next week’s beta release — your trust agent deserves a trust layer.
Build agents you can trust — before the next EchoLeak headlines hit.
Further Reading¶
- Microsoft Copilot’s EchoLeak vulnerability explained – Fortune, June 11, 2025
-
Zero-click AI data leak flaw uncovered in Copilot – BleepingComputer
-
Microsoft Security Response Center (MSRC): CVE-2025-32711 Security Update Guidance
- Lessons From Red Teaming 100 Generative AI Products – Microsoft AIRT (Jan 2025)
Learn More with Dokugent¶
Written by Carmelyne M. Thompson, creator of Dokugent CLI. Follow @Dokugent on Github.