When news broke that Asana's MCP server had exposed sensitive data across organizations, it wasn’t just a one-off flaw. It was a warning shot for anyone integrating AI agents into their systems without guardrails.
This is the future security teams have to prepare for: not just more AI, but more autonomous AI interacting with internal systems. These agents will need access to get work done. But without strict, context-aware enforcement, that access can easily be abused—even unintentionally.
In May 2025, Asana launched an MCP server to help customers automate tasks via third-party apps. But a serious bug, identified just a month later, allowed those integrations to overreach. Researchers found that the connector could surface sensitive data from other organizations—a full breakdown in tenant isolation.
Over 1,000 customers were potentially affected.
The vulnerability made it possible for users in one org to see project names, task descriptions, and metadata from entirely separate tenants.
If a user connected an app to help with a project, that connector could access more than it should, including data from outside their own organization.
This wasn’t an edge case. It was a clear example of how agentic access, without guardrails, can pierce fundamental data boundaries.
This is the kind of risk MCP exposes if left unguarded. The protocol gives agents powerful context and access—but without strong guardrails, it can open the door to serious security failures. MCP by itself isn’t enough. You still need an enforcement layer.
Because here's the truth: Connecting tools to MCP without guardrails like continuous authorization, scoped policies, and real-time enforcement is always risky. Without a policy enforcement point like Pomerium in place, you’re trusting AI agents with more access than they should ever be given.
Traditional access controls were designed around users and static roles. AI agents don’t fit that mold.
They:
Operate asynchronously
Chain requests together
Act on behalf of users
Often call internal tools and third-party APIs
You can’t just hand them a long-lived token and hope for the best.
Instead, you need real-time authorization that evaluates context:
Is this request part of a known workflow?
Is this tool allowed for this user?
Is the request occurring at a reasonable time or frequency?
Without that, you get incidents like Asana’s.
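Those three checks can be sketched as a single default-deny function. This is a hypothetical, simplified illustration of context-aware authorization, not a real Pomerium API; the names (`ALLOWED_TOOLS`, `KNOWN_WORKFLOWS`, the request-rate threshold) are assumptions made up for the example.

```python
# Hypothetical context-aware authorization check (illustrative only).
# Every check must pass; anything unrecognized is denied by default.

ALLOWED_TOOLS = {"alice@example.com": {"search", "fetch"}}
KNOWN_WORKFLOWS = {"project-summary"}
MAX_REQUESTS_PER_MINUTE = 30

def authorize(user: str, tool: str, workflow: str, recent_request_count: int) -> bool:
    """Return True only if every contextual check passes (default deny)."""
    if workflow not in KNOWN_WORKFLOWS:
        return False  # not part of a known workflow
    if tool not in ALLOWED_TOOLS.get(user, set()):
        return False  # tool not allowed for this user
    if recent_request_count > MAX_REQUESTS_PER_MINUTE:
        return False  # anomalous frequency
    return True

print(authorize("alice@example.com", "search", "project-summary", 3))   # True
print(authorize("alice@example.com", "delete", "project-summary", 3))   # False
```

The point of the sketch is the shape, not the specific checks: each request is evaluated against live context, and the absence of a matching rule means deny.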
Pomerium provides a secure-by-default gateway for MCP and agentic architectures. Here's how it prevents what went wrong with Asana:
Pomerium can sit in front of any HTTP MCP server, turning it into a secure, compliant endpoint. Only authenticated, authorized requests get through. Access is scoped per policy and denied by default.
Want to let agents use search and fetch, but block delete or upload? Easy:
routes:
  - from: https://github.localhost.pomerium.io
    to: http://localhost:3020
    policy:
      allow:
        and:
          - domain:
              is: example.com
          - mcp_tool:
              in: ["search", "fetch"]
    mcp: {}
This lets you explicitly allow which tools or capabilities an agent can invoke.
Agents never handle long-lived credentials. Pomerium validates external identity and injects short-lived identity assertions downstream (X-Pomerium-Assertion). Internal tools never see the raw OAuth token.
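Downstream services consume that assertion instead of a raw credential. The sketch below only decodes a JWT's claims for illustration; in production you would verify the token's signature against Pomerium's signing key with a real JWT library. The forged sample token is an assumption built for the example, not Pomerium output.

```python
import base64
import json

def assertion_claims(jwt_token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    Illustration only: production code must verify the signature
    before trusting any claim.
    """
    payload_b64 = jwt_token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A locally forged, unsigned token, purely for demonstration.
header = base64.urlsafe_b64encode(b'{"alg":"ES256"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"email": "alice@example.com", "aud": "mcp.internal"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{payload}.sig"

print(assertion_claims(token)["email"])  # alice@example.com
```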
Every decision is logged with identity, action, time, and reason. You get a full paper trail of what happened and why.
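A denied request might produce a log entry along these lines. The field names and values here are illustrative, not Pomerium’s exact log schema:

```json
{
  "time": "2025-06-17T14:03:22Z",
  "user": "alice@example.com",
  "route": "https://github.localhost.pomerium.io",
  "mcp_tool": "delete",
  "decision": "deny",
  "reason": "mcp_tool not in allowed set [search, fetch]"
}
```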
Asana won’t be the last company to learn this lesson the hard way. As agentic architectures become the norm, security teams must:
Shift enforcement to the application layer. Identity alone isn’t enough. You need context.
Wrap internal services with Pomerium. Treat every MCP agent request like a potential threat.
Use policy to constrain tool usage. Grant access just enough to be useful, never more.
With Pomerium, you don’t need to redesign your infrastructure. You just enforce better boundaries.
Don’t wait until your agent integration makes headlines.
Pomerium gives you identity-aware guardrails for every AI action.
Get started with our agentic access demo →
What was the Asana AI connector vulnerability?
A bug in an MCP server exposed cross-tenant data when users connected third-party AI tools, allowing visibility into other organizations’ project metadata.
How many organizations were affected?
Approximately 1,000 customers were potentially exposed between June 5 and June 17, 2025.
What is MCP and why does it matter?
The Model Context Protocol (MCP) provides AI agents with access to relevant internal systems. Without proper guardrails, it can become a serious security liability.
How can I secure MCP servers?
Use a solution like Pomerium to enforce real-time, identity-aware access control and policy-based enforcement across all agent requests.