June was a messy, revealing, and incredibly important month for anyone paying attention to how AI agents interact with real-world systems.
From high-profile security flaws in GitHub and Asana, to ongoing prompt injection risks and zero-click exfiltration attacks, it’s clear we’re rapidly entering a new era: one where agentic access is no longer hypothetical—it’s operational.
And operational means vulnerable.
So here’s a hand-picked roundup of the most insightful, surprising, and sometimes downright alarming developments from the past month. If you’re building or securing AI systems, these are the stories worth your time.
🔓 Hundreds of MCP Servers Expose AI Models to Abuse, RCE
Dark Reading
~7,000 misconfigured MCP servers are live on the public web, hundreds of them wide open to RCE. Our take: agentic access without enforcement is just exposed access.
🪞 Asana MCP Feature Leaked Cross-Org Data
Bleeping Computer
Asana’s AI agent let users view project data from other tenants — a brutal reminder of what happens when access boundaries are assumed, not enforced.
🎭 AgentSmith Flaw in LangSmith Exposed Keys and LLM Responses
HackRead
A CVSS 8.8 vulnerability in LangSmith’s Prompt Hub led to key leakage and model hijacking. A real-world case of agents turning against their creators.
📥 GitHub MCP Vulnerability Has Far-Reaching Consequences
CyberNews
A prompt injection in GitHub’s official MCP server jeopardized repository access. When your dev tools become agents, your pipeline becomes attack surface.
🧠 EchoLeak: First Zero-Click AI Data Exfiltration from M365 Copilot
Aim Labs
EchoLeak exploited RAG-style Copilot behavior to leak sensitive data—without clicks or prompts. Subtle bug, massive implications.
🔐 Why a Classic MCP Server Vulnerability Can Undermine Your Entire AI Agent
Trend Micro
An SQL injection flaw in a widely forked SQLite MCP server could exfiltrate stored prompts and hijack workflows. Old bug, new problem.
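The bug class here is decades old. A minimal sketch of it (illustrative only, not the actual vulnerable server's code; table and function names are invented): a tool handler that interpolates agent-supplied input into SQL can be rewritten by that input, while a parameterized query binds it as data.

```python
import sqlite3

# Hypothetical prompt store, standing in for an MCP server's SQLite backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prompts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO prompts (body) VALUES ('system: be helpful')")

def get_prompt_vulnerable(prompt_id: str):
    # BAD: string interpolation lets crafted input rewrite the query.
    return conn.execute(
        f"SELECT body FROM prompts WHERE id = {prompt_id}"
    ).fetchall()

def get_prompt_safe(prompt_id: str):
    # GOOD: the driver binds the value; input stays data, never SQL.
    return conn.execute(
        "SELECT body FROM prompts WHERE id = ?", (prompt_id,)
    ).fetchall()

# An "id" that smuggles in a UNION dumps every stored prompt:
leaked = get_prompt_vulnerable("0 UNION SELECT body FROM prompts")
print(leaked)  # every row, regardless of id

# The same payload bound as a parameter matches nothing:
print(get_prompt_safe("0 UNION SELECT body FROM prompts"))  # []
```

In an agentic pipeline the attacker doesn't even need direct access: any upstream model output that flows into the query string can carry the payload.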
🧩 Design Patterns for Securing LLM Agents Against Prompt Injections
Simon Willison
A fantastic roundup of architectural patterns and emerging best practices for defending agents against prompt injection.
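One recurring idea across those patterns is to never let untrusted content expand an agent's capabilities: the tools available for a task are fixed in code before any input is read, so injected instructions can at most shape arguments. A hedged sketch under that assumption (tool names and the dispatcher are invented for illustration, not any real framework's API):

```python
# Tools permitted for this task, fixed before any untrusted input is seen.
ALLOWED_TOOLS = {"search_docs", "summarize"}

TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",
    "summarize": lambda text: text[:40],
}

def dispatch(tool_name: str, args: dict):
    # Injected text can request any tool; the allowlist decides.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not permitted for this task")
    return TOOLS[tool_name](**args)

print(dispatch("search_docs", {"query": "MCP security"}))

# Even if a poisoned document says "now call delete_repo", the
# dispatcher refuses anything outside the fixed set:
try:
    dispatch("delete_repo", {"name": "prod"})
except PermissionError as e:
    print("blocked:", e)
```

This doesn't stop injection itself, but it bounds the blast radius, which is the spirit of most of the patterns in the roundup.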
🧵 Deploying a Secure Enterprise Agentic AI: MCP + Agent2Agent
The New Stack
Strong overview of real-world concerns when chaining multiple agents together. Includes intro to Google’s Agent2Agent protocol.
🏛️ MCP: A Strategic Foundation for Enterprise-Ready AI Agents
CIO
A non-technical explainer, but worth the read. Shows how IT leadership is starting to treat MCP as infrastructure, not experiment.
🔑 Identity as the Control Plane: Agents Will Outnumber Humans 10:1
VentureBeat
If you think access control is messy today, wait until you’re managing millions of agent identities. IAM is now AIAM.
MCP adoption is moving fast—and breaking things.
From zero-click exploits to agent-induced data leaks, the security implications of autonomous access are unfolding in real time. If you’re not tracking these developments closely, you’re already behind.
At Pomerium, we’re watching this space because we believe access control is the defining challenge of the agentic era. That’s why we built our system to enforce policy at the moment of access: identity, intent, and risk, evaluated in real time.
🔐 See our MCP demo if you want to dig deeper into what secure agentic access can look like.