What Made McKinsey's AI Platform Easy to Hack? And How to Fix It.

March 11, 2026

Enterprise AI assistants are quickly becoming the front door to internal systems.

Organizations are connecting LLM-based platforms to:

  • internal APIs

  • company knowledge bases

  • document repositories

  • SaaS systems

  • internal automation tools

But this new architecture introduces a major security challenge:

AI systems act as intermediaries between users and internal infrastructure.

A recent analysis from codewall.ai titled “How We Hacked McKinsey’s AI Platform” demonstrated how this design can be exploited when security controls are incomplete.

And the problem wasn’t simply prompt injection: it was a failure in access control architecture.

This post explains:

  1. The insecure architecture pattern that made the attack possible

  2. The architectural controls required to secure enterprise AI systems

  3. How Pomerium acts as the security gateway for AI platforms

The Insecure Architecture Pattern Many Internal AI Platforms Share

Most internal AI assistants are built using a pattern like this:

  • A web UI for users

  • An LLM orchestration layer

  • Tool integrations

  • Internal API access

The AI system becomes a central orchestrator of internal actions.

Typical Enterprise AI Platform Architecture

```mermaid
graph TD
    A[Users] --> B[AI Platform / LLM Orchestrator]
    B --> C[Internal APIs]
    B --> D[Knowledge Bases]
    B --> E[Databases]
    B --> F[SaaS APIs]
    B --> G[Automation Tools]
```

The core problem:

The AI platform becomes a trusted intermediary with broad access. And you can't micro-segment and isolate your way to safety here with layers of VPN access: the hosted LLM endpoints these platforms depend on will never sit behind your VPN, so some part of the platform must remain externally exposed in order to communicate with them.

If the system is compromised or manipulated, attackers can:

  • call internal APIs

  • retrieve sensitive data

  • perform administrative actions

  • pivot across internal services

This creates what security researchers call a confused deputy attack: the attacker convinces the AI system to perform actions on their behalf.

Why AI Systems Break Traditional Security Models

Traditional applications usually enforce access control at the application level. AI platforms are different.

AI agents dynamically:

  • call tools

  • chain API requests

  • interact with multiple services

  • perform actions on behalf of users

This creates several architectural problems:

1. Fragmented Authorization

Each internal service implements its own access controls.

2. Over-Privileged AI Systems

The AI platform often has access to more resources than any user should have.

3. Implicit Trust Between Services

Internal APIs frequently trust requests coming from the AI system.

4. Limited Visibility

Security teams often cannot see:

  • which tools were called

  • which user triggered the action

  • what data was accessed

Without centralized controls, security becomes inconsistent and difficult to audit.

The Correct Architecture: An Identity-Aware Access Gateway

The key principle for securing AI platforms is to never trust the AI system itself. Instead, treat the AI platform like an untrusted client that must pass through a centralized access layer.

This is where Pomerium fits.

Pomerium acts as a Policy Enforcement Point (PEP) that sits between the AI system and internal services.

Secure AI Platform Architecture with Pomerium

```mermaid
graph TD
    A[Users] --> B[AI Platform / LLM Orchestrator]
    B --> C[Pomerium Identity-Aware Proxy]
    C --> D[Internal APIs]
    C --> E[Databases]
    C --> F[SaaS APIs]
    C --> G[Knowledge Systems]
    C --> H[Internal Tools]
```

Instead of calling internal systems directly, the AI platform must send every request through Pomerium.

Pomerium enforces authentication, authorization, and auditing before the request reaches the target service.
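
As a rough sketch, a Pomerium route implementing this pattern might look like the following. The hostnames and upstream addresses are placeholders, and the policy is a minimal example; consult the Pomerium route configuration reference for the full option set:

```yaml
routes:
  - from: https://internal-api.corp.example.com   # hypothetical public route name
    to: http://internal-api.internal:8080          # hypothetical upstream service
    policy:
      allow:
        and:
          - domain:
              is: company.com                      # only authenticated company users
    pass_identity_headers: true                    # forward the signed identity assertion downstream
```

The AI platform calls the `from` address; Pomerium authenticates, authorizes, and only then proxies to the `to` address.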

Step-by-Step Request Flow

Here’s what happens when a user asks the AI assistant to perform an action.

Step 1: User Authenticates

The user accesses the AI platform.

Authentication happens through the enterprise identity provider (IdP).

Pomerium integrates directly with enterprise IdPs and establishes a secure session.
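
A minimal sketch of wiring Pomerium to an IdP follows. All values are placeholders, and the exact provider options vary by IdP:

```yaml
# Hypothetical identity provider configuration (values are placeholders)
authenticate_service_url: https://authenticate.corp.example.com
idp_provider: okta                         # e.g. okta, azure, google, or generic oidc
idp_provider_url: https://example.okta.com
idp_client_id: YOUR_CLIENT_ID
idp_client_secret: YOUR_CLIENT_SECRET
```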

Step 2: The AI Platform Requests an Internal Resource

Example request:

“Summarize the financial reports in the internal data warehouse.”

The AI system attempts to query internal data services.

But instead of calling them directly, it must go through Pomerium.

Step 3: Pomerium Verifies Identity

Pomerium attaches a cryptographically signed identity assertion to the request.

This identity includes attributes such as:

  • user ID

  • email

  • group membership

  • session information

This allows downstream services to verify the identity without implementing their own authentication layer. 
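
For example, a route can be configured to pass the signed assertion and selected claims to the upstream service. The route below is illustrative; the header mapping follows Pomerium's `jwt_claims_headers` convention:

```yaml
routes:
  - from: https://warehouse.corp.example.com   # hypothetical route
    to: http://warehouse.internal:9000          # hypothetical upstream
    pass_identity_headers: true                 # adds the signed X-Pomerium-Jwt-Assertion header
    jwt_claims_headers:
      X-Pomerium-Claim-Email: email             # header name -> JWT claim
      X-Pomerium-Claim-Groups: groups
```

The upstream service can verify the assertion against Pomerium's signing key instead of running its own authentication stack.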

Step 4: Pomerium Evaluates Authorization Policies

Before forwarding the request, Pomerium evaluates policy rules.

Policies can consider:

  • user identity

  • group membership

  • API path

  • HTTP method

  • request headers

  • client IP

  • time of day

For example, here's a policy that grants access to database query tools only to data analysts:

```yaml
policy:
  allow:
    and:
      - domain:
          is: company.com
      - groups:
          has: 'data-analysts'
  deny:
    or:
      - mcp_tool:
          in: ['update_data', 'drop_table', 'delete_records']
      - mcp_tool:
          starts_with: 'admin_'
```

These policies ensure the AI platform cannot perform actions the user is not authorized to perform.

Step 5: The Request Is Forwarded

Only after policy validation does Pomerium forward the request to the internal service.

Each service receives:

  • a verified identity

  • contextual request information

  • a trusted authorization decision

Step 6: Full Audit Logging

Every request generates structured logs including:

  • user identity

  • resource accessed

  • policy decision

  • source IP

  • timestamp

This gives security teams complete visibility into AI-driven activity.
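
An audit entry might look roughly like this. The field names are illustrative, not Pomerium's exact log schema:

```yaml
# Illustrative audit record (example fields, not an exact schema)
user: jane.doe@company.com
resource: https://warehouse.corp.example.com/reports/q3
decision: allow
matched_policy: data-analysts-read-only
source_ip: 10.20.30.40
timestamp: 2026-03-11T14:02:17Z
```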

Why This Architecture Prevents the McKinsey Scenario

The exploit described in the analysis of McKinsey's AI platform relied on the AI system having implicit trust and broad internal access.

With an identity-aware access layer:

The AI system cannot exceed user permissions → Policies enforce least privilege.

Internal APIs no longer trust the AI platform → They trust Pomerium’s verified identity assertions instead.

Every request is validated → Authorization occurs on every request, not just at login.

All activity is auditable → Security teams can trace every tool call back to a user identity.

AI Platforms Need a Security Gateway

Enterprise AI systems will increasingly rely on:

  • AI agents

  • tool orchestration

  • internal API integrations

  • automated workflows

Without an architectural security layer, these systems become high-value attack surfaces.

Industry analysts already expect AI ecosystems to adopt gateway architectures to control agent access to internal services. 

Pomerium provides that gateway.

By inserting identity-aware access controls between AI platforms and internal infrastructure, organizations can safely enable powerful AI capabilities without expanding their attack surface.

In short, AI systems should never be trusted with direct infrastructure access; they should operate through identity-aware access gateways. That’s exactly what Pomerium provides.

Read more about Pomerium's Agentic Gateway capabilities in the docs.
