
How Shadow AI Impacts SOC 2 and HIPAA, and What to Do About It


Shadow AI refers to the use of AI tools (e.g., ChatGPT, Claude, Gemini, or Perplexity) without approval or oversight from security or compliance teams.

These tools often enter organizations through bottom-up adoption. Employees turn to them to improve productivity, automate tasks, or write code faster. But when used outside sanctioned environments, Shadow AI introduces invisible risk.

According to a 2024 Cisco report, 74% of organizations have already experienced data leakage through unsanctioned AI use, yet most lack visibility into when or how it happens.

Shadow AI introduces policy gaps that can cause regulated organizations to fall out of compliance. SOC 2 and HIPAA both rely on structured control systems. When AI tools are used outside of approved workflows, those systems stop working.

This guide walks through the specific risks and shows how per-route policy enforcement with Pomerium helps close those gaps while supporting real-world AI adoption.

Key Compliance Controls at Risk

SOC 2 is built on five Trust Services Criteria. Shadow AI most directly affects these three:

  • Security (CC6): Organizations must restrict access and prevent unauthorized use.

  • Availability (A1): Systems must remain operational and resilient.

  • Confidentiality (C1): Sensitive data must be protected throughout its lifecycle.

The moment employees use tools like ChatGPT or Claude outside of sanctioned systems, those controls are bypassed. This includes both casual data pasting and structured workflow automation that operates without oversight.

HIPAA focuses on administrative and technical safeguards for PHI. One core requirement (§164.308) is a documented risk assessment process. If organizations cannot detect shadow AI activity, they cannot assess the risk or apply safeguards.

Common Failure Points from Shadow AI

  1. Access Controls Are Bypassed
    Shadow AI tools operate under consumer terms of service. Users authenticate with personal accounts. There is no organizational identity, group, or role mapping, which breaks access policies at the root.

  2. Visibility Into Data Flows Is Lacking
    SOC 2 and HIPAA both require visibility into how data moves through systems. AI tools used outside official channels do not generate logs or audit trails.

  3. Incident Response Is Blocked
    Without logs or alerting, security teams cannot detect when sensitive data is sent to third-party LLMs. Investigations often happen only after an issue is discovered elsewhere.

  4. No Vendor Review
    Compliance frameworks expect third-party risk assessments, contracts, and monitoring. Employees using shadow AI typically skip all of these steps.

Example Policy Patterns Using Pomerium

Pomerium applies policy at the routing layer based on user identity, device trust, time, and request context. These examples show how to allow legitimate AI usage while protecting sensitive data.

Limit ChatGPT access to approved users and block uploads

```yaml
# Route: https://chat.openai.com  ➜  egress via Pomerium
policy:
  allow:
    and:
      - authenticated_user: true
      - groups:
          has: "ai-approved"
  deny:
    or:
      - http_method:
          is: POST             # denies all POST requests, which blocks uploads and form submissions
      - http_path:
          contains: "/upload"
```
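To show where such a policy lives, the sketch below attaches it to a full Pomerium route. The hostnames and the internal-hostname egress pattern are illustrative assumptions for this example, not a drop-in configuration:

```yaml
# Illustrative sketch: the policy above attached to a Pomerium route.
# The from/to hostnames are assumptions; adjust for your deployment.
routes:
  - from: https://chatgpt.corp.example.com   # internal hostname users access
    to: https://chat.openai.com              # upstream AI service
    policy:
      allow:
        and:
          - authenticated_user: true
          - groups:
              has: "ai-approved"
      deny:
        or:
          - http_method:
              is: POST
          - http_path:
              contains: "/upload"
```

Because the route sits in front of the AI service, every request is evaluated against organizational identity and group membership before it ever reaches the provider.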

HIPAA-Safe Use of Claude for Clinical Staff

```yaml
policy:
  allow:
    and:
      - groups:
          has: "clinical-staff"
      - device:
          approved: true       # device must be enrolled and admin-approved
      - time_of_day:
          after: "06:00"
          before: "20:00"
  deny:
    or:
      - http_path:
          contains: "/upload"
      - http_path:
          contains: "/attachment"
```

How Audit Logs Help Meet Framework Requirements

Pomerium generates structured, queryable logs that show:

  • Who accessed which AI service

  • When the access occurred

  • What data was transferred (including size and headers)

  • Which policy was applied
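For illustration, a structured log entry capturing those fields might be shaped like the following. The field names here are assumptions for the sketch, not Pomerium's exact log schema:

```json
{
  "timestamp": "2025-06-12T14:03:27Z",
  "user": "jdoe@example.com",
  "service": "https://chat.openai.com",
  "method": "POST",
  "path": "/backend-api/conversation",
  "request_size_bytes": 2048,
  "policy": "ai-approved-egress",
  "decision": "deny"
}
```

Entries like this make it possible to answer auditor questions such as "who sent data to which AI service, and was it permitted?" directly from log queries.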

These logs support:

  • SOC 2 CC6.1, CC6.7, CC7.2

  • HIPAA §164.312(a)(1) and §164.312(b)

Pomerium also integrates with SIEM tools to support continuous monitoring, alerting, and reporting.

Pomerium provides the control needed to support safe and compliant use of AI tools inside regulated organizations. Per-route policy enforcement lets teams adopt new tools with confidence while maintaining full visibility and policy coverage.
