Shadow AI refers to the use of AI tools (e.g., ChatGPT, Claude, Gemini, or Perplexity) without approval or oversight from security or compliance teams.
These tools often enter organizations through bottom-up adoption. Employees turn to them to improve productivity, automate tasks, or write code faster. But when used outside sanctioned environments, Shadow AI introduces invisible risk.
According to a 2024 Cisco report, 74% of organizations have already experienced data leakage through unsanctioned AI use, yet most lack visibility into when or how it happens.
Shadow AI introduces policy gaps that can cause regulated organizations to fall out of compliance. SOC 2 and HIPAA both rely on structured control systems. When AI tools are used outside approved workflows, those controls no longer apply.
This guide walks through the specific risks and shows how per-route policy enforcement with Pomerium helps close those gaps while supporting real-world AI adoption.
SOC 2 is built on five Trust Services Criteria. Shadow AI most directly affects three of them:
Security (CC6): Organizations must restrict access and prevent unauthorized use.
Availability (A1): Systems must remain operational and resilient.
Confidentiality (C1): Sensitive data must be protected throughout its lifecycle.
The moment employees use tools like ChatGPT or Claude outside of sanctioned systems, those controls are bypassed. This includes both casual data pasting and structured workflow automation that operates without oversight.
HIPAA focuses on administrative and technical safeguards for PHI. One core requirement (§164.308) is a documented risk assessment process. If organizations cannot detect Shadow AI activity, they cannot assess its risk or apply safeguards.
Access Controls Are Bypassed
Shadow AI tools operate under consumer terms of service. Users authenticate with personal accounts. There is no organizational identity, group, or role mapping, which breaks access policies at the root.
Visibility Into Data Flows Is Lacking
SOC 2 and HIPAA both require visibility into how data moves through systems. AI tools used outside official channels do not generate logs or audit trails.
Incident Response Is Blocked
Without logs or alerting, security teams cannot detect when sensitive data is sent to third-party LLMs. Investigations often happen only after an issue is discovered elsewhere.
No Vendor Review
Compliance frameworks expect third-party risk assessments, contracts, and monitoring. Employees using Shadow AI typically skip all of these steps.
Pomerium applies policy at the routing layer based on user identity, device trust, time, and request context. These examples show how to allow legitimate AI usage while protecting sensitive data.
Limit access to ChatGPT for approved users and block uploads
# Route: https://chat.openai.com ➜ egress via Pomerium
policy:
  allow:
    and:
      - authenticated_user: true
      - groups: ai-approved
  deny:
    or:
      - http_method: POST          # Blocks uploads / multi-part forms
      - http_path:
          contains: "/upload"
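For context, here is a minimal sketch of how a policy fragment like this might be attached to a route in a Pomerium configuration. The hostnames (chat.corp.example.com as the internal entry point) are placeholders for illustration, not values from the example above:

routes:
  - from: https://chat.corp.example.com   # internal hostname users visit (placeholder)
    to: https://chat.openai.com           # upstream AI service
    policy:
      # allow/deny rules from the example above go here
      allow:
        and:
          - authenticated_user: true
          - groups: ai-approved

Users reach ChatGPT only through the internal route, so every request is authenticated and evaluated against the policy before it leaves the network.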
HIPAA-Safe Use of Claude for Clinical Staff
policy:
  allow:
    and:
      - groups: clinical-staff
      - device:
          is: managed
      - time_of_day:
          after: "06:00"
          before: "20:00"
  deny:
    or:
      - http_path:
          contains: "/upload"
      - http_path:
          contains: "/attachment"
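The deny rules here are intended to override the allow rules: even an approved clinician on a managed device during working hours is blocked from upload and attachment paths, which is where PHI is most likely to leave the environment.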
Pomerium generates structured, queryable logs that show:
Who accessed which AI service
When the access occurred
What data was transferred (including size and headers)
Which policy was applied
These logs support:
SOC 2 CC6.1, CC6.7, CC7.2
HIPAA §164.312(a)(1) and §164.312(b)
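As an illustration only (field names and values below are representative, not an exact Pomerium log schema), a structured entry for a proxied AI request might look like this:

{
  "level": "info",
  "time": "2025-03-14T09:42:17Z",
  "user": "jdoe@example.com",
  "host": "chat.corp.example.com",
  "method": "POST",
  "path": "/backend-api/conversation",
  "size": 2048,
  "allow": true,
  "request-id": "example-request-id"
}

Because entries are structured, they can be filtered by user, route, or policy outcome when compiling audit evidence.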
Pomerium also integrates with SIEM tools to support continuous monitoring, alerting, and reporting.
Pomerium provides the control needed to support safe and compliant use of AI tools inside regulated organizations. Per-route policy enforcement lets teams adopt new tools with confidence while maintaining full visibility and policy coverage.
Read Next: The Shadow AI Risk Playbook