TL;DR: Generative AI outran IT governance the moment ChatGPT appeared. More than 55 percent of employees now use GenAI through unapproved tools, often pasting sensitive code, customer data, or road-map docs straight into public models. Shadow AI is a fast track to data leaks, audit findings, and lost IP unless you wrap it in zero-trust guardrails.
Shadow AI shows up in internal agents, IDE plug-ins, and unsanctioned SaaS. These tools operate outside IT's visibility and introduce serious risk.
| Metric | Figure | So What? |
| --- | --- | --- |
| Corporate data flowing to GenAI tools | 485 percent YoY jump (Mar 2023 → Mar 2024) | Weekend ChatGPT traffic now tops last year’s mid-week peak. |
| GenAI apps per enterprise | 67 on average; 90 percent lack approval | Endpoint blocks miss browser copy-paste and self-serve SaaS. |
| Confidential data in ChatGPT inputs | 11 percent of all pasted content | Source code, patient charts, and M&A decks train someone else’s model. |
| High-profile breach | Samsung engineers leaked code, then triggered a global AI ban | Bans pushed staff to personal laptops, mobile hotspots, and burner accounts. |
Why the threat is rising
55 percent of employees use GenAI without approval, and 40 percent admit to using banned tools (UNLEASH).
27 percent of AI-bound data is now classified as sensitive (Cyberhaven).
Many GenAI safety tools inspect text only after it leaves the building.
They lack context about who sent the prompt or why.
Data is already outside your perimeter before redaction.
A blocked prompt often pushes users to another tool.
Security that ignores identity and policy is guesswork. Gartner notes that 30 percent of AI-adopting enterprises have already experienced GenAI-related security incidents (StackAware).
Conventional controls miss modern behavior. In a recent article, Pomerium CEO Bobby DeSimone shared how he watched a CISO brag about blocking ChatGPT at the firewall while an engineer next to him pasted code into Claude on a personal laptop, tethered to a phone.
Firewall blocks can be bypassed with phones or VPNs
Regex monitors miss plug-ins and API calls
Raw traffic logs do not connect actions to users or data
"If you can’t see violations, policy is just talk." — ShadowDV, Reddit
To counter shadow AI in your organization, you need three things: visibility into AI usage, policies that govern it, and the means to enforce those policies across AI workflows.
| Stage | Solution | Why It Matters |
| --- | --- | --- |
| Visibility | Discover every outbound call to AI APIs or web UIs. | Governance starts with a full inventory. |
| Policy | Define who, what, when rules tied to identity, device, data class, and request path. | Granular allow-lists preserve productivity and reduce risk. |
| Enforcement | Stop or transform traffic before it leaves and log every decision. | Auditors need proof, attackers need roadblocks. |
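Visibility has to come first, and you can bootstrap it from logs you probably already have. The sketch below is a minimal Python example, assuming newline-delimited JSON egress logs with `host` and `user` fields; the domain list, field names, and log path are illustrative and should be adapted to your proxy's actual schema.

```python
import json
from collections import Counter
from pathlib import Path

# Illustrative list of GenAI endpoints to watch for; extend with your own.
GENAI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "claude.ai",
    "gemini.google.com",
}

def inventory_ai_traffic(log_path: str) -> Counter:
    """Count outbound requests to known GenAI domains.

    Assumes one JSON object per line with 'host' and 'user' fields;
    adjust the field names to match your proxy's log schema.
    """
    hits: Counter = Counter()
    for line in Path(log_path).read_text().splitlines():
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines
        host = entry.get("host", "")
        if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
            hits[(entry.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # "egress.log" is a hypothetical path; point this at your real logs.
    for (user, host), count in inventory_ai_traffic("egress.log").most_common(10):
        print(f"{user:30} {host:25} {count:>6}")
```

Even this crude inventory usually surprises people: the top ten user/domain pairs are a fast way to see where shadow AI already lives.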
Identity-Aware Gateway
Pomerium sits between users (humans, services, agents) and any GenAI endpoint.
Enforces who can do what, and when.
Why: Least-privilege access stops accidental data leaks and insider misuse.
Routes traffic to the right destination based on identity and policy.
Why: Sensitive data must stay in your organization and not be shared unintentionally.
Issues short-lived credentials on every request.
Why: Long-lived API keys stored in agents or scripts are a breach waiting to happen.
Logs every request and every decision with full context.
Why: Auditors don’t want raw IP flows, they want user, policy, and outcome.
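To make the who-can-do-what-when idea concrete, here is a toy decision function in Python. It is a conceptual model of identity- and data-class-aware policy, not Pomerium's actual policy engine; the endpoints, group names, and classification levels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    groups: set[str]
    destination: str
    data_class: str  # e.g. "public", "internal", "confidential"

# Illustrative rules: which groups may reach which AI endpoints, and the
# highest data classification each endpoint may receive. This is a
# conceptual model, not Pomerium's policy language.
ALLOWED = {
    "api.internal-llm.example.com": {
        "groups": {"engineering", "data-science"},
        "max_class": "confidential",
    },
    "api.openai.com": {"groups": {"marketing"}, "max_class": "public"},
}
CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2}

def decide(req: Request) -> str:
    rule = ALLOWED.get(req.destination)
    if rule is None:
        return "deny: unapproved destination"
    if not (req.groups & rule["groups"]):
        return "deny: user group not approved for this endpoint"
    if CLASS_RANK[req.data_class] > CLASS_RANK[rule["max_class"]]:
        return "deny: data classification exceeds endpoint limit"
    return "allow"

print(decide(Request("alice", {"marketing"}, "api.openai.com", "confidential")))
# -> deny: data classification exceeds endpoint limit
```

The point of the sketch is the shape of the decision: identity, destination, and data class all participate, so a blocked request carries a reason an auditor can read.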
Learn more about Pomerium Agentic Access Management.
1. Deploy Pomerium Zero within your infrastructure.
2. Import IdP groups (Okta, Azure AD, and others) for identity context.
3. Write your first policy to allow approved roles and block everything else.
4. Send Pomerium’s enriched logs to your SIEM. Each entry already includes user, role, policy decision, and context. Build dashboards that show who queried which AI tool, when, and why, not just a raw traffic count (see the sketch after this list).
5. Share the before-and-after view with leadership. Highlight blocked uploads, approved usage, and changes in risk posture so they see progress at a glance.
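As a sketch of what the SIEM side of step 4 might look like, the Python below aggregates allow/deny decisions per user and route. The `user`, `route`, and `decision` field names, and the log file path, are assumptions for illustration; map them to the fields your gateway actually emits.

```python
import json
from collections import defaultdict

def summarize_decisions(log_lines):
    """Aggregate gateway decisions per user and AI tool.

    Assumes enriched, newline-delimited JSON entries with 'user',
    'route', and 'decision' fields (hypothetical names); map them
    to whatever your gateway actually emits.
    """
    summary = defaultdict(lambda: {"allow": 0, "deny": 0})
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines
        key = (entry.get("user", "unknown"), entry.get("route", "unknown"))
        decision = "allow" if entry.get("decision") == "allow" else "deny"
        summary[key][decision] += 1
    return summary

if __name__ == "__main__":
    # Hypothetical log file name; substitute your exported log stream.
    with open("pomerium-authorize.log") as f:
        for (user, route), counts in sorted(summarize_decisions(f).items()):
            print(f"{user:30} {route:30} "
                  f"allow={counts['allow']:>5} deny={counts['deny']:>5}")
```

A per-user, per-route table like this is exactly the before-and-after artifact step 5 asks you to bring to leadership.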
Shadow AI adoption happens everywhere and usually starts with well-meaning employees.
Zero trust enforcement restores visibility, applies policy in real time, and provides an auditable record.
Ready to regain control? Watch the Agentic Access Management demo or contact a solution expert to learn more.
Question: What is Shadow AI?
Answer: Shadow AI refers to any generative-AI tool, plug-in, or internal agent that employees use without formal approval. Because it operates outside IT visibility, it can expose sensitive data and create compliance gaps.
Question: Why is Shadow AI risky?
Answer: These tools often log or train on submitted prompts. If an employee pastes proprietary code or personal data, that information can leave your control, violating SOC 2, ISO 27001, HIPAA, or GDPR requirements.
Question: How can I detect Shadow AI activity?
Answer: Route all outbound traffic through an identity-aware proxy like Pomerium. This surfaces every call to AI domains, browser plug-ins, or API endpoints so you can build usage reports and alert on outliers.
Question: Can Pomerium block uploads to ChatGPT?
Answer: Yes. You can create a policy that denies POST requests containing sensitive classifications or blocks the ChatGPT web UI entirely while still allowing access to approved internal AI services.
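Conceptually, the sensitive-content check behind such a policy might look like the Python sketch below. It is illustrative only: the patterns are toy examples, and a production deployment would rely on a proper DLP classifier and declarative gateway policies rather than hand-rolled regexes.

```python
import re

# Illustrative patterns for sensitive content; real deployments would use
# proper DLP classifiers rather than a handful of regexes.
SENSITIVE_PATTERNS = [
    (re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), "private key"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible SSN"),
    (re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*\S+"), "credential assignment"),
]

def classify_prompt(body: str) -> list[str]:
    """Return the labels of any sensitive patterns found in a request body."""
    return [label for pattern, label in SENSITIVE_PATTERNS if pattern.search(body)]

def should_block(body: str) -> bool:
    """Deny the outbound request if any sensitive classification matches."""
    findings = classify_prompt(body)
    if findings:
        print(f"blocking upload: matched {', '.join(findings)}")
        return True
    return False

print(should_block("api_key = sk-live-abc123"))  # -> True, after logging the match
```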