TL;DR — The OWASP Top 10 for Large Language Model (LLM) applications highlights prompt injection, improper output handling, and sensitive information disclosure as critical risks. This guide walks through the Top 10 and shows where Zero Trust access controls, starting free with Pomerium Zero, are most effective: LLM01 Prompt Injection and LLM02 Sensitive Information Disclosure.
OWASP publishes the industry's most widely recognized lists of software security risks. In 2023 it introduced a list specifically for LLM applications, and that list was updated for 2025. Teams building copilots, chatbots, and agents now have a framework for understanding the threats unique to prompt-driven applications. The ten risks, in brief:
LLM01 Prompt Injection: Malicious prompts alter model behavior, exfiltrate data, or call tools.
LLM02 Sensitive Information Disclosure: Models expose private or regulated data in responses.
LLM03 Supply Chain: Risks enter through external APIs, plugins, or libraries.
LLM04 Data and Model Poisoning: Attackers insert harmful data into training or fine-tuning pipelines.
LLM05 Improper Output Handling: Model output leads to unsafe actions in downstream systems.
LLM06 Excessive Agency: Agents perform actions beyond their intended scope.
LLM07 System Prompt Leakage: Attackers extract system prompts that contain hidden instructions or secrets.
LLM08 Vector and Embedding Weaknesses: Manipulation of embeddings or vector stores exposes vulnerabilities.
LLM09 Misinformation: Models generate inaccurate or misleading outputs that influence decisions.
LLM10 Unbounded Consumption: LLMs consume excessive resources, leading to performance degradation or denial of service.
Pomerium focuses on access control. Two risks in the OWASP Top 10 align directly with what Pomerium provides.
LLM01 Prompt Injection. The risk: a model can be manipulated into reaching data or services that should not be available to the requesting user. Prompt filters reduce the attack surface, but they do not prevent over-permissioned access.
How Pomerium helps:
Identity-aware access at your network’s edge
Route-level policy to restrict tools and data sources by user or group (a sketch follows this list)
Enforcement that blocks unauthorized calls before they reach the target
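For a concrete sketch of that route-level restriction, here is what it can look like in Pomerium's YAML route configuration. The hostnames, upstream address, and the "ai-platform" group are placeholders, not values from this post:

```yaml
routes:
  # Placeholder addresses: swap in your own external hostname and internal upstream.
  - from: https://llm-tools.corp.example.com
    to: http://llm-tools.internal:8080
    policy:
      - allow:
          and:
            # Only authenticated users in the example "ai-platform" group may reach
            # this tool endpoint; every other request is denied at the proxy,
            # before it can touch the model's tools or data sources.
            - groups:
                has: "ai-platform"
```

Because the check happens at the proxy, a prompt that tricks the model into calling a restricted tool still fails authorization; the request never carries the identity the policy requires.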
LLM02 Sensitive Information Disclosure. The risk: models often connect to sensitive systems such as customer databases or code repositories. Without strict access boundaries, a single request can reveal more than intended.
How Pomerium helps:
Short-lived, identity-bound credentials replace static API keys
Per-route authorization ensures only approved users or agents access sensitive systems (sketched below)
Decision logs create a clear audit trail for compliance and incident response
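As a sketch of per-route authorization in front of a sensitive system (again, all names are placeholders), the route below forwards the verified identity to the upstream with `pass_identity_headers` instead of relying on a shared static API key, and restricts access to one group:

```yaml
routes:
  # Placeholder route guarding a hypothetical internal customer-data API.
  - from: https://customer-api.corp.example.com
    to: http://customer-api.internal:9000
    # Forward the verified user identity (signed headers) to the upstream so it can
    # authorize per user rather than trusting a long-lived shared secret.
    pass_identity_headers: true
    policy:
      - allow:
          and:
            - domain:
                is: example.com
            - groups:
                has: "data-readers"
```

Every allow or deny decision on a route like this is also written to the authorization logs, which is where the audit trail mentioned above comes from.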
Pomerium Zero gives teams an easy way to put these protections in place:
Deploy a lightweight identity-aware proxy in your infrastructure
Connect your identity provider (Okta, Azure AD, and others)
Define your first policy to secure an AI route or internal application (an illustrative config follows this list)
Monitor decision logs for denied requests, including injection attempts that try to reach unauthorized routes
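Pomerium Zero drives most of this from its hosted console, so treat the snippet below as an illustrative, self-hosted-style equivalent of the identity-provider connection and the first route policy, not a copy-paste config. Every value is a placeholder, and required secrets, certificates, and the Zero enrollment itself are omitted:

```yaml
# config.yaml -- illustrative only; every value below is a placeholder.
# Connect an identity provider (Okta shown here; Azure AD and others work the same way).
authenticate_service_url: https://authenticate.corp.example.com
idp_provider: okta
idp_provider_url: https://example.okta.com
idp_client_id: YOUR_CLIENT_ID
idp_client_secret: YOUR_CLIENT_SECRET

# Define a first policy securing an AI route or internal application.
routes:
  - from: https://copilot.corp.example.com
    to: http://copilot.internal:3000
    policy:
      - allow:
          and:
            - domain:
                is: example.com
```

Denied requests against that route then show up in the decision logs, which covers the monitoring step in the list above.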
The OWASP Top 10 for LLMs outlines the risks every team building with AI needs to understand. Pomerium Zero helps reduce exposure to two of the most pressing risks: prompt injection and sensitive data disclosure. Identity-aware, policy-backed access is the starting point for secure adoption of LLMs.
Read the docs to get started building with MCP.
Explore the latest MCP security risks and how to address them.
Learn how agentic access management keeps AI workflows secure.
The OWASP Top 10 for Large Language Model Applications is a list of the most critical security risks for AI systems. It helps teams building copilots, chatbots, and agents identify and mitigate threats specific to prompt-driven applications.
Prompt injection is when a malicious input causes a model to ignore its instructions or access unauthorized resources. It can lead to data leaks, misuse of tools, or exposure of sensitive systems.
Pomerium enforces identity-aware, policy-based access at the edge. This prevents unauthorized calls caused by prompt injection and limits sensitive data exposure, especially for OWASP risks LLM01 and LLM02.
Yes, Pomerium Zero is free: a self-hosted gateway that lets teams secure AI apps and internal services with Zero Trust guardrails.