
The OWASP Top 10 for LLMs and How to Defend Against Them


TL;DR — The OWASP Top 10 for Large Language Model (LLM) applications highlights prompt injection, insecure output handling, and data exposure as critical risks. This guide walks through the Top 10 and shows where Zero Trust access controls are most effective: LLM01 (Prompt Injection) and LLM02 (Sensitive Information Disclosure). You can start putting those controls in place for free with Pomerium Zero.

Why the OWASP Top 10 Matters

OWASP publishes the most widely recognized lists of software security risks. In 2023 it introduced a list specifically for LLM applications, then updated it for 2025. Teams building copilots, chatbots, and agents now have a framework for understanding the threats unique to prompt-driven applications.

The OWASP Top 10 for LLMs (2025)

Prompt Injection (LLM01)

Malicious prompts alter model behavior, exfiltrate data, or call tools.

Sensitive Information Disclosure (LLM02)

Models expose private or regulated data in responses.

Supply Chain Vulnerabilities (LLM03)

Risks enter through external APIs, plugins, or libraries.

Data and Model Poisoning (LLM04)

Attackers insert harmful data into training or fine-tuning pipelines.

Improper Output Handling (LLM05)

Model output leads to unsafe actions in downstream systems.

Excessive Agency (LLM06)

Agents perform actions beyond their intended scope.

System Prompt Leakage (LLM07)

Attackers extract system prompts that contain hidden instructions or secrets.

Vector and Embedding Weaknesses (LLM08)

Weaknesses in embeddings or vector stores let attackers poison retrieval results or extract stored data.

Misinformation (LLM09)

Models generate inaccurate or misleading outputs that influence decisions.

Unbounded Consumption (LLM10)

LLMs consume excessive resources, leading to performance degradation or DoS.

Where Pomerium Delivers the Most Value

Pomerium focuses on access control. Two risks in the OWASP Top 10 align directly with what Pomerium provides.

LLM01: Prompt Injection

The risk: A model can be manipulated into reaching data or services that should not be available. Prompt filters reduce surface area but do not prevent over-permissioned access.

How Pomerium helps:

  • Identity-aware access at your network’s edge

  • Route-level policy to restrict tools and data sources by user or group

  • Enforcement that blocks unauthorized calls before they reach the target
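Route-level policy in Pomerium is expressed as YAML attached to each route. The sketch below is illustrative only (the hostnames and group name are hypothetical); it restricts an internal tool endpoint so that only members of an approved group can reach it, no matter what an injected prompt asks the model to do:

```yaml
# Hypothetical route: only the ml-engineering group may reach this tool
# server; every other caller is denied before the request leaves the proxy.
routes:
  - from: https://mcp-tools.corp.example.com   # identity-aware edge
    to: http://mcp-server.internal:8080        # internal tool/data source
    policy:
      - allow:
          and:
            - groups:
                has: "ml-engineering"
```

Because the check happens at the proxy rather than in the prompt, a model that is tricked into calling an off-limits tool still fails authorization at the edge.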

LLM02: Sensitive Information Disclosure

The risk: Models often connect to sensitive systems such as customer databases or code repositories. Without strict access boundaries, a single request can reveal more than intended.

How Pomerium helps:

  • Short-lived, identity-bound credentials replace static API keys

  • Per-route authorization ensures only approved users or agents access sensitive systems

  • Decision logs create a clear audit trail for compliance and incident response
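The decision logs mentioned above are emitted as JSON lines, which makes them easy to audit programmatically. The sketch below assumes a simplified log shape (the `allow`, `email`, and `path` fields are modeled on Pomerium's authorize logs, but treat them as assumptions for your deployment) and counts denials per identity:

```python
# Sketch: summarize denied requests from JSON-lines decision logs.
# Field names ("allow", "email", "path") are assumptions modeled on
# Pomerium's authorize-log output; adjust for your deployment.
import json
from collections import Counter

def denied_by_user(log_lines):
    """Count authorization denials per identity in JSON-lines logs."""
    denials = Counter()
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (startup banners, etc.)
        if entry.get("allow") is False:
            denials[entry.get("email", "unknown")] += 1
    return denials
```

A spike of denials for a single agent identity is often the first visible sign of a prompt-injection attempt probing for reachable systems.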

Start Free with Pomerium Zero

Pomerium Zero gives teams an easy way to put these protections in place:

  1. Deploy a lightweight identity-aware proxy in your infrastructure

  2. Connect your identity provider (Okta, Azure AD, and others)

  3. Define your first policy to secure an AI route or internal application

  4. Monitor logs for denied attempts, including prompt injection patterns
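Step 3 amounts to a short route definition. A minimal sketch (the hostnames are placeholders, and your identity provider is connected separately in step 2) might look like:

```yaml
# Hypothetical first route: put an internal AI app behind SSO and
# allow only signed-in users from your own domain.
routes:
  - from: https://chat.corp.example.com
    to: http://llm-app.internal:3000
    policy:
      - allow:
          or:
            - domain:
                is: corp.example.com
```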

The OWASP Top 10 for LLMs outlines the risks every team building with AI needs to understand. Pomerium Zero helps reduce exposure to two of the most pressing risks: prompt injection and sensitive data disclosure. Identity-aware, policy-backed access is the starting point for secure adoption of LLMs.

Frequently Asked Questions (FAQ)

What is the OWASP Top 10 for LLMs?

The OWASP Top 10 for Large Language Model Applications is a list of the most critical security risks for AI systems. It helps teams building copilots, chatbots, and agents identify and mitigate threats specific to prompt-driven applications.

What is prompt injection and why is it dangerous?

Prompt injection is when a malicious input causes a model to ignore its instructions or access unauthorized resources. It can lead to data leaks, misuse of tools, or exposure of sensitive systems.

How does Pomerium help with LLM security?

Pomerium enforces identity-aware, policy-based access at the edge. This prevents unauthorized calls caused by prompt injection and limits sensitive data exposure, especially for OWASP risks LLM01 and LLM02.

Is Pomerium Zero free to use?

Yes. Pomerium Zero is a free, self-hosted gateway that allows teams to secure AI apps and internal services with Zero Trust guardrails.
