SANS Institute is a global leader in cybersecurity training and research, known for equipping defenders with the skills and tools to protect critical systems.
Their latest release, the Critical AI Security Guidelines, offers clear, actionable recommendations for building and securing AI systems.
The report opens like every other piece of content you’ve read in the past 24 months, throwing lines like “AI continues to revolutionize enterprises and security practices” into the echo chamber. But once you push past that generic framing, the report offers valuable insights across six key areas, each including practical steps to help you implement AI securely and reduce real risk.
This post focuses on three of those six categories: access controls, monitoring, and governance, risk, and compliance (GRC). We'll look at each through the lens of reducing access risk with Pomerium.
LLMs exploded onto the scene with use cases poised to displace search for organizations and individuals. In my own experience, the most effective workers are those who can acquire the right context in the shortest amount of time. The best developers know where to find answers quickly. The best security professionals stay plugged into threats and vulnerabilities, actively patching holes before they spread. AI follows the same pattern: it becomes most effective when given the right organizational context.
Access controls enforce the principle of least privilege, keeping unauthorized individuals from actions that could cause catastrophic damage. Builders often start with hard-coded privileges or OAuth2 for agentic access over protocols like Model Context Protocol (MCP) or Agent-to-Agent (A2A). SANS makes it clear: as you deploy AI throughout your organization, apply least privilege and adopt a Zero Trust model to authenticate and authorize every request, preventing unauthorized access and model tampering.
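To make that concrete, here is a minimal Python sketch of per-request, least-privilege authorization for an agentic caller. The identities, scopes, and tool names are hypothetical, and in production you would delegate this decision to an identity-aware access layer rather than hand-rolling it:

```python
# A minimal sketch of per-request, least-privilege authorization.
# Identities and scopes below are hypothetical examples.

ALLOWED_SCOPES = {
    # Each identity gets only the scopes it needs (least privilege).
    "agent:support-bot": {"tickets:read"},
    "user:alice@example.com": {"tickets:read", "tickets:write"},
}

def authorize(identity: str, scope: str) -> bool:
    """Authorize one request; unknown identities are denied by default."""
    return scope in ALLOWED_SCOPES.get(identity, set())

# Zero Trust: every request is checked on its own, never by session alone.
assert authorize("agent:support-bot", "tickets:read")
assert not authorize("agent:support-bot", "tickets:write")
```

The key property is deny-by-default: an agent that was never granted a write scope cannot acquire one implicitly.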
“AI access controls now extend beyond users to connected devices, applications, APIs, and other systems. As you deploy AI, least privilege and Zero Trust should be central to your strategy.” — SANS Critical AI Security Guidelines, v1.2
Learn why per-request authorization is the foundation of Zero Trust here.
Retrieval-augmented generation (RAG) architectures often rely on vector databases to store and retrieve semantically indexed data fed into LLMs. The risk comes when that augmentation data isn’t secured properly. If tampered with, it can push models toward misleading or even dangerous outputs.
“Protecting augmentation data requires more than just applying access controls. Data stored in these databases should be treated as sensitive, especially if it influences LLM responses. If tampered with, this data can cause models to generate misleading or dangerous outputs.” — SANS Critical AI Security Guidelines
Basic controls like least privilege still matter, but they’re not enough. Read and write permissions should be precise, and every change needs to be logged and auditable. The strongest defenses also provide clear explainability: what happened, which policy allowed it, and why it was permitted. That level of traceability is what turns access control from a checkbox into a safeguard.
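Here is a hedged Python sketch of what that looks like in front of a vector store: reads and writes require distinct permissions, and every decision is recorded along with the rule that produced it. The store, identities, and policy names are illustrative, not any particular product's API:

```python
import json
import time

# Hypothetical policy table: identity -> permitted actions on the store.
POLICIES = {
    "svc:ingest-pipeline": {"read", "write"},
    "agent:rag-chatbot": {"read"},
}

audit_log = []  # in production, an append-only, tamper-evident sink

def access_vector_store(identity: str, action: str) -> bool:
    """Check a read or write against policy and log an explainable entry."""
    allowed = action in POLICIES.get(identity, set())
    audit_log.append({
        "ts": time.time(),
        "identity": identity,
        "action": action,
        "decision": "allow" if allowed else "deny",
        "policy": f"vector-store/{identity}",  # which rule decided, and why
    })
    return allowed

access_vector_store("agent:rag-chatbot", "write")  # denied, and logged
print(json.dumps(audit_log[-1], indent=2))
```

Every entry answers the questions above: what happened, which policy decided it, and what the outcome was.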
An organization must have visibility into the work employees are performing with internal LLM applications. Any application that touches, contains, or works with sensitive data should log clearly and by default for auditing: which identity requested an action (human, service, or agent), which policies allowed or denied the request, and whether the action should be restricted according to business requirements.
“Effective monitoring is essential to maintaining AI security over time. AI models and systems must be continuously observed for performance degradation, adversarial attacks, and unauthorized access. Implementing logging, anomaly detection, and drift monitoring ensures AI applications remain reliable and aligned with intended behaviors.” — SANS Critical AI Security Guidelines
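Drift monitoring can start small. The sketch below watches a per-response quality signal (a relevance score is assumed here purely for illustration) and flags when its rolling mean strays from a baseline; the thresholds are assumptions, not recommendations:

```python
from collections import deque

class DriftMonitor:
    """Flag drift when the rolling mean of a quality metric strays
    too far from a fixed baseline. All thresholds are illustrative."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def observe(self, score: float) -> bool:
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        return abs(mean - self.baseline) > self.tolerance  # True = drift

monitor = DriftMonitor(baseline=0.85, tolerance=0.10)
for score in [0.84, 0.80, 0.62, 0.55, 0.51]:  # degrading relevance scores
    if monitor.observe(score):
        print("drift detected; alert and investigate")
```

The same shape works for other signals: request rates per identity, refusal rates, or anomalous access patterns.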
Security teams can’t credibly say “AI won’t be used here.” Nearly every modern enterprise tool already embeds AI, and blocking adoption only increases the chance of unmanaged usage and adds to the Shadow AI problem. The true challenge is not prevention; it is governance.
SANS highlights the importance of governance frameworks that align with regulations, risk-based decision-making, and continuous testing of both applications and models. This includes practices such as red teaming, penetration testing of connected systems, and monitoring for drift over time. Governance must be treated as an ongoing process.
Two concepts stand out:
Model registries and AI bills of materials (AIBOMs): These help track model versions, provenance, and dependencies. They provide traceability and rollback while also introducing sensitive metadata that must be secured (see the sketch after this list).
AI GRC boards: Formal structures that align AI adoption with policy, security standards, and business objectives. They also provide visible oversight to regulators and stakeholders.
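As a sketch of what an AIBOM entry might capture (the field names here are illustrative, not a standard schema), a minimal record tracks version, provenance, dependencies, and a checksum that makes rollback safe:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIBOMEntry:
    """One model's entry in an AI bill of materials; fields are illustrative."""
    model_name: str
    version: str
    source: str               # provenance: where the weights came from
    sha256: str               # integrity check; enables safe rollback
    dependencies: tuple = ()  # base models, datasets, libraries

registry = {
    ("support-summarizer", "1.3.0"): AIBOMEntry(
        model_name="support-summarizer",
        version="1.3.0",
        source="https://models.internal.example.com/support-summarizer",
        sha256="<checksum of the released weights>",
        dependencies=("base-llm-7b@2.1", "tickets-corpus@2024-11"),
    ),
}
# Note: the registry itself is sensitive metadata and needs access controls.
```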
Most governance frameworks define what should happen but leave gaps when it comes to enforcement. Pomerium addresses this by applying governance policies at the point of access. Every AI request, whether from a human, service, or agent, is authenticated, authorized, logged, and tied back to policy. This creates the audit trail compliance requires and gives organizations the transparency needed to demonstrate responsible AI use.
Securing AI is not just about building smarter models; it is about making sure every interaction with those models is authorized, auditable, and aligned with policy. SANS lays out the principles. Pomerium makes them enforceable in practice.
With a layered security approach built around how AI interacts with sensitive data, security teams can safely deploy, monitor, and audit AI tools, workflows, and processes as the organization scales.
If you’re unsure of where to begin, reach out to the Pomerium team today.