A quick-reference checklist for AIUC-1 compliance. Five layers, 28 controls, one page. Print it, pin it, pass the audit.
No single tool passes AIUC-1. You need five layers. Here's the checklist.
Deploy first. Everything else depends on this.
Centralized gateway deployed — all agent-to-tool traffic flows through a single enforcement point
Tool-level authorization active — agents can only access specific tools they're authorized for, not entire servers
Identity-aware policies configured — every request is authenticated via SSO/JWT with verified user context
Session tracking enabled — multi-step agent workflows tracked as continuous sessions
Comprehensive audit logging active — every request logged with: agent identity, user context, tool called, parameters, policy decision, timestamp, session ID
Rate limiting configured — per-agent and per-route request limits prevent scraping and DoS
Multi-tenant isolation validated — Customer A's agents cannot access Customer B's data
Deployment model assessed — cloud vs. self-hosted decision documented
AIUC-1 controls covered: B006, B004, B007, A003, A005, E004, E015, E005
Tool: Pomerium Agentic Gateway
Controls what agents output — independent of what they access.
Input filtering deployed — prompt injection and jailbreak attempts caught before reaching the model
Output filtering deployed — harmful, toxic, biased, or deceptive content blocked before reaching users
PII detection active — personal data in agent outputs is detected and redacted
Hallucination detection enabled — fabricated facts and data flagged before delivery
CBRN guardrails configured — content policies block chemical, biological, radiological, and nuclear misuse
Filtering events logged — every blocked or flagged input/output is logged with reason
AIUC-1 controls covered: C003, D001, B005, F001, F002
Tools: Azure AI Content Safety, AWS Bedrock Guardrails, Protect AI, Anthropic/OpenAI built-in safety
Quarterly validation that Layers 1 and 2 actually work.
Adversarial testing program established — quarterly red-teaming with documented methodology
Pre-deployment testing required — all agent changes tested against risk categories before production
Third-party safety evaluation scheduled — external firm evaluates safety at least quarterly
Hallucination benchmarks tracked — hallucination rates measured and compared to baseline quarterly
Regression suite automated — previous adversarial findings re-tested every quarter
Test results documented — pass/fail, findings, remediation steps, sign-off
AIUC-1 controls covered: B001, C002, C004, D002
Services: Schellman (authorized AIUC-1 auditor), HackerOne AI, Bishop Fox, internal security teams
Turns raw logs into intelligence and compliance reports.
Log aggregation pipeline configured — gateway logs, model logs, and application logs centralized
Anomaly detection rules active — unusual request patterns, suspected attacks flagged automatically
Compliance dashboards built — real-time view of: top agents by request count, denied requests, data access trends
Alerting configured — automated alerts for policy violations, rate limit breaches, suspected exfiltration
Audit report generation automated — quarterly compliance reports generated from aggregated data
AIUC-1 controls covered: B002, E015 (augmented)
Tools: Splunk, Elastic, Microsoft Sentinel, Datadog, Grafana
The organizational layer auditors evaluate alongside technology.
Data use policies written — what agents can access, how outputs can be used, retention periods
Risk taxonomy defined — formal classification of AI risks (hallucination, misuse, PII leak, adversarial attack)
Incident response plans created — playbooks for: data breach, harmful output, hallucination in sensitive context
Accountability matrix documented — RACI for every AIUC-1 control with named owners
Vendor due diligence process established — evaluation checklist for model providers, tool vendors, auditors
CBRN safeguards documented — written guardrails against catastrophic misuse scenarios
All governance docs reviewed and signed off — CISO, legal, business leadership approval
AIUC-1 controls covered: A001, A002, C001, E001, E002, E003, E004, E006, F001, F002
Phase | Weeks | Focus | Controls |
|---|---|---|---|
Phase 1 | 1–4 | Deploy control plane + audit logging | B006, B004, B007, A003, A005, E004, E015, E005 |
Phase 2 | 5–8 | Content filtering + model safety | C003, D001, B005, F001/F002 |
Phase 3 | 9–12 | Governance documentation | A001, A002, C001, E001–E003, E006 |
Phase 4 | Ongoing | Quarterly testing cycle | B001, C002, C004, D002 |
Answer these five questions. If any answer is "no," you have a gap:
Does every agent request flow through a centralized enforcement point? (If no → you need Layer 1)
Are agent outputs filtered for harmful content, PII, and hallucinations? (If no → you need Layer 2)
Do you run adversarial tests at least quarterly? (If no → you need Layer 3)
Can you generate a compliance report from your logs in under an hour? (If no → you need Layer 4)
Does every AIUC-1 control have a documented owner? (If no → you need Layer 5)
The most common mistake is starting with governance (Layer 5) or observability (Layer 4) before deploying a control plane (Layer 1). Without centralized enforcement and logging, everything else is unverifiable.
Deploy the control plane first. The rest follows.
Further reading: NIST AI Risk Management Framework · MITRE ATLAS · OWASP LLM Top 10 · AIUC-1 Standard
Stay up to date with Pomerium news and announcements.
Embrace Seamless Resource Access, Robust Zero Trust Integration, and Streamlined Compliance with Our App.