Technology RadarTechnology Radar

OWASP Top 10 for LLM Applications

governanceai-security
Adopt

The OWASP Top 10 for LLM Applications (2025 edition) is the definitive community-driven list of critical security risks specific to LLM systems. Developed by 500+ experts, it covers prompt injection (#1 for the second edition), sensitive information disclosure (#2, jumped from #6), supply chain risks, excessive agency, and more. Tools like Semgrep already map their rules to these categories.

The 2025 List

  1. LLM01 - Prompt Injection — LLMs cannot separate instructions from data
  2. LLM02 - Sensitive Information Disclosure — models memorize and reproduce training data
  3. LLM03 - Supply Chain — compromised models, plugins, training data
  4. LLM04 - Data and Model Poisoning — manipulation of training data
  5. LLM05 - Improper Output Handling — insufficient validation of LLM outputs
  6. LLM06 - Excessive Agency — agents given too much functionality or autonomy
  7. LLM07 - System Prompt Leakage — extraction of system instructions
  8. LLM08 - Vector and Embedding Weaknesses — attacks on RAG systems
  9. LLM09 - Misinformation — hallucinations and incorrect outputs
  10. LLM10 - Unbounded Consumption — resource exhaustion / denial-of-wallet

Why It Matters

This is the baseline security standard for anyone building or using LLM-powered tools. If you're deploying AI coding agents, every item on this list is relevant to your threat model. OWASP has also released a separate Top 10 for Agentic Applications (2026) covering autonomous agent risks like goal hijacking, tool misuse, and rogue agents.

Strengths

  • Practical, actionable, community-driven
  • Regularly updated (2023, 2025 editions)
  • Referenced by major security vendors and compliance frameworks
  • Complemented by the new Agentic Applications Top 10

Limitations

  • High-level guidance — implementation details vary by architecture
  • Focused on LLM applications, not classical ML (separate lists exist)
  • Some categories overlap in practice