Traditional STRIDE threat modeling maps imperfectly to AI systems — prompt injection is "Tampering," excessive agency is "Elevation of Privilege," but nondeterminism and tool expansion are fundamentally new. Microsoft's February 2026 guidance and the CSA's MAESTRO framework both extend STRIDE for agentic AI, while STRIDE GPT automates the process using LLMs.
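The imperfect mapping described above can be made concrete as a lookup table. This is an illustrative sketch only: the category assignments follow the correspondences noted in the text, and threats mapped to None are the "fundamentally new" ones that classic STRIDE does not capture.

```python
# Illustrative mapping of AI-specific threats onto classic STRIDE
# categories. Entries set to None have no clean STRIDE home and
# call for an extended framework such as MAESTRO.
AI_THREAT_TO_STRIDE = {
    "prompt_injection": "Tampering",
    "excessive_agency": "Elevation of Privilege",
    "nondeterministic_behavior": None,   # fundamentally new
    "tool_surface_expansion": None,      # fundamentally new
}

def stride_category(threat: str) -> str:
    """Return the mapped STRIDE category, or flag the gap."""
    category = AI_THREAT_TO_STRIDE.get(threat)
    return category if category else "UNMAPPED: needs extended framework"

print(stride_category("prompt_injection"))
print(stride_category("nondeterministic_behavior"))
```

A table like this is a useful first triage pass: anything that lands in UNMAPPED is exactly the residue that MAESTRO-style augmentation is meant to cover.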
Key Frameworks
- STRIDE-AI (IEEE): Asset-centered methodology applying the FMEA (failure mode and effects analysis) process to identify ML failure modes and map them to STRIDE categories.
- MAESTRO (Cloud Security Alliance, Feb 2025): Purpose-built for agentic AI threat modeling. Notes that STRIDE is a good starting point but needs "significant augmentation."
- STRIDE GPT: Open-source tool using LLMs to generate threat models. Updated with OWASP LLM Top 10 and Agentic Top 10 integration.
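STRIDE-AI's asset-centered FMEA step can be sketched as a simple walk over assets and their failure modes, each mapped to a STRIDE category and ranked by severity. The asset names, failure modes, and severity scores below are hypothetical examples, not part of the methodology's specification:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    asset: str
    mode: str
    stride: str   # STRIDE category the failure maps to
    severity: int # 1 (minor) .. 5 (critical), FMEA-style

# Hypothetical inventory for an ML pipeline: enumerate each
# asset's failure modes, then map each one to a STRIDE category.
failure_modes = [
    FailureMode("training_data", "label poisoning", "Tampering", 5),
    FailureMode("model_weights", "unauthorized export", "Information Disclosure", 4),
    FailureMode("inference_api", "request flooding", "Denial of Service", 3),
]

# Rank by severity so mitigation work is prioritized, as FMEA intends.
for fm in sorted(failure_modes, key=lambda f: -f.severity):
    print(f"{fm.asset}: {fm.mode} -> {fm.stride} (sev {fm.severity})")
```

The point of the exercise is the ordering: starting from assets rather than attacker goals forces every ML component to appear in the model, even ones with no obvious classic-STRIDE threat.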
Microsoft's Three AI-Specific Challenges (Feb 2026)
- Nondeterminism — must reason about ranges of behavior, not single outcomes
- Instruction-following bias — models optimized for helpfulness are vulnerable to manipulation
- System expansion through tools — agentic systems invoke APIs, persist state, and trigger workflows, allowing failures to compound
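The first challenge above, reasoning about ranges of behavior rather than single outcomes, can be sketched as a property check over many sampled completions. The model stub and invariant here are hypothetical stand-ins; a real evaluation would call an actual model and use far more samples:

```python
import random

def model_stub(prompt: str) -> str:
    """Stand-in for a nondeterministic LLM call (hypothetical)."""
    return random.choice(["SAFE: summary A", "SAFE: summary B", "SAFE: summary C"])

def holds_over_samples(prompt: str, invariant, n: int = 50) -> bool:
    """Check an invariant over a range of sampled behaviors,
    not a single outcome, since any one run proves little."""
    return all(invariant(model_stub(prompt)) for _ in range(n))

# Invariant: every sampled completion must stay inside the SAFE prefix.
ok = holds_over_samples("summarize the doc", lambda out: out.startswith("SAFE"))
print(ok)
```

This is the shape of the shift Microsoft describes: the threat-model question becomes "can any plausible sample violate the property?", which a single-run test cannot answer.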
Strengths
- Broad developer familiarity makes STRIDE an accessible starting point
- Systematic, structured approach
- MAESTRO and STRIDE GPT address AI-specific gaps
Limitations
- Original six STRIDE categories don't fully capture AI threats
- Nondeterminism is fundamentally hard to model
- Requires pairing with LINDDUN (privacy), OWASP LLM Top 10, or MAESTRO