# About This Tech Radar

## What Is This?
This is a tech radar for security in AI-assisted software development — a living, opinionated snapshot of the tools, practices, and frameworks that matter for keeping AI-generated code and AI-powered workflows secure.
AI coding agents write code fast. They also introduce novel security risks: hallucinated dependencies that could be typosquatted, subtly insecure patterns that pass tests, leaked secrets in prompts, and supply chain attacks targeting the AI toolchain itself. This radar tracks what's worth adopting to defend against those risks.
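One concrete defense against hallucinated dependencies is to flag requested package names that are near-misses of packages you already trust, before anything is installed. The sketch below uses a similarity ratio against an illustrative allowlist; the `KNOWN_PACKAGES` set, the `suspicious_names` helper, and the 0.85 threshold are all assumptions for demonstration, not a specific tool from this radar.

```python
# Sketch: flag dependency names that look like near-misses of trusted packages,
# a simple heuristic for catching typosquats of hallucinated package names.
# The allowlist is illustrative; a real check would compare against your
# lockfile or an internal registry.
from difflib import SequenceMatcher

KNOWN_PACKAGES = {"requests", "numpy", "pandas", "cryptography"}

def suspicious_names(dependencies, threshold=0.85):
    """Return (dep, lookalike, score) for deps that closely resemble,
    but do not exactly match, a known package name."""
    flagged = []
    for dep in dependencies:
        if dep in KNOWN_PACKAGES:
            continue  # exact match: trusted, not a typosquat candidate
        for known in KNOWN_PACKAGES:
            score = SequenceMatcher(None, dep, known).ratio()
            if score >= threshold:
                flagged.append((dep, known, round(score, 2)))
                break
    return flagged

print(suspicious_names(["requests", "requestss", "numpyy", "leftpad"]))
```

In a real pipeline this check would sit in a pre-install hook, so an agent-suggested `requestss` is blocked before it ever reaches the package manager.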
The radar is inspired by the ThoughtWorks Technology Radar and built with AOE Technology Radar.
## How to Read the Radar
The radar is divided into four quadrants, each covering a different dimension of security for AI-powered development:
| Quadrant | What It Covers |
|---|---|
| AI Security Scanning | SAST, DAST, and AI-augmented vulnerability scanners that understand LLM-generated code patterns |
| Supply Chain Security | Dependency scanners, SBOM generators, artifact signing, and defenses against compromised or hallucinated packages |
| Secret Detection & Management | Tools for finding leaked credentials, managing secrets, and preventing AI agents from exfiltrating sensitive data |
| Security Practices & Frameworks | Methodologies, standards, and workflows — threat modeling, secure prompting, OWASP guidelines, and governance |
Looking for AI coding tools? See the Agentic Engineering radar. For model evaluations, see the AI Models & Benchmarks radar.
Each item on the radar sits in one of four rings:
| Ring | Meaning |
|---|---|
| Adopt | Proven and strongly recommended. Use these today. |
| Trial | Worth using on real projects. Ready but not yet standard. |
| Assess | Explore and evaluate. Keep an eye on these — they may be important soon. |
| Hold | Approach with caution. These may be superseded, risky, or not yet mature enough. |
## Why a Separate Security Radar?
Security in AI-assisted development is a fast-moving, high-stakes domain that deserves focused attention. The risks are different from traditional AppSec:
- AI-generated code has higher vulnerability rates when accepted without review — studies consistently show this across SQL injection, XSS, and cryptographic misuse.
- Supply chain attacks now target AI toolchains — the ClawHavoc incident (Feb 2026) exploited MCP server trust boundaries, and hallucinated package names create typosquatting opportunities at scale.
- Secrets leak through new vectors — prompts, agent logs, and context windows become exfiltration paths that traditional secret scanners don't cover.
- Governance is still catching up — most organizations lack policies for AI agent permissions, code review requirements for AI output, and acceptable use of AI in security-sensitive contexts.
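To make the "new exfiltration vectors" risk concrete, here is a minimal sketch of scanning an agent transcript for secret-shaped strings before it is logged or shared. The patterns and the `scan_transcript` helper are illustrative assumptions; dedicated scanners ship far larger rule sets.

```python
# Sketch: scan an AI agent's transcript for secret-shaped strings before the
# text is persisted or shared. Patterns are illustrative, not exhaustive.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_transcript(text):
    """Return (rule_name, matched_string) pairs for every secret-like hit."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

log = "agent ran: export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE"
print(scan_transcript(log))
```

The same check can run on prompts and context windows before they leave the machine, covering the paths a git-focused secret scanner never sees.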
This radar tracks the tools and practices that address these AI-specific risks, alongside the established AppSec tools that remain essential.
## Contributing
This radar is stored as simple Markdown files in the radar/ directory. Each item is one .md file. You don't need to know JavaScript to contribute — just edit or add a Markdown file and open a pull request.
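A new entry might look like the sketch below. The frontmatter fields shown here (`title`, `ring`, `quadrant`) and the quadrant slug are assumptions based on common tech-radar conventions; check an existing file in radar/ for the canonical format.

```markdown
---
title: "Example Secret Scanner"
ring: assess
quadrant: secret-detection
---

One or two paragraphs explaining what the tool does, why it matters for
AI-assisted development, and why it sits in this ring.
```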
See the GitHub repository for details.
## Radar Ring Definitions in Depth

### Adopt
Technologies in this ring have proven themselves across a range of projects and teams. The risk of adopting them is low. If you're not using them yet, you should be.
### Trial

These are worth pursuing on new projects. There may still be rough edges, but the value is clear enough to justify the investment. Expect a learning curve.
### Assess
Technologies we find interesting enough to invest research time in. We haven't used them enough to have a strong opinion, but we think they're worth watching.
### Hold
We recommend caution here. This ring doesn't always mean "never use" — sometimes it means "don't start new projects with this" or "be aware of the risks."