About This Tech Radar
What Is This?
This is a personal tech radar for AI-powered software engineering — a living, opinionated snapshot of the tools, models, editors, and techniques that matter right now for developers who want to harness AI effectively.
The radar is inspired by the ThoughtWorks Technology Radar and built with AOE Technology Radar.
How to Read the Radar
The radar is divided into four quadrants, each covering a different category of the AI-for-engineering landscape:
| Quadrant | What It Covers |
|---|---|
| AI-Powered IDEs & Editors | Code editors with AI baked in at a deep level |
| AI Coding Tools & Agents | Plugins, CLI tools, and autonomous agents for coding tasks |
| Techniques & Practices | How to work effectively with AI — workflows, patterns, and pitfalls |
| AI Infrastructure & Platforms | Services and tools for running, storing, and monitoring AI systems |
Looking for AI models? The underlying LLMs (Claude, GPT-4o, Gemini, Llama, and others) now have their own dedicated radar — see the AI Models & Benchmarks radar.
Each item on the radar sits in one of four rings:
| Ring | Meaning |
|---|---|
| Adopt | Proven and strongly recommended. Use these today. |
| Trial | Worth using on real projects. Ready but not yet standard. |
| Assess | Explore and evaluate. Keep an eye on these — they may be important soon. |
| Hold | Approach with caution. These may be superseded, risky, or not yet mature enough. |
Key Concepts for Newcomers
What Is an LLM?
A Large Language Model (LLM) is an AI system trained on vast amounts of text that can understand and generate human-readable content — including code. Models like Claude, GPT-4, and Gemini are LLMs.
What Is a "Context Window"?
The context window is how much text an LLM can read at once (its working memory). Larger context windows let you feed in more of your codebase, documentation, or conversation history.
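The idea can be sketched in a few lines of code. This is a rough illustration, not any model's real API: actual tokenizers (such as OpenAI's tiktoken) count tokens exactly, while the four-characters-per-token figure below is only a common rule-of-thumb estimate.

```python
# Rough sketch: trimming conversation history to fit a context window.
# The ~4-characters-per-token estimate is a heuristic, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fit_to_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit the token budget."""
    kept: list[str] = []
    budget = max_tokens
    for message in reversed(messages):      # walk newest-first
        cost = estimate_tokens(message)
        if cost > budget:
            break                           # older history gets dropped
        kept.append(message)
        budget -= cost
    return list(reversed(kept))             # restore chronological order

history = ["a" * 400, "b" * 400, "c" * 400]    # ~100 estimated tokens each
print(fit_to_context(history, max_tokens=250)) # keeps only the two newest
```

This is also why "larger context window" matters in practice: with a bigger budget, less of your codebase or conversation falls off the front of the list.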
What Is "Vibe Coding"?
A term for an AI-assisted workflow where you describe what you want in plain language and let the AI generate most of the code, iterating conversationally. Great for prototyping; requires care in production.
What Is an AI Agent?
An AI agent is an LLM that can take actions autonomously — reading files, running commands, browsing the web — rather than just answering questions. Tools like Claude Code and Devin are examples.
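The loop underneath every such agent is simple: the model chooses an action, the harness executes it and feeds the result back, and this repeats until the model declares it is done. The sketch below uses a scripted stand-in for the model so it is self-contained — every name here (`scripted_model`, `TOOLS`, the `action`/`args` shape) is illustrative, not any real product's API.

```python
# Minimal agent loop: model picks a tool, harness runs it, result is fed
# back into the transcript, until the model signals completion.
# `scripted_model` is a fake stand-in for a real LLM API call.

def scripted_model(transcript: list[str]) -> dict:
    """Fake model: asks for a file listing once, then finishes."""
    if not any("tool_result" in line for line in transcript):
        return {"action": "list_files", "args": {}}
    return {"action": "done", "args": {"answer": "two files found"}}

TOOLS = {
    "list_files": lambda **_: "README.md\nmain.py",
}

def run_agent(model, max_steps: int = 5) -> str:
    transcript: list[str] = ["user: what files are here?"]
    for _ in range(max_steps):              # cap steps so the loop halts
        decision = model(transcript)
        if decision["action"] == "done":
            return decision["args"]["answer"]
        result = TOOLS[decision["action"]](**decision["args"])
        transcript.append(f"tool_result: {result}")  # feed result back
    return "step limit reached"

print(run_agent(scripted_model))
```

Real agents add many safeguards on top — permission prompts before running commands, sandboxing, step budgets — but the read-act-observe loop is the core of all of them.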
Caveats: Where AI-Assisted Engineering Goes Wrong
Agentic engineering is not all sunshine and roses. These are the most common pitfalls — worth knowing before you run into them.
Accepting AI-Generated Code Without Review
Every AI coding tool makes it trivially easy to accept a suggestion with a single keystroke. That speed is the point — but it can become a trap.
Real risks:
- Security vulnerabilities — LLMs can generate subtly insecure code: SQL injection, insufficient input validation, weak cryptographic choices. Studies have found AI-generated code has higher rates of security flaws when accepted without review.
- Correctness theatre — Code that compiles and passes tests isn't necessarily correct. AI handles happy paths well; edge cases not covered by tests are where it quietly fails.
- Maintenance debt — When a bug surfaces in code you don't understand, you won't be able to debug it effectively.
- Hallucinated APIs — AI tools occasionally import libraries that don't exist or call real methods with the wrong signatures.
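The SQL injection risk from the list above is easy to make concrete. Below, the "unsafe" version interpolates user input straight into the query string — a pattern AI tools will happily produce — while the parameterized version treats the same input as a literal value. The schema and data are invented for the demo.

```python
# SQL injection in miniature, using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name: str) -> list:
    # String interpolation into SQL: the classic injectable pattern.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str) -> list:
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "nobody' OR '1'='1"
print(find_user_unsafe(payload))  # injection succeeds: returns alice's row
print(find_user_safe(payload))    # treated as a literal: returns nothing
```

Both versions compile, both pass a happy-path test with a normal username, and only one of them is safe — which is exactly why "it works" is not the same as "it's reviewed".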
The rule of thumb: if you couldn't explain this code to a colleague in a code review, don't merge it. In practice:
- Read every line — if you don't understand a block, ask the AI to explain it before accepting
- Test edge cases yourself, not just the tests the AI writes
- Review security-sensitive code especially carefully: auth, input handling, cryptography
- You remain responsible for the code you ship
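"Test edge cases yourself" looks like this in practice. The helper below is a hypothetical stand-in for AI-generated code; the point is the split between the happy-path assertion an AI typically writes for itself and the extra cases a human reviewer should add.

```python
# A hypothetical AI-generated helper, plus reviewer-added edge-case checks.

def percentage(part: float, whole: float) -> float:
    """Return `part` as a percentage of `whole`."""
    if whole == 0:
        return 0.0          # the edge case a first draft often misses
    return 100.0 * part / whole

# Happy path — the kind of test AI tools tend to generate:
assert percentage(1, 4) == 25.0

# Edge cases a human reviewer should add:
assert percentage(0, 10) == 0.0    # zero numerator
assert percentage(5, 0) == 0.0     # zero denominator: no ZeroDivisionError
assert percentage(-1, 4) == -25.0  # negative input
```

Whether returning `0.0` for a zero denominator is even correct is itself a judgment call — which is the kind of question the review exists to surface.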
Over-Relying on AI for Architecture Decisions
AI tools are excellent at implementing well-defined tasks, but they have no knowledge of your system's history, constraints, or non-functional requirements. An AI that confidently proposes a new service boundary or data model may be optimising for the wrong thing entirely. Use AI to explore options and generate boilerplate — not to make architectural calls.
Context Window Blindness
AI coding agents only see what you give them. A suggestion made with only one file open may silently contradict patterns established elsewhere in the codebase. Always verify that AI-generated code is consistent with the broader system — especially naming conventions, error handling patterns, and data access layers.
Further Reading
Not everyone is enthusiastic about AI-assisted engineering, and the critical perspectives are worth reading:
- "Do You Need AI for That?" — Rachel Andrew, March 2026. A measured counterpoint to AI-for-everything thinking: "LLMs are a tool, like spreadsheets — useful for the right tasks, not a replacement for judgment."
- "People Are Not Friction" — Dave Rupert, March 2026. Pushes back on the framing that human review, collaboration, and disagreement are inefficiencies to be automated away. "People can have bad attitudes and wrong opinions… but people are not friction."
Contributing
This radar is stored as simple Markdown files in the radar/ directory. Each item is one .md file. You don't need to know JavaScript to contribute — just edit or add a Markdown file and open a pull request.
See the GitHub repository for details.
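For orientation, a radar item is a Markdown file with YAML frontmatter followed by a free-form description. The sketch below is hypothetical: the exact frontmatter fields and the valid `ring`/`quadrant` values depend on this radar's AOE Technology Radar configuration, so check an existing file in radar/ before copying it.

```markdown
---
title: "Example Tool"
ring: assess
quadrant: ai-coding-tools-agents
---

A short description of the tool: what it does, why it is on
the radar, and what would move it to a different ring.
```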
Radar Ring Definitions in Depth
Adopt
Technologies in this ring have proven themselves across a range of projects and teams. The risk of adopting them is low. If you're not using them yet, you should be.
Trial
These are worth pursuing on new projects. There may still be rough edges, but the value is clear enough to justify the investment. Expect some learning curve.
Assess
Technologies we find interesting enough to invest research time in. We haven't used them enough to have a strong opinion, but we think they're worth watching.
Hold
We recommend caution here. This ring doesn't always mean "never use" — sometimes it means "don't start new projects with this" or "be aware of the risks."