Technology Radar

Vibe Coding

workflow, agent
This item was not updated in the last three editions of the Radar. If it appeared in one of the more recent editions, it likely remains relevant; if it dates back further, its relevance may have diminished and our assessment today might differ. Unfortunately, we don't have the capacity to continuously revisit items from past Radar editions.
Trial

Vibe coding — coined by Andrej Karpathy in February 2025 — has moved from a niche concept to a widely recognized workflow pattern. It's now mainstream enough for Trial, though the caveats about production use remain.

Why It Moved from Assess to Trial

In the year since Karpathy's original post, vibe coding has gone from a meme to an established practice:

  • Mainstream adoption: Shopify CEO Tobi Lutke mandated that teams demonstrate a task can't be done with AI before requesting additional headcount. This isn't vibe coding per se, but it reflects the same "AI-first" workflow philosophy.
  • Tool maturity: Cursor's agent mode, Claude Code, and Vercel's v0 have made the "describe and iterate" workflow genuinely productive for a wide range of tasks.
  • Karpathy's own evolution: In February 2026, Karpathy coined "agent engineering" as the professional counterpart — acknowledging that the production version of vibe coding requires engineering discipline.
  • Industry backlash provides balance: Rachel Andrew's "Do You Need AI for That?" and Dave Rupert's "People Are Not Friction" offer healthy pushback against over-applying the pattern.

Where It Works Brilliantly

  • Prototyping: Idea to working demo in hours
  • Personal projects and internal tools: Where the code-quality bar is lower
  • Unfamiliar territory: Quickly exploring a new framework or language
  • Boilerplate: Generating repetitive scaffolding

Where It Falls Short

  • The productivity paradox: A METR randomized controlled trial (July 2025) found experienced OSS developers were actually 19% slower with AI coding tools, despite believing they were 20% faster. Perception and reality diverge.
  • Security concerns: CodeRabbit found AI-coauthored code has 1.7x more major issues; Veracode found 45% of AI-generated code introduces security vulnerabilities.
  • Production systems: AI-generated code often lacks proper error handling, observability, and security hardening.
  • Complex domains: Financial systems, real-time systems, and complex state machines need engineering expertise.
  • Maintenance: Code you don't understand is hard to maintain and debug.

The Balanced Approach

The most effective pattern is agent engineering — vibe coding's professional counterpart:

  1. Use AI to generate a first draft
  2. Read and understand the generated code
  3. Refine it with your domain knowledge and engineering judgment
  4. Write or review tests to verify correctness
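The loop above can be sketched in miniature. The `slugify` function below is a hypothetical stand-in for an AI-generated first draft (it is not from the original text); the comment marks a refinement added during human review, and the assertions are the engineer-written verification pass from step 4.

```python
# Steps 2-4 of the agent-engineering loop, using a hypothetical
# AI-drafted helper as the example.
import re

def slugify(title: str) -> str:
    """AI-drafted first pass: lowercase, replace non-alphanumeric runs with hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")  # refinement added after human review: trim stray hyphens

# Step 4: tests written by the reviewing engineer to pin down expected behaviour.
assert slugify("Vibe Coding!") == "vibe-coding"
assert slugify("  Agent Engineering  ") == "agent-engineering"
assert slugify("---") == ""  # edge case the unreviewed draft would have missed
```

The point is not the function itself but the division of labour: the model produces the draft quickly, while the human supplies the edge cases and the judgment about what "correct" means.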

Key Characteristics

  • Coined by: Andrej Karpathy (Feb 2025)
  • Professional counterpart: Agent engineering (Karpathy, Feb 2026)
  • Best for: Prototyping, personal projects, exploration
  • Risk area: Production code without human review

Security Radar

The security implications of vibe coding are covered in depth on the Security radar — particularly the AI-generated code vulnerability statistics and what tooling can help catch issues before they ship:

  • OWASP LLM Top 10 — the canonical list of AI-specific risks, including prompt injection (#1) and insecure output handling
  • Pre-commit Security Hooks — automated checks that run before every commit, catching common AI-generated vulnerabilities at the source
  • Semgrep / Snyk Code — SAST tools well-suited to scanning AI-generated code at scale
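As one way to wire the pre-commit checks and SAST scanning together, a `.pre-commit-config.yaml` along these lines runs Semgrep before every commit. The `rev` pin below is an illustrative assumption; check the Semgrep pre-commit repository for a current release tag.

```yaml
# Sketch: run Semgrep as a pre-commit hook on staged files.
repos:
  - repo: https://github.com/semgrep/pre-commit
    rev: v1.99.0  # assumed version; pin to a real release tag
    hooks:
      - id: semgrep
        args: ["--config", "auto", "--error"]  # fail the commit on findings
```

With `--error`, any finding blocks the commit, which is the behaviour you want when the code being committed may be AI-generated and unreviewed.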

Further Reading