Technology Radar

Prompt Engineering

workflow
This item has not been updated in the last three editions of the Radar. If it appeared in one of the more recent editions, there is a good chance it remains relevant. If it dates back further, its relevance may have faded and our current assessment could differ. Unfortunately, we lack the capacity to consistently revisit items from past Radar editions.
Adopt

Prompt engineering is the practice of crafting inputs to AI models to get better, more reliable outputs. For software engineers, it's the most immediately valuable skill to develop when starting with AI tools — though by 2025 the field has expanded into what Andrej Karpathy termed "context engineering": deciding exactly what goes into the model's working memory at each step of a workflow.

Why It's in Adopt

You don't need a PhD in machine learning to write better prompts. A few key principles dramatically improve results, whether you're prompting interactively or building production systems.

Core Techniques

1. Be specific about what you want

Instead of: "Fix this code"

Write: "This Python function is throwing a KeyError when the input dict doesn't have a 'user_id' key. Add input validation that raises a ValueError with a helpful message."
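One way to make specificity a habit is to template it. This is a minimal sketch; the helper name and fields are illustrative, not part of any API:

```python
# Hypothetical helper that forces a prompt to name the language, the exact
# error, and the desired fix, rather than a vague "fix this code".
def build_bug_prompt(language: str, error: str, fix_request: str) -> str:
    return (
        f"This {language} function is throwing {error}. "
        f"{fix_request} "
        "Explain the change briefly."
    )

prompt = build_bug_prompt(
    language="Python",
    error="a KeyError when the input dict lacks a 'user_id' key",
    fix_request=(
        "Add input validation that raises a ValueError "
        "with a helpful message."
    ),
)
```

The point is not the helper itself but the checklist it encodes: language, symptom, desired behavior.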

2. Provide context

  • Tell the model what language/framework you're using
  • Describe the constraints (performance requirements, coding style, existing patterns)
  • Share relevant error messages or test failures

3. Give the model a role

"You are a senior backend engineer reviewing this Python service for security vulnerabilities. Point out any issues and explain the risk of each."

4. Ask for chain-of-thought

"Think step by step before writing any code." This reduces errors significantly on complex tasks.

5. Provide few-shot examples

Show the model one or two examples of the input/output format you expect before giving it the real task. This is one of the highest-ROI techniques available and is consistently recommended by Anthropic, Google, and OpenAI.
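In chat-style APIs, few-shot examples are usually expressed as alternating user/assistant turns before the real task. A sketch, using the role/content message shape common across major chat APIs (the example content is made up):

```python
# Two worked examples establish the format; the real task comes last,
# phrased identically to the examples.
messages = [
    {"role": "user", "content": "Summarize: The deploy failed because the DB migration timed out."},
    {"role": "assistant", "content": "Deploy failed: DB migration timeout."},
    {"role": "user", "content": "Summarize: Users report login errors after the cache was flushed."},
    {"role": "assistant", "content": "Login errors after cache flush."},
    # Real task, same format as the examples above.
    {"role": "user", "content": "Summarize: The cron job double-charged 14 customers on retry."},
]
```

The model infers the expected length and style from the assistant turns, so pick examples that look exactly like the output you want.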

6. Iterate, don't give up after one try

If the first response isn't quite right, describe what's wrong and ask for a revised version. Multi-turn conversations consistently outperform single-shot prompts.
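The refinement loop can be sketched as a small function. `call_model` here stands in for any chat-completion call; the function only manages the message history:

```python
# Multi-turn refinement: get a reply, append it plus your feedback to the
# history, and resend the whole conversation. `call_model` is a stand-in
# for any chat API that maps a message list to a text reply.
def refine(call_model, history, feedback):
    reply = call_model(history)
    history = history + [
        {"role": "assistant", "content": reply},
        {"role": "user", "content": feedback},
    ]
    return call_model(history), history
```

Because the full history is resent each turn, the model sees both its earlier attempt and your critique of it, which is what makes the revision targeted rather than a fresh guess.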

Production Engineering Considerations

When building AI-powered systems rather than prompting interactively:

  • Structured outputs: Modern APIs (Anthropic, OpenAI) can enforce JSON schemas at the token generation level via constrained decoding — prompting for structured outputs is now a core production practice, not a workaround.
  • Prompt caching: Anthropic's prompt caching can reduce costs by up to 90% and latency by up to 85%, but requires structuring prompts with static content first. How you write a prompt affects whether caching applies.
  • Prompt versioning and testing: Treat prompts as production code. Tools like Promptfoo let you test prompts in CI/CD and catch regressions as models update.
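Even when the API enforces a schema at generation time, it pays to validate the parsed output in your own code before acting on it. A minimal sketch, using stdlib `json` and a hypothetical review schema (field names are illustrative):

```python
import json

# JSON schema describing the structured output we want back from the model.
# Field names are illustrative, not tied to any specific vendor API.
REVIEW_SCHEMA = {
    "type": "object",
    "required": ["severity", "summary"],
    "properties": {
        "severity": {"enum": ["low", "medium", "high"]},
        "summary": {"type": "string"},
    },
}

def validate_review(raw: str) -> dict:
    """Parse model output and check required fields against the schema."""
    data = json.loads(raw)
    for field in REVIEW_SCHEMA["required"]:
        if field not in data:
            raise ValueError(f"missing required field: {field}")
    allowed = REVIEW_SCHEMA["properties"]["severity"]["enum"]
    if data["severity"] not in allowed:
        raise ValueError(f"unexpected severity: {data['severity']}")
    return data
```

This kind of check is also what a prompt-testing tool asserts against in CI: same schema, run over many inputs, flagging regressions when a model update changes the output shape.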

Newcomer Tip

The biggest mistake new users make is writing vague, short prompts and then being disappointed with vague, short answers. The more context you give, the better the model performs.

Further Learning

Foundational reference (2023):

  • Lilian Weng — Prompt Engineering — comprehensive deep-dive covering chain-of-thought, few-shot, self-consistency, and reasoning strategies; predates agentic patterns but remains the most cited foundational reference