Prompt Engineering
Prompt engineering is the practice of crafting inputs to AI models to get better, more reliable outputs. For software engineers, it's the most immediately valuable skill to develop when starting with AI tools. By 2025, though, the field has expanded into what Andrej Karpathy popularized as "context engineering": deciding exactly what goes into the model's working memory at each step of a workflow.
Why It's in Adopt
You don't need a PhD in machine learning to write better prompts. A few key principles dramatically improve results, whether you're prompting interactively or building production systems.
Core Techniques
1. Be specific about what you want
Instead of: "Fix this code"
Write: "This Python function is throwing a KeyError when the input dict doesn't have a 'user_id' key. Add input validation that raises a ValueError with a helpful message."
2. Provide context
- Tell the model what language/framework you're using
- Describe the constraints (performance requirements, coding style, existing patterns)
- Share relevant error messages or test failures
3. Give the model a role
"You are a senior backend engineer reviewing this Python service for security vulnerabilities. Point out any issues and explain the risk of each."
4. Ask for chain-of-thought
"Think step by step before writing any code." This significantly reduces errors on complex, multi-step tasks.
5. Provide few-shot examples
Show the model one or two examples of the input/output format you expect before giving it the real task. This is one of the highest-ROI techniques available and is consistently recommended by Anthropic, Google, and OpenAI.
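In chat-based APIs, few-shot examples are usually expressed as prior user/assistant turns rather than pasted into one big string. A minimal sketch; the helper name, example pairs, and classification task are invented for illustration:

```python
def build_few_shot_messages(examples, task):
    """Assemble a chat-style message list: each (input, output) example
    pair becomes a user/assistant turn, followed by the real task."""
    messages = []
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": task})
    return messages

# Two examples showing the expected output format, then the real task.
examples = [
    ("Summarize: The deploy failed because the DB migration timed out.",
     '{"component": "database", "severity": "high"}'),
    ("Summarize: Button color is slightly off on the settings page.",
     '{"component": "frontend", "severity": "low"}'),
]
messages = build_few_shot_messages(
    examples,
    "Summarize: Users report intermittent 502s from the API gateway.",
)
```

Because the model sees the format demonstrated twice before the real task, it tends to mirror it, which is often more reliable than describing the format in prose.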
6. Iterate, don't give up after one try
If the first response isn't quite right, describe what's wrong and ask for a revised version. Multi-turn conversations consistently outperform single-shot prompts.
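Mechanically, iterating just means extending the conversation history rather than starting over: append the model's reply, then a turn describing what to change. A minimal sketch with an invented helper and a truncated placeholder reply:

```python
def add_revision_request(messages, model_reply, feedback):
    """Continue the conversation: record the model's reply, then describe
    what's wrong and ask for a revision instead of re-prompting from scratch."""
    return messages + [
        {"role": "assistant", "content": model_reply},
        {"role": "user", "content": f"Close, but: {feedback} Please revise."},
    ]

history = [{"role": "user", "content": "Write a retry decorator in Python."}]
history = add_revision_request(
    history,
    "def retry(fn): ...",  # placeholder for the model's first attempt
    "it should use exponential backoff and give up after 5 attempts.",
)
```

Keeping the failed attempt in the history gives the model something concrete to revise, which is why multi-turn refinement tends to beat rewriting the original prompt from scratch.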
Production Engineering Considerations
When building AI-powered systems rather than prompting interactively:
- Structured outputs: Modern APIs (Anthropic, OpenAI) can enforce JSON schemas at the token generation level via constrained decoding — prompting for structured outputs is now a core production practice, not a workaround.
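Even when the API enforces a schema at generation time, production code should still validate what comes back before it flows downstream. A minimal client-side sketch using only the standard library; the schema and field names are invented for a hypothetical code-review extraction task:

```python
import json

# Invented schema: the keys and types we expect in the model's JSON output.
REQUIRED_FIELDS = {
    "file": str,
    "line": int,
    "severity": str,
    "message": str,
}

def parse_review_finding(raw: str) -> dict:
    """Parse the model's reply and fail loudly if it drifts off-schema,
    rather than letting malformed output into the rest of the system."""
    finding = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in finding:
            raise ValueError(f"missing field: {field}")
        if not isinstance(finding[field], expected_type):
            raise ValueError(f"{field} should be {expected_type.__name__}")
    return finding

reply = '{"file": "auth.py", "line": 42, "severity": "high", "message": "token not checked"}'
finding = parse_review_finding(reply)
```

In a real system you would typically pair this with the provider's schema-enforcement feature and retry (or route to a fallback) on validation failure.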
- Prompt caching: Anthropic's prompt caching can reduce costs by up to 90% and latency by up to 85%, but requires structuring prompts with static content first. How you write a prompt affects whether caching applies.
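Concretely, "static content first" means putting the large, unchanging material (style guides, codebase docs) at the front of the request and marking the cache boundary after it, so only the short per-request turn varies. A sketch of the request shape based on Anthropic's Messages API at the time of writing; the model name and content are assumptions, so check the current docs:

```python
def build_cacheable_request(static_context: str, user_question: str) -> dict:
    """Place the unchanging context first and mark it for caching;
    repeated calls with the same prefix can then reuse the cache."""
    return {
        "model": "claude-sonnet-4-5",  # assumed model name for illustration
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": static_context,
                "cache_control": {"type": "ephemeral"},  # cache boundary
            }
        ],
        "messages": [{"role": "user", "content": user_question}],
    }

request = build_cacheable_request(
    static_context="You are reviewing code for Acme Corp. Style guide: ...",
    user_question="Review this diff for SQL injection risks.",
)
```

If the variable content came first, every request would have a different prefix and the cache would never hit, which is why prompt structure directly determines whether caching applies.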
- Prompt versioning and testing: Treat prompts as production code. Tools like Promptfoo let you test prompts in CI/CD and catch regressions as models update.
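The same idea works without any dedicated tooling: version prompt templates explicitly and assert invariants on the rendered prompt in CI, so an edit that silently drops a critical instruction fails the build. A minimal sketch with invented template names and checks:

```python
# Versioned prompt templates: edits get a new version, so regressions
# can be bisected and old behavior reproduced exactly.
PROMPTS = {
    "summarize_ticket.v1": "Summarize this support ticket: {ticket}",
    "summarize_ticket.v2": (
        "Summarize this support ticket in exactly three bullet points.\n"
        "Do not include customer names or email addresses.\n\n"
        "Ticket: {ticket}"
    ),
}

def render(name: str, **variables) -> str:
    return PROMPTS[name].format(**variables)

# CI-style regression checks on the current version.
prompt = render("summarize_ticket.v2", ticket="App crashes on login.")
assert "three bullet points" in prompt   # output-format instruction intact
assert "customer names" in prompt        # privacy instruction intact
```

Tools like Promptfoo extend this pattern by also scoring live model outputs against expectations, which catches regressions caused by model updates as well as prompt edits.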
Newcomer Tip
The biggest mistake new users make is writing vague, short prompts and then being disappointed with vague, short answers. The more context you give, the better the model performs.
Further Learning
Vendor guides (good starting points):
- Anthropic Prompt Engineering Guide
- OpenAI Prompt Engineering Guide
- GitHub's Prompt Engineering Guide for LLMs — practical patterns from 2+ years of Copilot development
From engineering teams in production:
- Stripe: Minions — One-Shot Coding Agents — Stripe's principle: "investing effort into what goes into the prompt yields far better returns than investing in how many times the model reasons"; covers context assembly pipelines at scale
- Spotify's Background Coding Agent ("Honk") series — a three-part deep dive into context engineering, running agents at scale, and feedback loops: Part 1 · Part 2 · Part 3
- Vercel: How We Made v0 an Effective Coding Agent — covers iterative prompt refinement and why system prompts are "the most powerful tool for steering the model"
- LangChain: Context Engineering for Agents — on structuring context for multi-step agentic workflows
Foundational reference (2023):
- Lilian Weng — Prompt Engineering — comprehensive deep-dive covering chain-of-thought, few-shot, self-consistency, and reasoning strategies; predates agentic patterns but remains the most cited foundational reference