GPT-5.4, released March 5, 2026, is OpenAI's current flagship model — unifying its general-purpose and coding model lines into a single frontier offering with native computer use, configurable reasoning effort, and a 1M-token context window at competitive pricing.
Architecture Deep Dive → GPT-5.4 Architecture Breakdown — unified model pipeline, parameterized reasoning effort, context compaction design, native computer use implementation, and comparison with Claude Opus 4.6.
Why It's in Adopt
GPT-5.4 represents a decisive step forward for teams in the OpenAI ecosystem. Key improvements over the GPT-5.2 line:
Unified Codex + GPT: GPT-5.4 incorporates the specialized coding capabilities of GPT-5.3-Codex into the general model — one API, one billing line, top-tier coding.
Native computer use: The first general-purpose OpenAI model with native GUI automation — agents can operate browsers, desktop apps, and complex multi-application workflows.
Configurable reasoning: Set effort to none / low / medium / high / xhigh — pay only for the thinking the task needs.
Improved factuality: 33% fewer factual errors vs. GPT-5.2.
Tool search: GPT-5.4 finds and uses the right tool from large tool ecosystems more reliably than its predecessors.
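The configurable-effort dial above is set per request. A minimal sketch of building such a request payload, assuming the effort value is passed via a `reasoning.effort` field in the style of OpenAI's Responses API — the `gpt-5.4` model identifier and the exact set of accepted effort strings are taken from this write-up, not from official API documentation:

```python
# Sketch: choosing a reasoning-effort level per task and building the
# request payload. Field layout follows the Responses API convention
# (reasoning.effort); model name and effort values are assumptions.

EFFORT_LEVELS = ("none", "low", "medium", "high", "xhigh")

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Return request kwargs with the chosen reasoning effort."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"effort must be one of {EFFORT_LEVELS}")
    return {
        "model": "gpt-5.4",              # assumed model identifier
        "input": prompt,
        "reasoning": {"effort": effort},
    }

# A trivial formatting task needs no chain-of-thought; a hard proof does.
cheap = build_request("Reformat this JSON.", effort="none")
hard = build_request("Prove the lemma in section 3.", effort="xhigh")
```

The practical point is cost control: routing routine sub-tasks at `none`/`low` and reserving `xhigh` for genuinely hard problems means you pay only for the thinking each task needs.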
Performance
SWE-bench Verified: 74.9% (resolving real GitHub issues)
GPQA Diamond: 92.8% (expert-level science)
AIME 2026: leading scores
On SWE-bench Verified, GPT-5.4's 74.9% trails Grok 4.2 (75%) by a hair while leading Claude Opus 4.6.
Context Window & Output
1M-token context window (1.05M total: 922K input + 128K output)
Prompts over 272K tokens incur a 2× surcharge (plan accordingly for large-codebase ingestion)
GPT-5.4 mini and nano (released March 17, 2026) bring the same architecture to fast, cheap sub-tasks
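The 272K surcharge threshold is worth budgeting for explicitly. A quick cost-estimation sketch, under two assumptions not stated in the text above: that the 2× surcharge applies to the entire prompt once it crosses the threshold (rather than only to the excess tokens), and that the per-token rate shown is a placeholder, not official pricing:

```python
# Sketch: estimating input cost around the 272K-token surcharge threshold.
# Assumptions: the 2x surcharge applies to the ENTIRE prompt once it
# exceeds the threshold, and the rate argument is a placeholder price.

SURCHARGE_THRESHOLD = 272_000  # tokens, per the pricing note above
SURCHARGE_FACTOR = 2.0

def input_cost(tokens: int, rate_per_token: float) -> float:
    """Estimated input cost in dollars; 2x once the prompt exceeds the threshold."""
    factor = SURCHARGE_FACTOR if tokens > SURCHARGE_THRESHOLD else 1.0
    return tokens * rate_per_token * factor

rate = 1.25e-6  # placeholder: $1.25 per 1M input tokens (not official)
print(input_cost(200_000, rate))  # below threshold: billed at 1x
print(input_cost(400_000, rate))  # above threshold: billed at 2x
```

For large-codebase ingestion this argues for chunking: two 250K-token prompts can cost materially less than one 500K-token prompt that triggers the surcharge.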