Technology Radar
This item was not updated in the last three editions of the Radar. If it appeared in one of the more recent editions, there is a good chance it is still relevant; if it dates back further, its relevance may have diminished and our current assessment could differ. Unfortunately, our capacity to consistently revisit items from past Radar editions is limited.
Trial

The OpenAI Agents SDK is the production-ready, officially supported framework for building multi-agent systems using OpenAI's models — replacing the experimental Swarm project. It ships with built-in tracing, guardrails, handoffs, and sessions, making it the most complete out-of-the-box agentic toolkit for teams already in the OpenAI ecosystem.

Buy vs Build

This is a build framework, but with significant buy elements: tracing dashboards, guardrails infrastructure, and session management are provided rather than built. For OpenAI-committed teams, the operational overhead is low.

Why It's in Trial

Launched in March 2025 as the direct successor to the experimental Swarm project, the Agents SDK quickly passed 19K GitHub stars and is the recommended path for anyone who was considering Swarm for production.

Five core primitives:

  • Agents: an LLM with instructions, tools, and a role
  • Handoffs: one agent delegates to another agent better suited to the next step
  • Guardrails: input/output validation that runs in parallel with agent execution and fails fast on violations
  • Sessions: persistent memory within an agent loop; context survives across turns
  • Tracing: built in and enabled by default; captures every LLM call, tool use, handoff, and custom event
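The guardrail primitive is worth pausing on, since "fails fast" is the key property. Here is an SDK-independent sketch of that contract in plain Python; every name in it (GuardrailTripped, run_with_guardrails, the mock agent) is illustrative and not the Agents SDK API. Note the real SDK runs input guardrails in parallel with the agent's first turn, while this sketch runs them sequentially to keep the idea visible.

```python
# Sketch of the guardrail fail-fast contract: validate input before the
# (expensive) agent turn and validate output before it reaches the user.
# All names are illustrative, not the Agents SDK API.

class GuardrailTripped(Exception):
    """Raised when a validation rule rejects an input or output."""

def input_guardrail(text: str) -> None:
    # Reject out-of-scope or sensitive requests before spending tokens.
    if "credit card" in text.lower():
        raise GuardrailTripped("possible sensitive data in input")

def output_guardrail(text: str) -> None:
    if len(text) > 2000:
        raise GuardrailTripped("response exceeds length budget")

def run_with_guardrails(agent_step, user_input: str) -> str:
    input_guardrail(user_input)       # fail fast, before the agent runs
    result = agent_step(user_input)   # the actual agent turn (mocked here)
    output_guardrail(result)          # validate before returning to the user
    return result

# Mock agent step standing in for an LLM call.
echo_agent = lambda text: f"Handled: {text}"
print(run_with_guardrails(echo_agent, "Write a URL validator"))
```

The benefit is economic as well as defensive: a tripped input guardrail means the model never runs at all.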

The Swarm Architecture Pattern Explained

In a swarm pattern, you don't have one giant agent trying to do everything. Instead you have specialised agents that handle different concerns, and hand off control to each other:

User message
    → Triage agent (decides what kind of request this is)
        → Code agent (writes the implementation)
        → Review agent (checks the code)
        → Test agent (generates tests)
    → Final response

Each agent only sees the context relevant to its job. Errors in one agent don't cascade to others. Guardrails on each agent's inputs and outputs catch problems early.
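The flow above can be sketched without the SDK at all. In this minimal sketch each "agent" is a plain function and triage picks which specialist owns the next step; the function names and routing rule are illustrative, not SDK API.

```python
# SDK-independent sketch of the swarm/handoff pattern: triage decides,
# then control passes to a specialist that sees only its own request.

def code_agent(request: str) -> str:
    # Stand-in for an agent that writes the implementation.
    return f"[code] drafted implementation for: {request}"

def review_agent(request: str) -> str:
    # Stand-in for an agent that checks existing code.
    return f"[review] checked: {request}"

def triage_agent(request: str):
    # Decide which specialist should own the next step (toy routing rule).
    if "review" in request.lower():
        return review_agent
    return code_agent

def handle(request: str) -> str:
    specialist = triage_agent(request)   # handoff: triage delegates
    return specialist(request)           # specialist handles its own concern

print(handle("Write a URL validator"))
print(handle("Please review my parser"))
```

In the real SDK the routing decision is made by the triage agent's LLM choosing among its declared handoffs, not by a keyword check, but the control-flow shape is the same.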

When to Choose This vs LangGraph/CrewAI

  • OpenAI Agents SDK: Best if you're deeply committed to OpenAI models and want built-in production infrastructure (tracing, guardrails) with minimal setup.
  • LangGraph: Better if you need fine-grained graph-based flow control, want model-agnostic freedom, or have complex branching logic.
  • CrewAI: Better if your use case maps to a team of roles and you want to get started quickly with less code.

Provider Agnostic Note

Although built by OpenAI, the SDK supports 100+ LLMs through providers that implement the Chat Completions API standard. It is not limited to OpenAI models.
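This interoperability works because many providers accept the same Chat Completions wire format, so swapping models largely means swapping a base URL and key. The sketch below builds that shared request shape; the model name is illustrative.

```python
# The Chat Completions request shape shared across compatible providers.
# The model name below is illustrative; substitute your provider's model ID.

import json

def chat_completions_request(model: str, user_message: str) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

payload = chat_completions_request("llama-3.1-70b", "Hello")
print(json.dumps(payload, indent=2))
```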

Getting Started

pip install openai-agents

import asyncio

from agents import Agent, Runner

# Specialist agent that writes the implementation.
coder = Agent(name="Coder", instructions="Write clean Python code.")

# Triage agent routes requests to specialists via handoffs.
triage = Agent(
    name="Triage",
    instructions="Route requests to the right specialist.",
    handoffs=[coder],
)

async def main() -> None:
    result = await Runner.run(triage, input="Write a URL validator function")
    print(result.final_output)

asyncio.run(main())

Key Characteristics

  • Language: Python and TypeScript
  • Built-in: tracing, guardrails, sessions, handoffs
  • Model support: OpenAI models, plus 100+ others via the Chat Completions API
  • Licence: MIT
  • Provider: OpenAI
  • GitHub: openai/openai-agents-python
  • Docs: openai.github.io/openai-agents-python

Further Reading