Technology Radar
Trial

Qwen (Alibaba Cloud) is one of the most actively iterated open-weight model families in AI -- evolving from Qwen 2.5 (September 2024) through Qwen 3 to Qwen 3.5 (February 2026) in under a year and a half, with sizes spanning 0.5B to 235B parameters, Apache 2.0 licensing, and the broadest multilingual support (29+ languages) of any open-weight family.

Why It's in Trial

Qwen earns Trial through breadth, velocity, and ecosystem integration:

  • Rapid iteration: Qwen 2.5 (Sep 2024) -> Qwen 3 (Apr 2025) -> Qwen 3-Max (Sep 2025) -> Qwen 3.5 (Feb 2026) -- faster release cadence than any other open-weight family
  • Broadest size range: From 0.5B (edge/mobile) to 235B MoE -- no other family covers this full spectrum
  • Strong coding: Qwen 2.5-Coder-32B scored 37.2% on LiveCodeBench (beat GPT-4o's 29.2%) and 36.5% on SWE-bench Verified
  • Apache 2.0 licensing for most variants -- unrestricted commercial use
  • Foundation for R1 distillation: DeepSeek chose Qwen 2.5 as the base for several R1-Distill variants (1.5B, 7B, 32B), validating its architecture quality
  • Ecosystem presence: Referenced across this radar -- Ollama (tool calling support), Groq, Hugging Face, ACP (Qwen Code agent), Lemonade Server
  • 29+ languages: Strongest multilingual coverage in the open-weight tier
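The Ollama tool-calling support mentioned above is typically exercised through Ollama's OpenAI-compatible chat endpoint. The sketch below only constructs the request payload; the model tag (`qwen2.5:7b`) and the `get_weather` tool are illustrative assumptions, not part of any official example.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible function schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body for Ollama's OpenAI-compatible endpoint
# (http://localhost:11434/v1/chat/completions); the model tag is an assumption.
payload = {
    "model": "qwen2.5:7b",
    "messages": [{"role": "user", "content": "What's the weather in Hangzhou?"}],
    "tools": [weather_tool],
}

body = json.dumps(payload)
```

POSTing `body` to a running Ollama instance lets the model decide whether to emit a `tool_calls` response rather than plain text.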

The Model Family

| Model | Parameters | Release | Strength |
|---|---|---|---|
| Qwen 3.5 | Open weights | Feb 2026 | Latest generation |
| Qwen 3 | Dense: 0.6B-32B; sparse: 30B (3B active), 235B (22B active) | Apr 2025 | Apache 2.0, dense + sparse |
| Qwen 3-Max | | Sep 2025 | API flagship |
| Qwen 3-Max-Thinking | | Jan 2026 | Reasoning variant |
| Qwen 2.5-Coder-32B | 32B | Nov 2024 | Code specialist, 37.2% LiveCodeBench |
| Qwen 2.5-Max | 236B MoE (57B active) | Jan 2025 | 90% HumanEval, $2/M tokens |
| Qwen 2.5-Omni | Multimodal | 2025 | Text, images, audio, video + speech synthesis |
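As a quick sanity check on the Qwen 2.5-Max pricing in the table, a sketch of per-request cost at $2 per million tokens (the token count below is made up for illustration):

```python
PRICE_PER_MILLION = 2.00  # USD per million tokens, Qwen 2.5-Max figure above

def request_cost(tokens: int) -> float:
    """Cost in USD for a request consuming `tokens` tokens."""
    return tokens / 1_000_000 * PRICE_PER_MILLION

# A 4,000-token prompt + completion costs well under a cent.
cost = request_cost(4_000)  # 0.008 USD
```

At this rate, a million-token workload (roughly a day of moderate agent use) lands at about $2, which is the figure that makes Qwen 2.5-Max competitive on price.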

Coding Capabilities

Qwen 2.5-Coder is a standout in the family, trained on 5.5 trillion tokens (45% code, 55% natural language) across 92 programming languages:

| Benchmark | Qwen 2.5-Coder-32B | GPT-4o | Claude 3.5 Sonnet |
|---|---|---|---|
| LiveCodeBench | 37.2% | 29.2% | |
| SWE-bench Verified | 36.5% | 23.6% | 33.4% |
| McEval (40+ languages) | 65.9 | | |

Qwen Code Agent

Qwen Code is listed in the ACP (Agent Client Protocol) registry with native support -- see the Agent Client Protocol entry. This positions Qwen alongside agents such as Kimi CLI and Mistral Vibe in the ACP ecosystem, and makes it one of the Chinese-origin coding agents with first-class ACP support.
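ACP clients and agents exchange JSON-RPC 2.0 messages over the agent's stdio. The sketch below frames an illustrative `initialize` request; everything beyond the JSON-RPC envelope (the method name, the `protocolVersion` param, newline-delimited framing) is an assumption for illustration -- consult the Agent Client Protocol spec for the real schema.

```python
import json

# Illustrative JSON-RPC 2.0 envelope of the kind an ACP client would send
# to an agent such as Qwen Code. Field names inside "params" are assumed.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"protocolVersion": 1},
}

# Newline-delimited framing over the agent's stdin (an assumption here).
wire = json.dumps(request) + "\n"
```

The point of the protocol is exactly this decoupling: any editor that speaks the client side can drive any registered agent, which is why Qwen Code's registry listing matters.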

Cautions

  • Same data sovereignty considerations as other Chinese-origin models (Alibaba Cloud)
  • The 3B and 72B variants use a Qwen-specific license rather than Apache 2.0 -- check licensing per variant
  • Benchmark results for Qwen 2.5-Coder predate the current frontier (Claude Opus 4.6, GPT-5.4) -- newer Qwen models may narrow the gap but independent verification is limited

Key Characteristics

| Property | Value |
|---|---|
| Size range | 0.5B to 235B (MoE) |
| Latest generation | Qwen 3.5 (February 2026) |
| License | Apache 2.0 (most variants) |
| Languages | 29+ natural languages, 92 programming languages |
| Provider | Alibaba Cloud |
| Weights | Hugging Face: Qwen |

Further Reading