Technology Radar

Java AI Tech Radar


What Is This?

This is a tech radar for Java developers integrating AI — a focused, opinionated view of the frameworks, SDKs, runtimes, and patterns that matter right now for the JVM ecosystem.

It assumes you already write Java professionally and want to know what's worth your attention in the AI space. It does not explain what LLMs are.


How to Read the Radar

Four quadrants, each covering a different category of the Java AI landscape:

| Quadrant | What It Covers |
| --- | --- |
| AI Frameworks | Spring AI, LangChain4j, Quarkus LangChain4j, Micronaut AI, and similar |
| Model Clients & APIs | Java SDKs for OpenAI, Anthropic, Azure AI, Google, Bedrock, and more |
| Inference & Data | Local inference (DJL, ONNX), vector databases, ETL for AI data pipelines |
| Java Patterns | Virtual threads, structured concurrency, Records for AI output, and JVM-specific AI patterns |

Four rings:

| Ring | Meaning |
| --- | --- |
| Adopt | Proven and strongly recommended for Java teams. Use these today. |
| Trial | Worth using on real projects. Ready but not yet standard. |
| Assess | Explore and evaluate. Worth understanding even if you're not adopting yet. |
| Hold | Approach with caution — superseded, too risky, or not mature enough. |

Key Context

Spring AI vs LangChain4j

These are the two dominant Java AI frameworks. The choice matters:

  • Spring AI is the natural fit for Spring Boot shops — it follows familiar Spring patterns (auto-configuration, @Bean, @Autowired), has official backing from Broadcom/VMware, and reached GA in May 2025.
  • LangChain4j is framework-agnostic and more explicit — good for non-Spring projects, teams that want more control, or Quarkus/Micronaut shops via their native extensions.

They can coexist in the same codebase; many teams use Spring AI for the high-level abstractions and LangChain4j for specialised agent patterns.
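To make the stylistic difference concrete, here is a minimal sketch of the same call in both frameworks. This assumes Spring AI 1.x and a recent LangChain4j; the model instances (`chatModel`, `model`) would come from your own configuration, and the prompt text is illustrative:

```java
import org.springframework.ai.chat.client.ChatClient;
import dev.langchain4j.service.AiServices;

// Spring AI: a fluent ChatClient built from a ChatModel, which in a
// Spring Boot app is typically an auto-configured bean.
ChatClient chatClient = ChatClient.builder(chatModel).build();
String answer = chatClient.prompt()
        .user("Summarise this release note: " + note)
        .call()
        .content();

// LangChain4j: explicit and framework-agnostic. You declare an interface
// and AiServices generates an implementation backed by any supported model.
interface Assistant {
    String chat(String message);
}
Assistant assistant = AiServices.create(Assistant.class, model);
String reply = assistant.chat("Summarise this release note: " + note);
```

The Spring AI path leans on convention and container wiring; the LangChain4j path makes every dependency visible, which is why it travels well outside Spring.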

The "Train in Python, Deploy in Java" Pattern

Most ML training happens in Python (PyTorch, scikit-learn, Hugging Face). But to serve models inside a Java-based backend, you don't have to bring Python into production: train the model in Python, export it to ONNX, and load it in Java via DJL or ONNX Runtime. This lets each platform do what it's best at.
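The Java side of that pattern can be sketched with the ONNX Runtime Java API. The model path, the input name `"input"`, and the feature shape here are assumptions; they must match whatever the Python side exported:

```java
import java.util.Map;

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;

public class OnnxScorer {
    public static void main(String[] args) throws Exception {
        // The environment and session are heavyweight; in a real service
        // you would create the session once and reuse it across requests.
        try (OrtEnvironment env = OrtEnvironment.getEnvironment();
             OrtSession session = env.createSession("model.onnx",
                     new OrtSession.SessionOptions())) {

            // Input name and tensor shape depend on how the model was exported.
            float[][] features = {{0.1f, 0.2f, 0.3f, 0.4f}};
            try (OnnxTensor input = OnnxTensor.createTensor(env, features);
                 OrtSession.Result result = session.run(Map.of("input", input))) {
                float[][] scores = (float[][]) result.get(0).getValue();
                System.out.println("score: " + scores[0][0]);
            }
        }
    }
}
```

DJL offers a higher-level alternative (`Criteria`/`Predictor`) over the same ONNX engine if you prefer not to manage tensors by hand.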

Virtual Threads Are a Game-Changer for AI

Java 21's virtual threads (Project Loom) eliminate the reason to use reactive programming (WebFlux, RxJava) purely for concurrency. Making 10 concurrent LLM API calls is now as simple as spawning 10 threads — the JVM handles the scheduling. This removes a significant source of complexity in AI orchestration code.
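A self-contained sketch of that fan-out, using only the JDK. The `callModel` method is a stand-in for a blocking LLM client call (real code would do HTTP I/O here):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentLlmCalls {

    // Stand-in for a blocking LLM API call; a real client would block on HTTP I/O.
    static String callModel(String prompt) throws InterruptedException {
        Thread.sleep(100); // simulate network latency
        return "response to: " + prompt;
    }

    // One virtual thread per call. The JVM multiplexes virtual threads onto a
    // small pool of carrier threads, so blocking is cheap and all N calls
    // overlap without any reactive pipeline.
    static List<String> callAll(List<String> prompts) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = prompts.stream()
                    .map(p -> executor.submit(() -> callModel(p)))
                    .toList();
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());
            }
            return results;
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        List<String> results = callAll(List.of("p1", "p2", "p3", "p4", "p5"));
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // All five 100 ms calls run concurrently, so this finishes in roughly
        // one call's latency, not five.
        System.out.println(results.size() + " calls in ~" + elapsedMs + " ms");
    }
}
```

Java 21's structured concurrency (`StructuredTaskScope`, still in preview) tightens this further by scoping the child tasks' lifetimes, but the plain executor above already needs no framework at all.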


Contributing

Add a Markdown file under radars/java/radar/YYYY-MM-DD/ with the required frontmatter:

```yaml
---
title: "Entry Name"
ring: adopt | trial | assess | hold
quadrant: ai-frameworks | model-clients-and-apis | inference-and-data | java-patterns
tags: [comma, separated]
featured: true
---
```

See the GitHub repository for details.