Qdrant is a high-performance, open-source vector database using the HNSW algorithm — well-suited for self-hosted deployments needing fast similarity search at scale, with first-class support from both Spring AI and LangChain4j.
Why Trial
Qdrant's engineering focus is performance and operational simplicity for self-hosted use. Written in Rust, it uses HNSW for approximate nearest-neighbour search and communicates via gRPC for low-overhead Java integration. Both Spring AI and LangChain4j provide auto-configured integrations, making it a drop-in choice in either framework.
Choose Qdrant over pgvector when: you need dedicated vector infrastructure independent of Postgres, need advanced filtering during search, or are running at a scale where a Rust-native search engine matters.
Choose Qdrant over Weaviate when: you want a focused, operationally simpler vector store without Weaviate's broader feature surface (multi-tenancy, generative AI integration, graph traversal).
Spring AI Integration
```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-vector-store-qdrant</artifactId>
</dependency>
```
```yaml
spring:
  ai:
    vectorstore:
      qdrant:
        host: localhost
        port: 6334                # gRPC port
        collection-name: my-documents
        initialize-schema: true   # auto-creates the collection
```
The `initialize-schema: true` flag creates the Qdrant collection on startup if it doesn't exist, sized to match your configured embedding model's dimensions — no manual collection setup required.
Usage is identical to every other Spring AI vector store:
```java
@Autowired
VectorStore vectorStore;

vectorStore.add(List.of(
    new Document("Qdrant stores vectors and metadata together.", Map.of("source", "docs"))
));

List<Document> results = vectorStore.similaritySearch(
    SearchRequest.query("fast vector similarity search")
        .withTopK(5)
        .withFilterExpression("source == 'docs'")
);
```
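Filter expressions can also be built programmatically rather than as strings. A sketch using Spring AI's `FilterExpressionBuilder`, equivalent to the string form above (assumes the same `vectorStore` bean):

```java
FilterExpressionBuilder b = new FilterExpressionBuilder();

List<Document> results = vectorStore.similaritySearch(
    SearchRequest.query("fast vector similarity search")
        .withTopK(5)
        // type-safe equivalent of the string filter "source == 'docs'"
        .withFilterExpression(b.eq("source", "docs").build())
);
```

The builder form catches malformed filters at compile time instead of at query time, which is useful once filters are assembled from user input.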
LangChain4j Integration
```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-qdrant</artifactId>
    <version>1.0.0</version>
</dependency>
```
```java
EmbeddingStore<TextSegment> qdrant = QdrantEmbeddingStore.builder()
    .host("localhost")
    .port(6334)
    .collectionName("my-documents")
    .build();

// Used in a RAG chain
ContentRetriever retriever = EmbeddingStoreContentRetriever.builder()
    .embeddingStore(qdrant)
    .embeddingModel(embeddingModel)
    .maxResults(5)
    .build();
```
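The retriever can then back a RAG-enabled service. A minimal sketch, assuming a LangChain4j chat model named `chatModel` and a hypothetical `Assistant` interface (the builder method was `chatLanguageModel` in pre-1.0 versions):

```java
// Hypothetical service interface; LangChain4j generates the implementation
interface Assistant {
    String chat(String userMessage);
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)           // any ChatModel implementation
    .contentRetriever(retriever)    // retrieved segments are injected into the prompt
    .build();

String answer = assistant.chat("How does Qdrant index vectors?");
```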
Both frameworks use Qdrant's gRPC protocol for efficient, low-latency communication — particularly important for high-throughput embedding insertion pipelines.
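For bulk insertion on the LangChain4j side, `EmbeddingStoreIngestor` batches the split-embed-store steps. A sketch assuming the `qdrant` store and `embeddingModel` from above, and a `documents` list loaded elsewhere:

```java
EmbeddingStoreIngestor ingestor = EmbeddingStoreIngestor.builder()
    // chunk documents before embedding; sizes here are illustrative
    .documentSplitter(DocumentSplitters.recursive(300, 30))
    .embeddingModel(embeddingModel)
    .embeddingStore(qdrant)
    .build();

ingestor.ingest(documents); // embeds and upserts each segment into Qdrant
```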
Running Locally
```yaml
# docker-compose.yml
services:
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"   # REST API
      - "6334:6334"   # gRPC (what Spring AI / LangChain4j use)
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  qdrant_data:   # named volume must be declared at the top level
```
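Once the container is up, the REST port (6333 in the compose file) gives a quick smoke test:

```shell
docker compose up -d
# list collections — empty until initialize-schema or a first ingest creates one
curl http://localhost:6333/collections
```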
For integration tests, pair with Testcontainers:
```java
@Container
static QdrantContainer qdrant = new QdrantContainer("qdrant/qdrant:latest");
```
Testcontainers 1.20+ includes a first-class QdrantContainer — no custom container definition needed.
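A fuller sketch of wiring the container into a Spring Boot test: the property names match the Spring AI configuration above, and `getMappedPort(6334)` is the standard Testcontainers call for reaching the container's gRPC port:

```java
@Testcontainers
@SpringBootTest
class QdrantVectorStoreIT {

    @Container
    static QdrantContainer qdrant = new QdrantContainer("qdrant/qdrant:latest");

    // point Spring AI at the container's dynamically mapped gRPC port
    @DynamicPropertySource
    static void qdrantProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.ai.vectorstore.qdrant.host", qdrant::getHost);
        registry.add("spring.ai.vectorstore.qdrant.port", () -> qdrant.getMappedPort(6334));
    }
}
```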
Qdrant Cloud
For managed deployments, Qdrant Cloud offers a hosted version. The Java configuration is identical — change the host to your cloud cluster URL and add an API key:
```yaml
spring:
  ai:
    vectorstore:
      qdrant:
        host: my-cluster.qdrant.io
        port: 6334
        api-key: ${QDRANT_API_KEY}
        use-tls: true
```
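The LangChain4j builder takes the same connection details; `useTls` and `apiKey` are options on the `QdrantEmbeddingStore` builder (a sketch, assuming the 1.0.0 artifact above):

```java
EmbeddingStore<TextSegment> qdrant = QdrantEmbeddingStore.builder()
    .host("my-cluster.qdrant.io")
    .port(6334)
    .useTls(true)                             // Qdrant Cloud requires TLS
    .apiKey(System.getenv("QDRANT_API_KEY"))  // cluster API key from the env
    .collectionName("my-documents")
    .build();
```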
Key Characteristics
| Property | Value |
|---|---|
| Written in | Rust (high performance) |
| Protocol | gRPC (primary), REST |
| Index type | HNSW |
| Spring AI support | spring-ai-starter-vector-store-qdrant |
| LangChain4j support | langchain4j-qdrant |
| Testcontainers | QdrantContainer (1.20+) |
| Deployment | Self-hosted (Docker) or Qdrant Cloud |