Engineering Blog

How we build Vector Panda: architecture decisions, performance deep dives, and lessons from running a distributed vector search system.

Architecture

Why We Chose Exhaustive Search Over Approximate Indexes

Nearly every vector database relies on an approximate index like HNSW or IVF, trading recall for speed. We went the opposite direction: exhaustive search with PCA-based dimensionality reduction and aggressive distribution across workers. Here's why that decision gives us 100% recall at competitive latencies.

Mike Brannigan · March 2026
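The core of the exhaustive approach can be sketched in a few lines: score every vector against the query, then keep the k closest. This is an illustrative sketch (the function names and the in-memory `Vec<Vec<f32>>` layout are ours, not Vector Panda's actual API); PCA reduction would happen before this scan:

```rust
/// Squared Euclidean distance between two equal-length vectors.
fn dist2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum()
}

/// Scan every vector (100% recall by construction) and return the
/// indices of the k nearest, closest first.
fn exhaustive_top_k(db: &[Vec<f32>], query: &[f32], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = db
        .iter()
        .enumerate()
        .map(|(i, v)| (i, dist2(v, query)))
        .collect();
    // Total order is safe here because distances are finite.
    scored.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}

fn main() {
    let db = vec![vec![0.0, 0.0], vec![1.0, 1.0], vec![0.1, 0.0]];
    println!("{:?}", exhaustive_top_k(&db, &[0.0, 0.0], 2)); // → [0, 2]
}
```

Because the scan visits every vector, recall is exactly 100%; the latency story comes from shrinking each distance computation (PCA) and splitting the scan across workers.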
Engineering

Hot, Warm, Paused: A Three-Tier Storage Architecture

Not all vectors need sub-10ms query times. Our tiered storage model lets you keep frequently-queried data on NVMe (hot), less active data on SSD (warm), and archived data on HDD (paused) at a fraction of the cost. We explain the coordinator logic that routes queries and manages tier transitions.

Mike Brannigan · March 2026
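The tier-selection side of that coordinator logic might look roughly like this; the idle-time thresholds below are invented for illustration, not the production policy:

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum Tier {
    Hot,    // NVMe: frequently queried, sub-10ms targets
    Warm,   // SSD: less active data
    Paused, // HDD: archived, cheapest
}

/// Pick a storage tier from time since the collection was last queried.
/// Thresholds here are hypothetical stand-ins for the real policy.
fn tier_for(idle: Duration) -> Tier {
    match idle.as_secs() {
        0..=3_600 => Tier::Hot,        // queried within the hour
        3_601..=604_800 => Tier::Warm, // within the last week
        _ => Tier::Paused,             // older than a week
    }
}

fn main() {
    println!("{:?}", tier_for(Duration::from_secs(60))); // → Hot
}
```

A real coordinator would combine this with hysteresis so a collection doesn't flap between tiers around a threshold, which is part of what the transition logic in the post handles.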
Engineering

Building a Coordinator-Worker Architecture in Rust

Our coordinator distributes vector shards across a fleet of workers connected via WebSocket. We cover the epoch system that ensures consistency, the discovery service that finds workers on the network, and the query fanout that aggregates partial results into a single ranked response.

Mike Brannigan · February 2026
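The final step of that fanout, aggregating partial results into one ranked response, can be sketched as a merge of per-worker top-k lists. The `Hit` type and `merge_top_k` name are illustrative, not the actual codebase:

```rust
/// One scored hit returned by a worker: (vector id, distance).
type Hit = (u64, f32);

/// Merge per-worker partial top-k lists into a single ranked top-k,
/// smallest distance first.
fn merge_top_k(partials: Vec<Vec<Hit>>, k: usize) -> Vec<Hit> {
    let mut all: Vec<Hit> = partials.into_iter().flatten().collect();
    // Distances are finite, so a total order via partial_cmp is safe.
    all.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    all.truncate(k);
    all
}

fn main() {
    // Two workers each return their local best hits.
    let partials = vec![vec![(1, 0.5), (2, 0.9)], vec![(3, 0.1)]];
    println!("{:?}", merge_top_k(partials, 2)); // → [(3, 0.1), (1, 0.5)]
}
```

The key property is that each worker only needs to return its local top k for the global merge to be exact, which keeps the WebSocket responses small regardless of shard size.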
Product

Pricing Vector Search: Per-Vector Storage vs. Per-Query Billing

We analyzed billing models across Pinecone, Weaviate Cloud, Qdrant, and Zilliz. Most charge per-query or per-compute-unit, making costs unpredictable. We chose per-vector storage billing with unlimited queries included. Here's the math behind that decision and why it works.

Mike Brannigan · February 2026
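The basic shape of the comparison is easy to state in code. Both rates below are hypothetical placeholders, not anyone's published pricing; the point is that per-vector cost is flat in query volume while per-query cost grows with it:

```rust
/// Monthly cost under per-vector storage billing (hypothetical rate,
/// dollars per million stored vectors). Queries are free, so the
/// result does not depend on traffic.
fn storage_billing(vectors: u64, rate_per_million: f64) -> f64 {
    vectors as f64 / 1_000_000.0 * rate_per_million
}

/// Monthly cost under per-query billing (hypothetical rate, dollars
/// per thousand queries). Cost scales directly with traffic.
fn query_billing(queries: u64, rate_per_thousand: f64) -> f64 {
    queries as f64 / 1_000.0 * rate_per_thousand
}

fn main() {
    // 5M vectors at a flat $2/M: the bill is $10 at any query volume.
    println!("{}", storage_billing(5_000_000, 2.0)); // → 10
    // The same workload at $0.50/1k queries grows with usage.
    println!("{}", query_billing(100_000, 0.5)); // → 50
}
```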
Engineering

Zero-Config Indexing: How We Eliminate Parameter Tuning

Traditional vector databases require setting nprobe, ef_search, M, and dozens of other parameters. Our approach auto-selects index strategies based on collection size, dimensionality, and query patterns. Upload your vectors and search immediately — no configuration needed.

Mike Brannigan · January 2026
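A zero-config selector of this kind reduces to a decision function over collection stats. The strategy names and thresholds below are made up to show the shape of the idea, not the real heuristics:

```rust
#[derive(Debug, PartialEq)]
enum Strategy {
    ExhaustiveFlat,       // small collection: scan raw vectors
    ExhaustivePcaReduced, // high-dimensional: reduce first, then scan
    Sharded,              // large: distribute the scan across workers
}

/// Auto-select an index strategy from collection size and
/// dimensionality. Thresholds are hypothetical for illustration;
/// query patterns would feed in as a third input.
fn select_strategy(num_vectors: u64, dim: u32) -> Strategy {
    if num_vectors > 10_000_000 {
        Strategy::Sharded
    } else if dim > 512 {
        Strategy::ExhaustivePcaReduced
    } else {
        Strategy::ExhaustiveFlat
    }
}

fn main() {
    println!("{:?}", select_strategy(50_000, 128)); // → ExhaustiveFlat
}
```

Because the decision is derived from data the system already observes, there is nothing for the user to tune, which is the whole point of the post.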
Architecture

Epoch-Based Consistency in a Distributed Vector Store

When vectors are appended, deleted, or resharded, every worker needs a consistent view of the data. We use an epoch system where the coordinator bumps a version number and workers transition atomically. This post walks through the protocol messages, the hot-append optimization, and failure recovery.

Mike Brannigan · January 2026
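The epoch check at the heart of such a protocol is small: a worker rejects any message stamped with an epoch other than its own, forcing the sender to resync. This is a minimal sketch under assumed message and type names, not the actual protocol code:

```rust
/// A worker's view of the current epoch.
struct Worker {
    epoch: u64,
}

#[derive(Debug, PartialEq)]
enum Reply {
    Ok,
    StaleEpoch { current: u64 },
}

impl Worker {
    /// Apply an append only if the message's epoch matches ours;
    /// otherwise tell the sender which epoch we are on.
    fn apply_append(&mut self, msg_epoch: u64) -> Reply {
        if msg_epoch != self.epoch {
            return Reply::StaleEpoch { current: self.epoch };
        }
        // ... append vectors to the local shard here ...
        Reply::Ok
    }

    /// Coordinator-driven transition: epochs only move forward,
    /// so a worker can never atomically step back to stale state.
    fn advance_to(&mut self, new_epoch: u64) {
        assert!(new_epoch > self.epoch, "epochs are monotonic");
        self.epoch = new_epoch;
    }
}

fn main() {
    let mut w = Worker { epoch: 3 };
    println!("{:?}", w.apply_append(2)); // → StaleEpoch { current: 3 }
    w.advance_to(4);
    println!("{:?}", w.apply_append(4)); // → Ok
}
```

Monotonic epochs are what make the transition atomic from the coordinator's perspective: a worker is either entirely in the old epoch or entirely in the new one, never a mix.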