Add PiPNN: high-performance alternative index builder (7.6-10x faster graph build)#856
SeliMeli wants to merge 22 commits into microsoft:main
Conversation
Implements the PiPNN algorithm (arXiv:2602.21247) as a new graph index builder for DiskANN. PiPNN replaces incremental beam-search insertion with partition-then-build using GEMM-based all-pairs distances and LSH-based HashPrune edge merging.

Key components:
- Randomized Ball Carving partitioning with fused GEMM + assignment
- GEMM-based leaf building with bi-directed k-NN
- HashPrune with per-point Mutex reservoirs for parallel edge merging
- DiskANN-compatible graph output format

Achieves 11.2x build speedup on SIFT-1M (128d) and 3.1x on higher-dimensional datasets while maintaining equivalent graph quality.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
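The partition-then-build flow in this commit can be sketched end to end. Everything below is illustrative: brute-force loops stand in for the GEMM kernels, Randomized Ball Carving, and HashPrune, and all names are hypothetical, not the crate's API.

```rust
// Schematic of partition-then-build: assign points to leaders, then
// build k-NN edges inside each leaf independently.

fn l2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum()
}

/// Assign every point to its nearest leader (stand-in for the fused
/// GEMM + assignment step).
fn partition(data: &[Vec<f32>], leaders: &[usize]) -> Vec<Vec<usize>> {
    let mut leaves = vec![Vec::new(); leaders.len()];
    for (i, p) in data.iter().enumerate() {
        let nearest = leaders
            .iter()
            .enumerate()
            .min_by(|(_, &a), (_, &b)| {
                l2(p, &data[a]).partial_cmp(&l2(p, &data[b])).unwrap()
            })
            .map(|(j, _)| j)
            .unwrap();
        leaves[nearest].push(i);
    }
    leaves
}

/// Brute-force k-NN inside one leaf (stand-in for the GEMM-based
/// all-pairs distances + quickselect extraction).
fn leaf_knn(data: &[Vec<f32>], leaf: &[usize], k: usize) -> Vec<(usize, usize)> {
    let mut edges = Vec::new();
    for &i in leaf {
        let mut dists: Vec<(usize, f32)> = leaf
            .iter()
            .filter(|&&j| j != i)
            .map(|&j| (j, l2(&data[i], &data[j])))
            .collect();
        dists.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
        for &(j, _) in dists.iter().take(k) {
            edges.push((i, j));
        }
    }
    edges
}

fn main() {
    let data: Vec<Vec<f32>> = (0..8).map(|i| vec![i as f32, 0.0]).collect();
    let leaders = vec![0, 7];
    let leaves = partition(&data, &leaders);
    let mut graph: Vec<(usize, usize)> = Vec::new();
    for leaf in &leaves {
        graph.extend(leaf_knn(&data, leaf, 2));
    }
    // Each point gets k=2 intra-leaf neighbors: 8 points x 2 edges.
    assert_eq!(graph.len(), 16);
    println!("{} edges", graph.len());
}
```

Because each leaf is processed independently, there is no sequential insert-then-search dependency, which is what makes the real build embarrassingly parallel.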
- Add thread-local LeafBuffers to avoid repeated allocation of distance matrices, dot products, and local data arrays during leaf building. Reduces leaf+merge wall time by 15% on 384d data.
- Add fp16 input support (auto-converts to f32 on load)
- Add cosine_normalized distance metric support
- Add --save-path to write a DiskANN-compatible graph file
- Remove jemalloc (slower for this workload)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace matrixmultiply with cblas_sgemm (OpenBLAS) for the leaf-build and partition GEMM kernels. OpenBLAS has highly optimized AVX2 micro-kernels for AMD EPYC that outperform matrixmultiply on high-dimensional data.

Results (384d Enron): 23.8s -> 21.0s (12% faster)
Results (128d SIFT): 8.4s -> 7.7s (8% faster)

Requires libopenblas-dev: apt install libopenblas-dev

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Skip recursive partitioning for clusters at depth >= 2 that are < 3x c_max (force-split is cheaper than another full GEMM + assignment cycle)
- Smaller c_max (512) works better for 384d since leaf GEMM scales O(d)
- Fix test signatures for the cosine parameter

Results:
SIFT-1M (128d): 8.0s build = 10.2x speedup, recall@10=0.986 (L=100)
Enron (384d): 15.2s build = 5.1x speedup, recall@1000=0.949 (L=2000)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add sub-phase timing to parallel_partition for profiling. Update README with accurate final benchmark numbers:

SIFT-1M: 8.0s (10.2x), recall@10=0.985
Enron: 15.2s (5.1x), recall@1000=0.949

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Batched edge insertion: sort edges by source point, acquire each lock once per unique source (cleaner than per-edge locking)
- Add a dynamic set_blas_threads() API (not currently used for partition, since multi-threaded BLAS is slower for tall-thin matrices)
- Remove the unused partition_assign_impl wrapper
- Remove dead RP code

Tested: SIFT-1M 7.9s (10.3x), Enron 15.0s (5.2x), recall unchanged

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
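The batched-insertion idea can be sketched as follows. The per-point adjacency layout and names are hypothetical, and a full sort of the edge list stands in for the real sort-by-source:

```rust
// Sort edges so all edges for one source are adjacent, then take each
// source's lock once per run instead of once per edge.

use std::sync::Mutex;

fn insert_batched(adj: &[Mutex<Vec<u32>>], mut edges: Vec<(u32, u32)>) {
    // Sorting the whole tuple groups edges by source (and makes the
    // per-source order deterministic for this sketch).
    edges.sort_unstable();
    let mut i = 0;
    while i < edges.len() {
        let src = edges[i].0;
        let start = i;
        while i < edges.len() && edges[i].0 == src {
            i += 1;
        }
        // One lock acquisition covers the whole run [start, i).
        let mut list = adj[src as usize].lock().unwrap();
        for &(_, dst) in &edges[start..i] {
            list.push(dst);
        }
    }
}

fn main() {
    let adj: Vec<Mutex<Vec<u32>>> =
        (0..4).map(|_| Mutex::new(Vec::new())).collect();
    insert_batched(&adj, vec![(2, 0), (0, 1), (2, 3), (0, 2), (2, 1)]);
    assert_eq!(*adj[0].lock().unwrap(), vec![1, 2]);
    assert_eq!(*adj[2].lock().unwrap(), vec![0, 1, 3]);
    println!("ok");
}
```

With edges grouped this way, the number of lock acquisitions drops from one per edge to one per distinct source in the batch, which matches the ~10x reduction the commit reports.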
Callgrind/cachegrind profiling revealed the instruction breakdown:

42% sgemm_kernel (compute — irreducible)
7% memset (zeroing buffers — partially eliminated)
8% hash_prune (lock + insert)
5% quicksort (edge sorting — kept: reduces lock ops 10x)
3% malloc/free

Applied optimizations:
- Cosine distance: convert dot->distance in place, eliminating one n*n buffer allocation + memset per leaf (saves 6.8% of instructions)
- Sorted edge batching restored (10x fewer lock acquisitions)
- Dynamic BLAS threading API added (not used: tall-thin matrices don't benefit from multi-threaded BLAS)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reuses DiskANN's scalar quantization approach: train per-dimension shift/scale, pack vectors to 1 bit/dim, and use Hamming distance for both partition assignment and leaf building.

Two modes:
--quantize-bits 1: fully quantized (partition + leaf use Hamming)
Without the flag: full precision (the original GEMM-based approach)

Enron 384d results vs the Vamana 1-bit baseline (28.8s, recall 0.958):
PiPNN 1bit-leaf: 12.5s (2.3x faster), recall 0.945
PiPNN 1bit-full: 10.2s (2.8x faster), recall 0.932

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Blocked all-pairs Hamming distance matrix (L1-cache friendly)
- u64 XOR+popcount fast path (8 bytes/op instead of 1)
- Pre-extract leader/point data into contiguous arrays
- Larger partition stripes (32K) for the quantized path

Enron 384d 1-bit: 10.2s -> 8.5s (17% faster), 3.4x vs Vamana 1-bit
SIFT FP: unchanged at 7.9s (1-bit not beneficial for 128d)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
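A minimal sketch of the 1-bit pack plus u64 XOR+popcount fast path, assuming per-dimension thresholds come from the trained scalar quantizer; function names are illustrative, not the crate's API:

```rust
// Pack vectors to 1 bit/dim, then compare 64 dimensions per
// XOR+popcount instead of one comparison per byte.

/// Pack a vector to 1 bit/dim: a bit is set iff the value exceeds its
/// per-dimension threshold (the trained shift from scalar quantization).
fn pack_1bit(v: &[f32], thresholds: &[f32]) -> Vec<u64> {
    let mut words = vec![0u64; (v.len() + 63) / 64];
    for (d, (&x, &t)) in v.iter().zip(thresholds).enumerate() {
        if x > t {
            words[d / 64] |= 1u64 << (d % 64);
        }
    }
    words
}

/// Hamming distance over packed codes: 64 dims per XOR+popcount.
fn hamming(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let t = vec![0.0; 128];
    let a = pack_1bit(&vec![1.0; 128], &t); // all bits set
    let b = pack_1bit(&vec![-1.0; 128], &t); // all bits clear
    assert_eq!(hamming(&a, &a), 0);
    assert_eq!(hamming(&a, &b), 128);
    println!("ok");
}
```

The blocked all-pairs matrix in the commit tiles this inner loop so the packed codes for a block stay resident in L1 while every pair in the block is compared.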
Inline XOR+popcount in the distance matrix and partition to eliminate function-call overhead (was 38% of 1-bit instructions per cachegrind). Reverted the pre-sorted edge experiment — per-source Vec construction and dedup overhead exceeded the sort savings.

Final stable results:
SIFT FP: 7.9s (10.3x vs Vamana)
Enron FP: 15.5s (5.0x vs Vamana FP)
Enron 1-bit: 8.3s (3.5x vs Vamana 1-bit)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Eliminate the per-point Vec allocation in extract_knn by reusing a single buffer. Use u32 instead of usize for index storage (halves memory).

SIFT: 7.7s (was 7.9s, 3% faster)
Others: within variance

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Integrate PiPNN as an alternative graph construction algorithm selectable via BuildAlgorithm::PiPNN in the disk-index build config.

Key changes:
- Add a BuildAlgorithm enum to diskann-disk behind a pipnn feature flag
- Wire PiPNN through the benchmark JSON config and diskann-tools API
- Use DiskANN's Distance<f32,f32> functor for all metrics (SIMD)
- Use DiskANN's ScalarQuantizer for 1-bit SQ (no duplicate training)
- Reuse DiskANN's alpha, medoid (L2), and graph format
- Remove the standalone pipnn-bench binary (use diskann-benchmark)
- Production hardening: tracing, Result types, config validation, aligned quantized storage, mutex poison recovery
- 111 tests (102 pipnn + 9 build_algorithm), 23 criterion benchmarks
- Regression verified: SIFT recall@10=98.3%, Enron recall@1000=94.9%

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Switch the GEMM backend from OpenBLAS (cblas_sgemm) to diskann-linalg (faer) to align with DiskANN's linalg choice and remove the C library dependency. Optimize extract_knn by sorting 4-byte u32 indices instead of 8-byte (u32, f32) pairs, reducing memory movement during quickselect (~1.5x faster kNN extraction, ~18% faster full pipeline on Enron fp16 1M).

- Remove cblas-sys and matrixmultiply deps; add diskann-linalg
- Delete build.rs (no more OpenBLAS linker search)
- Remove dead gemm_aat/gemm_abt functions from leaf_build and hash_prune
- Remove the set_blas_threads/openblas_set_num_threads FFI

Enron fp16 regression (PiPNN FP, k=5, 8 threads):
Build: 53.8s → 44.3s (-17.6%), Recall: 94.9% (unchanged), QPS: 16.5 → 16.7

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Pack ReservoirEntry to 8 bytes (was 12) using a bf16 distance (u16): 4B neighbor + 2B hash + 2B bf16 distance = 8 bytes, zero padding. bf16 bit patterns are monotonic for non-negative values, enabling integer comparison instead of f32 partial_cmp.
- Merge undersized partition clusters into the nearest large cluster by centroid L2 distance instead of by size (matches paper Algorithm 2).
- Remove the hardcoded 1000-leader cap; let p_samp control the leader count directly. Add adaptive stripe sizing in partition_assign to bound per-stripe GEMM memory to ~64 MB regardless of leader count.
- Lower the default p_samp from 0.05 to 0.005 (practical at 1M+ scale).

Enron fp16 (1,087,932 pts, 384d, cosine_normalized):
Build: 43-46s, Recall@1000: 94.8-94.9%, QPS: 16-17

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
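The bf16 trick can be illustrated directly: for non-negative f32 values the IEEE-754 bit pattern is monotonically increasing, so the top 16 bits order the same way the floats do. A sketch with hypothetical names:

```rust
// Store a non-negative f32 distance as its top 16 bits (bf16) so
// entries compare with a plain integer compare instead of partial_cmp.

/// Truncate an f32 to bf16 (truncation preserves the ordering of
/// non-negative values; the real code may round instead).
fn to_bf16(x: f32) -> u16 {
    (x.to_bits() >> 16) as u16
}

/// 8-byte reservoir entry: 4B neighbor id + 2B hash + 2B bf16 distance.
#[repr(C)]
#[derive(Clone, Copy)]
struct ReservoirEntry {
    neighbor: u32,
    hash: u16,
    dist_bf16: u16,
}

fn main() {
    // Packs to exactly 8 bytes with zero padding.
    assert_eq!(std::mem::size_of::<ReservoirEntry>(), 8);

    // Monotonic for non-negative values: larger distance => larger code.
    let ds = [0.0f32, 0.5, 1.25, 3.0, 1e6];
    let codes: Vec<u16> = ds.iter().map(|&d| to_bf16(d)).collect();
    let mut sorted = codes.clone();
    sorted.sort_unstable(); // plain integer sort
    assert_eq!(codes, sorted); // order agrees with the f32 order
    println!("ok");
}
```

The monotonicity only holds for non-negative inputs (negative floats have the sign bit set, so their bit patterns sort backwards), which is fine here because distances are non-negative.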
- Replace the use_cosine bool with a Metric enum in partition_assign and build_leaf. Each metric variant (L2, Cosine, CosineNormalized, InnerProduct) now has explicit distance computation — no catch-all _ => branches.
- Cosine (unnormalized) now correctly normalizes by ||a||*||b|| in both partition and leaf build, instead of being treated the same as CosineNormalized.
- Remove the dead compute_distance_matrix and compute_distance_matrix_direct functions. The GEMM-based path in build_leaf_with_buffers is the single source of truth for leaf distance computation.
- Partition now receives the metric via PartitionConfig and uses it for leader-assignment distances (it was always L2 before).

Enron fp16 (cosine_normalized): Build 42.9s, Recall 94.7%, QPS 17.3

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Change extract_graph(&self) to extract_graph(self) so that sketches (~50 MB) are dropped before extraction and each reservoir is freed immediately after its neighbors are extracted via into_par_iter(). This prevents ~1 GB of reservoir + sketch memory from staying alive during final_prune.

Benchmark (Enron 1M, 384d, cosine_normalized): no regression.
Build 45.1s, Recall@1000 94.74%, Peak RSS 3.05 GB (was 3.2 GB).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
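The consuming-receiver pattern behind extract_graph(self) can be shown in miniature (single-threaded here; the real code does the same with rayon's into_par_iter). All types and sizes are illustrative:

```rust
// Taking `self` by value lets the method drop the sketches up front and
// move each reservoir out, so its memory is freed during extraction
// instead of surviving until the caller drops the whole struct.

struct Reservoir(Vec<u32>);

struct HashPrune {
    sketches: Vec<u8>, // stands in for the ~50 MB LSH sketches
    reservoirs: Vec<Reservoir>,
}

impl HashPrune {
    /// Consumes self: sketches drop here, and into_iter() moves each
    /// Reservoir out so it is freed right after its neighbors are taken.
    fn extract_graph(self) -> Vec<Vec<u32>> {
        drop(self.sketches); // not needed once extraction starts
        self.reservoirs
            .into_iter()
            .map(|r| r.0) // move the neighbor list out of the entry
            .collect()
    }
}

fn main() {
    let hp = HashPrune {
        sketches: vec![0; 1024],
        reservoirs: (0..3).map(|i| Reservoir(vec![i, i + 1])).collect(),
    };
    let graph = hp.extract_graph();
    assert_eq!(graph.len(), 3);
    assert_eq!(graph[1], vec![1, 2]);
    println!("ok");
}
```

With a &self receiver, the borrow checker would force the reservoirs to stay alive for the caller, so the memory could not be released until after final_prune; taking ownership removes that constraint.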
RSS profiling showed ~600 MB of freed-but-retained memory after the PiPNN build completes, inflating peak RSS for the downstream disk layout/search.

Root causes identified via /proc/self/statm instrumentation:
1. Partition GEMM buffers freed but held in glibc per-thread arenas
2. Thread-local LeafBuffers pinning arena heap segments
3. Freed reservoir memory not returned to the OS

Fixes:
- Call malloc_trim(0) after partition, extraction, and build completion to return freed pages across all glibc arenas
- Release thread-local LeafBuffers after leaf build so arena segments containing freed reservoir data can be reclaimed
- (Prior commit) Consuming extract_graph(self) frees reservoirs during extraction via into_par_iter

Benchmark (Enron 1M, 384d, cosine_normalized): no regression.
Build 44.3s, Recall@1000 94.744%, QPS 17.2, Peak RSS 3.21 GB.
PiPNN completion RSS reduced ~140 MB (2337 vs 2478 MB).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After PiPNN build completes, the f32 data (~1.6 GB) and graph are freed but glibc retains the pages. malloc_trim(0) returns them to the OS, dropping RSS from ~2.4 GB to ~129 MB before disk layout starts. This prevents the freed build memory from inflating RSS during the subsequent disk layout and search phases. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
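On glibc targets the trim call is a one-line FFI. A hedged sketch (the `libc` crate also exposes `malloc_trim`; non-glibc platforms should compile it out, since the symbol is glibc-specific):

```rust
// Declare the glibc symbol directly and wrap it so the call is a no-op
// on targets without glibc.

#[cfg(target_env = "gnu")]
extern "C" {
    // glibc: release free heap pages back to the OS; returns 1 if any
    // memory was released, 0 otherwise.
    fn malloc_trim(pad: usize) -> i32;
}

/// Best-effort page release after a build phase; a no-op off glibc.
fn trim_heap() {
    #[cfg(target_env = "gnu")]
    unsafe {
        malloc_trim(0);
    }
}

fn main() {
    // Allocate and drop a large buffer, then ask glibc to return the
    // freed pages instead of keeping them in its arenas.
    let big = vec![0u8; 16 << 20];
    drop(big);
    trim_heap();
    println!("trimmed");
}
```

Calling this between phases is what lets the freed build memory show up as an actual RSS drop rather than staying parked in the allocator's arenas.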
VmHWM profiling (the kernel's peak-RSS tracker) revealed that the true peak occurs during partition_assign_impl, when 8 concurrent stripes each allocate ~90 MB of GEMM buffers (p_data + dots), spiking RSS by ~1.4 GB.

Fix: reduce the per-stripe GEMM output target from 64 MB to 16 MB. Each stripe now uses ~22 MB instead of ~90 MB. With 8 threads, concurrent GEMM memory drops from ~720 MB to ~180 MB. Partition is <5% of total build time, so the throughput cost is negligible.

Build-only peak RSS: 3.19 GB -> 2.58 GB (-19%).
Benchmark: Build 44.0s, Recall@1000 94.744%, Peak RSS 2.61 GB.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Make build_internal, partition, leaf_build, hash_prune sketches, find_medoid, and final_prune generic over T: VectorRepr instead of requiring &[f32]. Each component already copies data into local buffers (partition stripes, LeafBuffers, etc.), so the T->f32 conversion happens naturally during those copies with zero extra allocation.

build_typed<T> no longer allocates a full f32 copy upfront — it passes &[T] directly through the build pipeline. For f32 data, as_f32_into is a memcpy (zero overhead). For f16, it converts on the fly, saving the 793 MB f32 copy that was the single largest contributor to peak RSS.

diskann-disk's FP build path now loads data as native T and calls build_typed directly. The SQ path retains the f32 conversion (the quantizer needs it).

Build-only peak RSS: 2.58 GB -> 1.77 GB (-31%).
Full benchmark: Build 41.5s, Recall@1000 94.744%, Peak RSS 1.80 GB (-44%).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add PiPNNBuildStats struct with per-phase timing (sketches, partition,
leaf build, graph extraction, final prune). Printed to stdout during
benchmark so users can see the timing breakdown like Vamana does.
Example output:
PiPNN Build Timing
LSH sketches: 0.422s
Partition: 4.044s (22936 leaves)
Leaf build: 6.996s (71065098 edges)
Graph extract: 0.466s
Final prune: 0.035s
Total: 12.284s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The PerfLogger checkpoints go to tracing (OpenTelemetry spans), which is not visible in benchmark stdout. Add println! for the three outer phases (PQ compression, graph build, disk layout) so both Vamana and PiPNN show a timing breakdown alongside the existing "Build time: Xs" line.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@SeliMeli please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.
Pull request overview
Adds PiPNN as a new (feature-gated) disk-index graph construction algorithm, integrating it into the disk index build pipeline and benchmark/tooling so builders can choose between Vamana and PiPNN.
Changes:
- Introduce a new diskann-pipnn crate implementing partitioning, leaf-build, HashPrune merging, and optional final pruning.
- Add BuildAlgorithm to the disk build parameters/config and wire it through diskann-disk, diskann-tools, and diskann-benchmark.
- Add build-phase timing output and a PiPNN-specific 1-bit SQ reuse path in the disk index builder.
Reviewed changes
Copilot reviewed 24 out of 25 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| diskann-tools/src/utils/build_disk_index.rs | Adds build_algorithm to tool-level disk index build parameters and passes it into disk build configuration. |
| diskann-tools/Cargo.toml | Enables diskann-disk’s pipnn feature for tools. |
| diskann-providers/src/model/graph/provider/async_/inmem/scalar.rs | Exposes quantizer() accessor to reuse trained SQ params for PiPNN. |
| diskann-pipnn/src/lib.rs | Defines PiPNN public API, config, error types, and metric serde. |
| diskann-pipnn/src/builder.rs | Implements end-to-end PiPNN build orchestration + optional final prune + graph save format. |
| diskann-pipnn/src/partition.rs | Implements RBC partitioning (float + quantized variants) and associated tests. |
| diskann-pipnn/src/leaf_build.rs | Implements GEMM-based all-pairs distances, kNN extraction, and edge materialization. |
| diskann-pipnn/src/hash_prune.rs | Implements LSH sketching + per-point reservoir HashPrune and extraction. |
| diskann-pipnn/src/gemm.rs | Wraps diskann-linalg GEMM calls used by PiPNN. |
| diskann-pipnn/src/quantize.rs | Implements 1-bit quantization + Hamming-distance helpers and tests. |
| diskann-pipnn/benches/pipnn_bench.rs | Adds Criterion benchmarks for PiPNN hot paths. |
| diskann-pipnn/README.md | Documents PiPNN usage, parameters, and expected performance. |
| diskann-pipnn/Cargo.toml | Adds the new crate manifest + deps. |
| diskann-disk/src/lib.rs | Re-exports BuildAlgorithm from build configuration. |
| diskann-disk/src/build/mod.rs | Re-exports BuildAlgorithm for consumers. |
| diskann-disk/src/build/configuration/mod.rs | Adds build_algorithm module and re-export. |
| diskann-disk/src/build/configuration/disk_index_build_parameter.rs | Stores selected build algorithm in disk build parameters and exposes accessor. |
| diskann-disk/src/build/configuration/build_algorithm.rs | Defines BuildAlgorithm enum and PiPNN parameter mapping (feature-gated). |
| diskann-disk/src/build/builder/build.rs | Integrates PiPNN into disk index build pipeline + phase timing output. |
| diskann-disk/Cargo.toml | Adds optional dependency + feature flag for diskann-pipnn and test deps. |
| diskann-benchmark/src/inputs/disk.rs | Adds build_algorithm to disk-index build job input model. |
| diskann-benchmark/src/backend/disk_index/build.rs | Passes build_algorithm into disk build parameters. |
| diskann-benchmark/Cargo.toml | Enables diskann-disk/pipnn and adds diskann-pipnn dependency. |
| Cargo.toml | Adds diskann-pipnn to workspace and workspace deps. |
| Cargo.lock | Records new crate and dependency graph changes. |
```rust
if actual_k < dists.len() {
    dists.select_nth_unstable_by(actual_k, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
```
brute_force_knn uses select_nth_unstable_by(actual_k, ...) and then truncates to actual_k. As with the other quickselect usage, pivoting at actual_k can exclude one of the true k-smallest elements (notably for k=1). Pivot at actual_k - 1 when actual_k > 0 before truncating, then sort/truncate.
Suggested change:

```rust
if actual_k > 0 && actual_k < dists.len() {
    dists.select_nth_unstable_by(actual_k - 1, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
```
```rust
if num_assign < buf.len() {
    buf.select_nth_unstable_by(num_assign, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
    });
}

let out = &mut assign_chunk[i * num_assign..(i + 1) * num_assign];
for k in 0..num_assign {
    out[k] = buf[k].0;
}
```
buf.select_nth_unstable_by(num_assign, ...) partitions around the (num_assign)-th element, but the code then reads buf[0..num_assign]. This can miss one of the true num_assign nearest leaders (e.g., fanout=1 should pivot at 0). Pivot at num_assign - 1 (when num_assign > 0) before taking the first num_assign entries.
```rust
group.bench_with_input(
    BenchmarkId::new("m_n_k", format!("{}x{}x{}", m, n, k)),
    &(m, n, k),
    |b_iter, &(m, k, _)| {
        b_iter.iter(|| {
            gemm::sgemm_abt(&a, m, k, &b, n, &mut result);
        });
    },
```
The benchmark input tuple is (m, n, k), but the closure destructures it as &(m, k, _). This drops n and re-binds k to the second element, so the GEMM dimensions passed to sgemm_abt are incorrect and the code likely won't compile due to unused/mismatched bindings. Destructure as &(m, n, k) and pass the correct values.
```rust
fn bench_build_leaf(c: &mut Criterion) {
    let mut group = c.benchmark_group("leaf_build/build_leaf");
    gemm::set_blas_threads(1);
```
gemm::set_blas_threads(1) is called here, but diskann_pipnn::gemm (src/gemm.rs) does not define set_blas_threads. This will fail to compile under --all-targets (benches included). Either remove these calls or add an implementation (likely in diskann-linalg) behind an appropriate feature/target gate.
```rust
|b, _| {
    b.iter(|| {
        leaf_build::build_leaf(&data, ndims, &indices, k, false);
    });
```
leaf_build::build_leaf takes a Metric as its last argument (see src/leaf_build.rs), but the benchmark passes false. This will not compile and also makes the benchmark ambiguous about the distance metric being measured. Pass an explicit diskann_vector::distance::Metric value instead.
```diff
 diskann-tools = { workspace = true }
-diskann-disk = { workspace = true, optional = true }
+diskann-disk = { workspace = true, optional = true, features = ["pipnn"] }
+diskann-pipnn = { workspace = true }
```
diskann-pipnn is added as an unconditional dependency, but it is not referenced from this crate (and diskann-disk is already optional behind the disk-index feature). This forces PiPNN to compile even when disk-index is disabled, increasing build times/binary size. Consider removing it or making it conditional (e.g., only pulled in via diskann-disk's pipnn feature under disk-index).
```rust
if actual_k < n {
    indices.select_nth_unstable_by(actual_k, |&a, &b| {
        let da = unsafe { *row.get_unchecked(a as usize) };
        let db = unsafe { *row.get_unchecked(b as usize) };
        da.partial_cmp(&db).unwrap_or(std::cmp::Ordering::Equal)
    });
}

for idx in 0..actual_k {
    let j = unsafe { *indices.get_unchecked(idx) } as usize;
    edges.push((i, j, row[j]));
}
```
select_nth_unstable_by(actual_k, ...) uses actual_k as the pivot index, but the code then reads indices[0..actual_k]. For k-nearest selection this can omit the true minimums (e.g., k=1 should pivot at 0, not 1). Use actual_k - 1 as the pivot (when actual_k > 0) and then take the first actual_k indices (optionally sorting that prefix if order matters).
```rust
if num_assign < buf.len() {
    buf.select_nth_unstable_by(num_assign, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
    });
}

let out = &mut assign_chunk[i * num_assign..(i + 1) * num_assign];
for k in 0..num_assign {
    out[k] = buf[k].0;
}
```
Same quickselect issue as above: buf.select_nth_unstable_by(num_assign, ...) followed by reading buf[0..num_assign] does not guarantee selecting the num_assign closest leaders. Use pivot index num_assign - 1 (when > 0), then take the first num_assign entries.
```rust
async fn build_inmem_index(&mut self, pool: &RayonThreadPool) -> ANNResult<()> {
    // Check for PiPNN algorithm
    #[cfg(feature = "pipnn")]
    if let BuildAlgorithm::PiPNN { .. } = self.disk_build_param.build_algorithm() {
```
build_algorithm() returns &BuildAlgorithm, but this if let BuildAlgorithm::PiPNN { .. } = ... pattern matches a by-value BuildAlgorithm and will not compile. Match on a reference (e.g., if matches!(self.disk_build_param.build_algorithm(), &BuildAlgorithm::PiPNN { .. })) or deref to a value if you change the accessor to return an owned value.
Suggested change:

```rust
if matches!(
    self.disk_build_param.build_algorithm(),
    &BuildAlgorithm::PiPNN { .. }
) {
```
```rust
#[cfg(not(feature = "pipnn"))]
if !matches!(
    self.disk_build_param.build_algorithm(),
    BuildAlgorithm::Vamana
```
Similarly, this matches!(self.disk_build_param.build_algorithm(), BuildAlgorithm::Vamana) is matching &BuildAlgorithm against a by-value pattern and will not compile under #[cfg(not(feature = "pipnn"))]. The pattern needs to match a reference (e.g., &BuildAlgorithm::Vamana).
Suggested change:

```rust
&BuildAlgorithm::Vamana
```
This is great! And thanks for the pointer.

On the index side: could you share the benchmark JSONs for the DiskANN baseline so we can replicate? As Mark suggests, the baseline seems slower than expected.

On the query side: please run queries on SSD. While we have a lot to learn from this paper, I want to be careful about disk-based graphs. DiskANN graphs are designed to have lower diameter, which is better for SSD. We would want to see the diameter of the graphs PiPNN generates. You could, for example, fix the degree and measure the number of points that are "n" hops from the start for n=1,2,3,...,10. An SSD-friendly index would have more nodes closer to the origin.

The prune method is very interesting; we should consider overall what we can learn and put into core DiskANN prune, and how to make it streaming.
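The suggested hop-count measurement is straightforward to sketch: BFS from the start node and histogram first-reach depths. Illustrative code, not part of the PR:

```rust
// Count how many points sit at each hop distance from the start node;
// an SSD-friendly graph concentrates mass at small hop counts.

use std::collections::VecDeque;

/// Returns counts[h] = number of nodes first reached at hop h.
fn hop_histogram(adj: &[Vec<usize>], start: usize, max_hops: usize) -> Vec<usize> {
    let mut depth = vec![usize::MAX; adj.len()];
    let mut counts = vec![0usize; max_hops + 1];
    let mut q = VecDeque::new();
    depth[start] = 0;
    counts[0] = 1;
    q.push_back(start);
    while let Some(u) = q.pop_front() {
        if depth[u] == max_hops {
            continue; // don't expand past the histogram horizon
        }
        for &v in &adj[u] {
            if depth[v] == usize::MAX {
                depth[v] = depth[u] + 1;
                counts[depth[v]] += 1;
                q.push_back(v);
            }
        }
    }
    counts
}

fn main() {
    // Path graph 0-1-2-3: exactly one node at each hop from node 0.
    let path = vec![vec![1], vec![0, 2], vec![1, 3], vec![2]];
    assert_eq!(hop_histogram(&path, 0, 3), vec![1, 1, 1, 1]);

    // Star graph: everything is one hop from the center.
    let star = vec![vec![1, 2, 3], vec![0], vec![0], vec![0]];
    assert_eq!(hop_histogram(&star, 0, 3), vec![1, 3, 0, 0]);
    println!("ok");
}
```

Running this from the medoid on both a Vamana and a PiPNN graph at the same fixed degree would give the diameter comparison the comment asks for.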
Summary
Adds PiPNN (Pick-in-Partitions Nearest Neighbors, arXiv:2602.21247) as an alternative graph construction algorithm for DiskANN's disk index.
PiPNN replaces Vamana's sequential greedy-insert with batch GEMM-based construction:
This makes the build embarrassingly parallel with no sequential point-to-point dependency.
Benchmark Results (8 threads, 1M points)
Build Performance
Search Quality (same disk index format — search path is identical)
Recall gap is tunable via parameters (replicas, l_max, leaf_k). SQ1 gap needs further tuning.
Memory
Peak build RSS is ~1.8 GB vs Vamana's ~1.2 GB (Mimir QG 1M FP). Optimized from initial 3.2 GB via:
- Generic build over T: VectorRepr (no upfront f32 copy; converts on the fly)
- Consuming extract_graph(self) to free reservoirs during extraction
- malloc_trim between build phases

Changes
- diskann-pipnn (~5700 lines) — builder, partition, leaf_build, hash_prune, gemm, quantize
- diskann-disk — PiPNN integration into the disk index build pipeline, build-phase timing output
- diskann-benchmark — build_algorithm: PiPNN config support
- pipnn feature flag — zero impact on existing Vamana builds

Test plan
- Tests in diskann-pipnn pass
- Peak memory measured with /usr/bin/time -v and /proc/self/status VmHWM

🤖 Generated with Claude Code