
Add PiPNN: high-performance alternative index builder (7.6-10x faster graph build)#856

Open
SeliMeli wants to merge 22 commits into microsoft:main from SeliMeli:pipnn

Conversation

@SeliMeli

Summary

Adds PiPNN (Pick-in-Partitions Nearest Neighbors, arXiv:2602.21247) as an alternative graph construction algorithm for DiskANN's disk index.

PiPNN replaces Vamana's sequential greedy-insert with batch GEMM-based construction:

  1. Partition — Random Ball Carving splits dataset into overlapping clusters
  2. Leaf Build — All-pairs distance via BLAS GEMM within each cluster, extract bidirectional k-NN
  3. HashPrune — LSH-based reservoir merges edges from overlapping partitions
  4. Final Prune — Optional RobustPrune pass (same alpha as Vamana)

This makes the build embarrassingly parallel with no sequential point-to-point dependency.
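As a rough illustration of this phase structure, here is a toy sketch (illustrative only — the real builder uses batched GEMM, LSH reservoirs, and RobustPrune; `partition` and `build_graph` are hypothetical names, with brute-force distances standing in for BLAS):

```rust
// Toy sketch of the PiPNN phase structure. Hypothetical simplification:
// leaders are given instead of sampled, and distances are brute force.
fn l2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum()
}

/// Phase 1 (Partition): assign each point to its `replicas` nearest
/// leaders, producing overlapping clusters.
fn partition(data: &[Vec<f32>], leaders: &[usize], replicas: usize) -> Vec<Vec<usize>> {
    let mut clusters = vec![Vec::new(); leaders.len()];
    for (i, p) in data.iter().enumerate() {
        let mut d: Vec<(usize, f32)> = leaders
            .iter()
            .enumerate()
            .map(|(c, &l)| (c, l2(p, &data[l])))
            .collect();
        d.sort_by(|a, b| a.1.total_cmp(&b.1));
        for &(c, _) in d.iter().take(replicas) {
            clusters[c].push(i);
        }
    }
    clusters
}

/// Phases 2+3 (Leaf Build + merge): all-pairs kNN inside each cluster,
/// with edges from overlapping clusters deduplicated on merge.
fn build_graph(data: &[Vec<f32>], clusters: &[Vec<usize>], k: usize) -> Vec<Vec<usize>> {
    let mut graph = vec![Vec::new(); data.len()];
    for members in clusters {
        for &i in members {
            let mut cand: Vec<(usize, f32)> = members
                .iter()
                .filter(|&&j| j != i)
                .map(|&j| (j, l2(&data[i], &data[j])))
                .collect();
            cand.sort_by(|a, b| a.1.total_cmp(&b.1));
            for &(j, _) in cand.iter().take(k) {
                if !graph[i].contains(&j) {
                    graph[i].push(j);
                }
            }
        }
    }
    graph
}

fn main() {
    let data: Vec<Vec<f32>> = vec![
        vec![0.0], vec![0.1], vec![0.2],
        vec![5.0], vec![5.1], vec![5.2],
    ];
    let clusters = partition(&data, &[0, 3], 1);
    let graph = build_graph(&data, &clusters, 2);
    println!("clusters: {:?}, graph: {:?}", clusters, graph);
}
```

Because each cluster is built independently, the per-cluster work has no point-to-point ordering constraint — hence the embarrassing parallelism.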

Benchmark Results (8 threads, 1M points)

Build Performance

| | Mimir QG 1M (384d, cosine) | Mimir QG 1M SQ1 (1-bit) | SIFT 1M (128d, L2) |
| --- | --- | --- | --- |
| Graph build speedup | 10x (201s → 20s) | 7.6x (136s → 18s) | 9.7x (73s → 7.5s) |
| End-to-end speedup | 5.4x (222s → 41s) | 4.2x (157s → 38s) | 6.3x (88s → 14s) |

Search Quality (same disk index format — search path is identical)

| Dataset | Metric | Vamana Recall | PiPNN Recall | Gap | Vamana QPS | PiPNN QPS |
| --- | --- | --- | --- | --- | --- | --- |
| Mimir QG 1M | R@1000 | 96.19% | 94.74% | -1.45% | 15.6 | 15.0 |
| Mimir QG 1M SQ1 | R@1000 | 95.75% | 90.15% | -5.60% | 15.3 | 15.9 |
| SIFT 1M (L=100) | R@10 | 99.50% | 98.53% | -0.97% | 258 | 159 |
| SIFT 1M (L=200) | R@10 | 99.85% | 99.59% | -0.26% | 153 | 147 |

Recall gap is tunable via parameters (replicas, l_max, leaf_k). SQ1 gap needs further tuning.
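For illustration, a PiPNN job in the benchmark JSON might carry these knobs (field names are a sketch based on the parameters above — the authoritative schema is in diskann-benchmark's disk input model):

```json
{
  "build_algorithm": {
    "PiPNN": {
      "replicas": 2,
      "l_max": 64,
      "leaf_k": 32,
      "alpha": 1.2
    }
  }
}
```

Raising replicas or leaf_k generally trades build time for recall by densifying the candidate edge set.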

Memory

Peak build RSS is ~1.8 GB vs Vamana's ~1.2 GB (Mimir QG 1M FP). Optimized from initial 3.2 GB via:

  • f16-native build (generic over T: VectorRepr, converts on-the-fly)
  • Smaller partition GEMM stripes (16 MB target)
  • Consuming extract_graph(self) to free reservoirs during extraction
  • malloc_trim between build phases
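The malloc_trim step amounts to the following (glibc-specific sketch; `release_freed_pages` is an illustrative name, not project code):

```rust
// glibc-only: malloc_trim walks the allocator's arenas and returns
// fully-freed pages to the OS. On other libcs this symbol may not exist.
extern "C" {
    fn malloc_trim(pad: usize) -> i32;
}

/// Returns true if any memory was actually released to the system.
fn release_freed_pages() -> bool {
    unsafe { malloc_trim(0) == 1 }
}

fn main() {
    let big = vec![0u8; 64 << 20]; // stand-in for a build-phase buffer
    drop(big);
    // After dropping large buffers, glibc may retain the pages in its
    // per-thread arenas; trimming returns them so peak RSS reflects
    // live data rather than freed-but-retained heap.
    println!("pages released: {}", release_freed_pages());
}
```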

Changes

  • New crate: diskann-pipnn (~5700 lines) — builder, partition, leaf_build, hash_prune, gemm, quantize
  • diskann-disk — PiPNN integration into disk index build pipeline, build phase timing output
  • diskann-benchmark — build_algorithm: PiPNN config support
  • Feature-gated behind pipnn — zero impact on existing Vamana builds

Test plan

  • 102 unit tests in diskann-pipnn pass
  • Mimir QG 1M FP benchmark: recall 94.74%, no regression vs prior runs
  • Mimir QG 1M SQ1 benchmark: recall 90.15%
  • SIFT 1M benchmark: recall 98.53% (L=100), 99.59% (L=200)
  • Build time verified across 3 datasets, 3 configurations
  • Peak RSS profiled with /usr/bin/time -v and /proc/self/status VmHWM
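The VmHWM probe used above can be reproduced with a few lines (Linux-only sketch; `peak_rss_kb` is an illustrative helper):

```rust
use std::fs;

/// Parse the kernel's peak resident set size (VmHWM, in kB) from
/// /proc/self/status. Linux-only.
fn peak_rss_kb() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    status
        .lines()
        .find(|l| l.starts_with("VmHWM:"))
        .and_then(|l| l.split_whitespace().nth(1))
        .and_then(|v| v.parse().ok())
}

fn main() {
    println!("peak RSS: {:?} kB", peak_rss_kb());
}
```

Unlike sampling current RSS, VmHWM is tracked by the kernel, so it catches short-lived spikes (such as concurrent GEMM buffer allocation) that polling would miss.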

🤖 Generated with Claude Code

WeiyaoLuo and others added 22 commits March 17, 2026 11:39
Implements the PiPNN algorithm (arXiv:2602.21247) as a new graph index
builder for DiskANN. PiPNN replaces incremental beam-search insertion
with partition-then-build using GEMM-based all-pairs distance and
LSH-based HashPrune edge merging.

Key components:
- Randomized Ball Carving partitioning with fused GEMM + assignment
- GEMM-based leaf building with bi-directed k-NN
- HashPrune with per-point Mutex reservoirs for parallel edge merging
- DiskANN-compatible graph output format

Achieves 11.2x build speedup on SIFT-1M (128d) and 3.1x on higher-
dimensional datasets while maintaining equivalent graph quality.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add thread-local LeafBuffers to avoid repeated allocation of distance
  matrices, dot products, and local data arrays during leaf building.
  Reduces leaf+merge wall time by 15% on 384d data.
- Add fp16 input support (auto-converts to f32 on load)
- Add cosine_normalized distance metric support
- Add --save-path to write DiskANN-compatible graph file
- Remove jemalloc (slower for this workload)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace matrixmultiply with cblas_sgemm (OpenBLAS) for the leaf build
and partition GEMM kernels. OpenBLAS has highly optimized AVX2 micro-
kernels for AMD EPYC that outperform matrixmultiply on high-dimensional
data.

Results (384d Enron): 23.8s -> 21.0s (12% faster)
Results (128d SIFT): 8.4s -> 7.7s (8% faster)

Requires libopenblas-dev: apt install libopenblas-dev

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Skip recursive partition for clusters at depth >= 2 that are < 3x c_max
  (force-split is cheaper than another full GEMM + assignment cycle)
- Smaller c_max (512) works better for 384d since leaf GEMM scales O(d)
- Fix test signatures for cosine parameter

Results:
  SIFT-1M (128d): 8.0s build = 10.2x speedup, recall@10=0.986 (L=100)
  Enron (384d):  15.2s build = 5.1x speedup, recall@1000=0.949 (L=2000)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add sub-phase timing to parallel_partition for profiling.
Update README with accurate final benchmark numbers:
  SIFT-1M: 8.0s (10.2x), recall@10=0.985
  Enron:  15.2s (5.1x),  recall@1000=0.949

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Batched edge insertion: sort edges by source point, acquire each
  lock once per unique source (cleaner than per-edge locking)
- Add dynamic set_blas_threads() API (not currently used for partition
  since multi-threaded BLAS is slower for tall-thin matrices)
- Remove unused partition_assign_impl wrapper
- Remove dead RP code

Tested: SIFT-1M 7.9s (10.3x), Enron 15.0s (5.2x), recall unchanged
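The batched edge insertion described here can be sketched as follows (assumed simplification of the per-point Mutex reservoirs; names illustrative):

```rust
use std::sync::Mutex;

/// Insert (src, dst, dist) edges into per-point reservoirs. Edges are
/// sorted by source so each reservoir lock is acquired once per run of
/// equal sources instead of once per edge.
fn insert_batched(reservoirs: &[Mutex<Vec<(u32, f32)>>], mut edges: Vec<(u32, u32, f32)>) {
    edges.sort_unstable_by_key(|e| e.0);
    let mut i = 0;
    while i < edges.len() {
        let src = edges[i].0;
        let mut guard = reservoirs[src as usize].lock().unwrap();
        // Drain the whole run of edges sharing this source under one lock.
        while i < edges.len() && edges[i].0 == src {
            let (_, dst, dist) = edges[i];
            guard.push((dst, dist));
            i += 1;
        }
    }
}

fn main() {
    let reservoirs: Vec<Mutex<Vec<(u32, f32)>>> =
        (0..3).map(|_| Mutex::new(Vec::new())).collect();
    insert_batched(&reservoirs, vec![(2, 0, 1.0), (0, 1, 0.5), (0, 2, 0.7)]);
    println!("reservoir 0: {:?}", reservoirs[0].lock().unwrap());
}
```

With k edges per source, this reduces lock operations roughly k-fold, which is where the "10x fewer lock acquisitions" in the later profiling commit comes from.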

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Callgrind/cachegrind profiling revealed instruction breakdown:
  42% sgemm_kernel (compute — irreducible)
   7% memset (zeroing buffers — partially eliminated)
   8% hash_prune (lock + insert)
   5% quicksort (edge sorting — kept: reduces lock ops 10x)
   3% malloc/free

Applied optimizations:
- Cosine distance: convert dot->distance in-place, eliminating one
  n*n buffer allocation + memset per leaf (saves 6.8% of instructions)
- Sorted edge batching restored (10x fewer lock acquisitions)
- Dynamic BLAS threading API added (not used: tall-thin matrices
  don't benefit from multi-threaded BLAS)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reuses DiskANN's scalar quantization approach: train per-dimension
shift/scale, pack vectors to 1 bit/dim, use Hamming distance for both
partition assignment and leaf building.

Two modes:
  --quantize-bits 1: full quantized (partition + leaf use Hamming)
  Without flag: full precision (original GEMM-based approach)

Enron 384d results vs Vamana 1-bit baseline (28.8s, recall 0.958):
  PiPNN 1bit-leaf:  12.5s (2.3x faster), recall 0.945
  PiPNN 1bit-full:  10.2s (2.8x faster), recall 0.932

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Blocked all-pairs Hamming distance matrix (L1 cache friendly)
- u64 XOR+popcount fast path (8 bytes/op instead of 1)
- Pre-extract leader/point data into contiguous arrays
- Larger partition stripes (32K) for quantized path

Enron 384d 1-bit: 10.2s -> 8.5s (17% faster), 3.4x vs Vamana 1-bit
SIFT FP: unchanged at 7.9s (1-bit not beneficial for 128d)
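The u64 XOR+popcount fast path boils down to this (standalone sketch, not the project's exact kernel):

```rust
/// Reference path: Hamming distance one byte at a time.
fn hamming_bytes(a: &[u8], b: &[u8]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

/// Fast path: XOR + popcount over u64 words, processing 8 bytes per
/// operation. count_ones() lowers to a single popcnt on x86-64.
fn hamming_u64(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    // 384 dims at 1 bit/dim = 48 bytes = six u64 words per vector.
    println!("d = {}", hamming_u64(&[0b1011, 0], &[0b0001, 0]));
}
```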

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Inline XOR+popcount in distance matrix and partition to eliminate
function call overhead (was 38% of 1-bit instructions per cachegrind).

Reverted pre-sorted edge experiment — per-source Vec construction
and dedup overhead exceeded the sort savings.

Final stable results:
  SIFT FP:       7.9s (10.3x vs Vamana)
  Enron FP:     15.5s (5.0x vs Vamana FP)
  Enron 1-bit:   8.3s (3.5x vs Vamana 1-bit)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Eliminate per-point Vec allocation in extract_knn by reusing a single
buffer. Use u32 instead of usize for index storage (halves memory).

SIFT: 7.7s (was 7.9s, 3% faster)
Others: within variance

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Integrate PiPNN as an alternative graph construction algorithm selectable
via BuildAlgorithm::PiPNN in the disk-index build config. Key changes:

- Add BuildAlgorithm enum to diskann-disk with pipnn feature flag
- Wire PiPNN through benchmark JSON config and diskann-tools API
- Use DiskANN's Distance<f32,f32> functor for all metrics (SIMD)
- Use DiskANN's ScalarQuantizer for 1-bit SQ (no duplicate training)
- Reuse DiskANN's alpha, medoid (L2), and graph format
- Remove standalone pipnn-bench binary (use diskann-benchmark)
- Production hardening: tracing, Result types, config validation,
  aligned quantized storage, mutex poison recovery
- 111 tests (102 pipnn + 9 build_algorithm), 23 criterion benchmarks
- Regression verified: SIFT recall@10=98.3%, Enron recall@1000=94.9%

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…sort

Switch GEMM backend from OpenBLAS (cblas_sgemm) to diskann-linalg (faer)
to align with DiskANN's linalg choice and remove the C library dependency.

Optimize extract_knn by sorting 4-byte u32 indices instead of 8-byte
(u32, f32) pairs, reducing memory movement during quickselect (~1.5x
faster kNN extraction, ~18% faster full pipeline on Enron fp16 1M).

- Remove cblas-sys, matrixmultiply deps; add diskann-linalg
- Delete build.rs (no more OpenBLAS linker search)
- Remove dead gemm_aat/gemm_abt functions from leaf_build and hash_prune
- Remove set_blas_threads/openblas_set_num_threads FFI

Enron fp16 regression (PiPNN FP, k=5, 8 threads):
  Build: 53.8s → 44.3s (-17.6%), Recall: 94.9% (unchanged), QPS: 16.5 → 16.7
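The index-only selection described here can be sketched like this (illustrative helper; selects and moves 4-byte u32 indices keyed into the distance row instead of 8-byte (u32, f32) pairs):

```rust
/// Return the k nearest column indices for one distance-matrix row,
/// sorting only u32 indices keyed by the row's distances.
fn knn_indices(row: &[f32], k: usize) -> Vec<u32> {
    let mut idx: Vec<u32> = (0..row.len() as u32).collect();
    let k = k.min(idx.len());
    if k > 0 && k < idx.len() {
        // Quickselect on indices: after pivoting at k-1, idx[0..k]
        // holds the k smallest distances (in arbitrary order).
        idx.select_nth_unstable_by(k - 1, |&a, &b| {
            row[a as usize].total_cmp(&row[b as usize])
        });
    }
    idx.truncate(k);
    idx.sort_unstable_by(|&a, &b| row[a as usize].total_cmp(&row[b as usize]));
    idx
}

fn main() {
    println!("{:?}", knn_indices(&[3.0, 1.0, 2.0, 0.5], 2));
}
```

Halving the element size halves the memory moved during partitioning, which is where the ~1.5x faster extraction comes from.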

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…uncapped leaders

- Pack ReservoirEntry to 8 bytes (was 12) using bf16 distance (u16):
  4B neighbor + 2B hash + 2B bf16 distance = 8 bytes, zero padding.
  bf16 bit patterns are monotonic for non-negative values, enabling
  integer comparison instead of f32 partial_cmp.

- Merge undersized partition clusters into nearest large cluster by
  centroid L2 distance instead of by size (matches paper Algorithm 2).

- Remove hardcoded 1000 leader cap; let p_samp control leader count
  directly. Add adaptive stripe sizing in partition_assign to bound
  per-stripe GEMM memory to ~64 MB regardless of leader count.

- Lower default p_samp from 0.05 to 0.005 (practical for 1M+ scale).

Enron fp16 (1,087,932 pts, 384d, cosine_normalized):
  Build: 43-46s, Recall@1000: 94.8-94.9%, QPS: 16-17
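The 8-byte packing and the bf16 monotonicity trick can be checked in isolation (field names illustrative, not the project's actual struct):

```rust
/// Hypothetical 8-byte reservoir entry: 4 B neighbor id + 2 B LSH hash
/// + 2 B bf16 distance, with zero padding under #[repr(C)].
#[allow(dead_code)]
#[repr(C)]
struct ReservoirEntry {
    neighbor: u32,
    hash: u16,
    dist_bf16: u16,
}

/// bf16 by truncation: keep the top 16 bits of the f32 bit pattern.
/// For non-negative floats, IEEE-754 bit patterns increase with value,
/// so packed distances compare correctly as plain u16 integers.
fn to_bf16(x: f32) -> u16 {
    (x.to_bits() >> 16) as u16
}

fn main() {
    println!(
        "entry size = {} B, bf16(0.5) = {:#06x}",
        std::mem::size_of::<ReservoirEntry>(),
        to_bf16(0.5)
    );
}
```

Integer comparison avoids the `partial_cmp` branch per candidate, and the 12→8 byte shrink cuts reservoir memory by a third.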

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Replace use_cosine bool with Metric enum in partition_assign and
  build_leaf. Each metric variant (L2, Cosine, CosineNormalized,
  InnerProduct) now has explicit distance computation — no catch-all
  _ => branches.

- Cosine (unnormalized) now correctly normalizes by ||a||*||b|| in
  both partition and leaf build, instead of treating it the same as
  CosineNormalized.

- Remove dead compute_distance_matrix and compute_distance_matrix_direct
  functions. The GEMM-based path in build_leaf_with_buffers is the
  single source of truth for leaf distance computation.

- Partition now receives metric via PartitionConfig and uses it for
  leader assignment distances (was always L2 before).

Enron fp16 (cosine_normalized): Build 42.9s, Recall 94.7%, QPS 17.3

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Change extract_graph(&self) to extract_graph(self) so that sketches
(~50 MB) are dropped before extraction and each reservoir is freed
immediately after its neighbors are extracted via into_par_iter().
This prevents ~1 GB of reservoir + sketch memory from staying alive
during final_prune.

Benchmark (Enron 1M, 384d, cosine_normalized): no regression.
Build 45.1s, Recall@1000 94.74%, Peak RSS 3.05 GB (was 3.2 GB).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
RSS profiling showed ~600 MB of freed-but-retained memory after PiPNN
build completes, inflating peak RSS for downstream disk layout/search.

Root causes identified via /proc/self/statm instrumentation:
1. Partition GEMM buffers freed but held in glibc per-thread arenas
2. Thread-local LeafBuffers pinning arena heap segments
3. Freed reservoir memory not returned to OS

Fixes:
- Call malloc_trim(0) after partition, extraction, and build completion
  to return freed pages across all glibc arenas
- Release thread-local LeafBuffers after leaf build so arena segments
  containing freed reservoir data can be reclaimed
- (Prior commit) Consuming extract_graph(self) frees reservoirs during
  extraction via into_par_iter

Benchmark (Enron 1M, 384d, cosine_normalized): no regression.
Build 44.3s, Recall@1000 94.744%, QPS 17.2, Peak RSS 3.21 GB.
PiPNN completion RSS reduced ~140 MB (2337 vs 2478 MB).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After PiPNN build completes, the f32 data (~1.6 GB) and graph are freed
but glibc retains the pages. malloc_trim(0) returns them to the OS,
dropping RSS from ~2.4 GB to ~129 MB before disk layout starts.

This prevents the freed build memory from inflating RSS during the
subsequent disk layout and search phases.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
VmHWM profiling (kernel peak RSS tracker) revealed the true peak occurs
during partition_assign_impl, when 8 concurrent stripes each allocate
~90 MB of GEMM buffers (p_data + dots), spiking RSS by ~1.4 GB.

Fix: reduce per-stripe GEMM output target from 64 MB to 16 MB.
Each stripe now uses ~22 MB instead of ~90 MB. With 8 threads,
concurrent GEMM memory drops from ~720 MB to ~180 MB.

Partition is <5% of total build time, so throughput cost is negligible.

Build-only peak RSS: 3.19 GB -> 2.58 GB (-19%).
Benchmark: Build 44.0s, Recall@1000 94.744%, Peak RSS 2.61 GB.
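The stripe-sizing arithmetic behind this change can be sketched as (hypothetical helper; assumes the dominant allocation is the f32 GEMM output of rows × leaders):

```rust
/// Rows per partition-assignment stripe such that the f32 GEMM output
/// (rows x num_leaders x 4 bytes) stays within `budget_bytes`.
fn stripe_rows(num_leaders: usize, budget_bytes: usize) -> usize {
    (budget_bytes / (num_leaders * std::mem::size_of::<f32>())).max(1)
}

fn main() {
    // A 16 MB budget with ~5000 leaders allows ~838 rows per stripe;
    // 8 concurrent stripes then bound GEMM memory to ~8 x budget.
    println!("rows per stripe: {}", stripe_rows(5000, 16 << 20));
}
```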

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Make build_internal, partition, leaf_build, hash_prune sketches,
find_medoid, and final_prune generic over T: VectorRepr instead of
requiring &[f32]. Each component already copies data into local buffers
(partition stripes, LeafBuffers, etc.), so T->f32 conversion happens
naturally during those copies with zero extra allocation.

build_typed<T> no longer allocates a full f32 copy upfront — it passes
&[T] directly through the build pipeline. For f32 data, as_f32_into is
a memcpy (zero overhead). For f16, it converts on-the-fly, saving the
793 MB f32 copy that was the single largest contributor to peak RSS.

diskann-disk's FP build path now loads data as native T and calls
build_typed directly. SQ path retains f32 conversion (quantizer needs it).

Build-only peak RSS: 2.58 GB -> 1.77 GB (-31%).
Full benchmark: Build 41.5s, Recall@1000 94.744%, Peak RSS 1.80 GB (-44%).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add PiPNNBuildStats struct with per-phase timing (sketches, partition,
leaf build, graph extraction, final prune). Printed to stdout during
benchmark so users can see the timing breakdown like Vamana does.

Example output:
  PiPNN Build Timing
    LSH sketches:   0.422s
    Partition:      4.044s  (22936 leaves)
    Leaf build:     6.996s  (71065098 edges)
    Graph extract:  0.466s
    Final prune:    0.035s
    Total:          12.284s

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…thms

The PerfLogger checkpoints go to tracing (OpenTelemetry spans) which are
not visible in benchmark stdout. Add println! for the three outer phases
(PQ compression, graph build, disk layout) so both Vamana and PiPNN
show timing breakdown alongside the existing "Build time: Xs" line.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@SeliMeli SeliMeli requested review from a team and Copilot March 20, 2026 01:45
@microsoft-github-policy-service

@SeliMeli please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"

Copilot AI (Contributor) left a comment

Pull request overview

Adds PiPNN as a new (feature-gated) disk-index graph construction algorithm, integrating it into the disk index build pipeline and benchmark/tooling so builders can choose between Vamana and PiPNN.

Changes:

  • Introduce new diskann-pipnn crate implementing partitioning, leaf-build, HashPrune merging, and optional final pruning.
  • Add BuildAlgorithm to disk build parameters/config and wire it through diskann-disk, diskann-tools, and diskann-benchmark.
  • Add build-phase timing output and PiPNN-specific 1-bit SQ reuse path in the disk index builder.

Reviewed changes

Copilot reviewed 24 out of 25 changed files in this pull request and generated 12 comments.

Show a summary per file
| File | Description |
| --- | --- |
| diskann-tools/src/utils/build_disk_index.rs | Adds build_algorithm to tool-level disk index build parameters and passes it into disk build configuration. |
| diskann-tools/Cargo.toml | Enables diskann-disk’s pipnn feature for tools. |
| diskann-providers/src/model/graph/provider/async_/inmem/scalar.rs | Exposes quantizer() accessor to reuse trained SQ params for PiPNN. |
| diskann-pipnn/src/lib.rs | Defines PiPNN public API, config, error types, and metric serde. |
| diskann-pipnn/src/builder.rs | Implements end-to-end PiPNN build orchestration + optional final prune + graph save format. |
| diskann-pipnn/src/partition.rs | Implements RBC partitioning (float + quantized variants) and associated tests. |
| diskann-pipnn/src/leaf_build.rs | Implements GEMM-based all-pairs distances, kNN extraction, and edge materialization. |
| diskann-pipnn/src/hash_prune.rs | Implements LSH sketching + per-point reservoir HashPrune and extraction. |
| diskann-pipnn/src/gemm.rs | Wraps diskann-linalg GEMM calls used by PiPNN. |
| diskann-pipnn/src/quantize.rs | Implements 1-bit quantization + Hamming-distance helpers and tests. |
| diskann-pipnn/benches/pipnn_bench.rs | Adds Criterion benchmarks for PiPNN hot paths. |
| diskann-pipnn/README.md | Documents PiPNN usage, parameters, and expected performance. |
| diskann-pipnn/Cargo.toml | Adds the new crate manifest + deps. |
| diskann-disk/src/lib.rs | Re-exports BuildAlgorithm from build configuration. |
| diskann-disk/src/build/mod.rs | Re-exports BuildAlgorithm for consumers. |
| diskann-disk/src/build/configuration/mod.rs | Adds build_algorithm module and re-export. |
| diskann-disk/src/build/configuration/disk_index_build_parameter.rs | Stores selected build algorithm in disk build parameters and exposes accessor. |
| diskann-disk/src/build/configuration/build_algorithm.rs | Defines BuildAlgorithm enum and PiPNN parameter mapping (feature-gated). |
| diskann-disk/src/build/builder/build.rs | Integrates PiPNN into disk index build pipeline + phase timing output. |
| diskann-disk/Cargo.toml | Adds optional dependency + feature flag for diskann-pipnn and test deps. |
| diskann-benchmark/src/inputs/disk.rs | Adds build_algorithm to disk-index build job input model. |
| diskann-benchmark/src/backend/disk_index/build.rs | Passes build_algorithm into disk build parameters. |
| diskann-benchmark/Cargo.toml | Enables diskann-disk/pipnn and adds diskann-pipnn dependency. |
| Cargo.toml | Adds diskann-pipnn to workspace and workspace deps. |
| Cargo.lock | Records new crate and dependency graph changes. |


Comment on lines +305 to +307

```rust
if actual_k < dists.len() {
    dists.select_nth_unstable_by(actual_k, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
```

Copilot AI Mar 20, 2026

brute_force_knn uses select_nth_unstable_by(actual_k, ...) and then truncates to actual_k. As with the other quickselect usage, pivoting at actual_k can exclude one of the true k-smallest elements (notably for k=1). Pivot at actual_k - 1 when actual_k > 0 before truncating, then sort/truncate.

Suggested change

```diff
-if actual_k < dists.len() {
-    dists.select_nth_unstable_by(actual_k, |a, b| {
-        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
+if actual_k > 0 && actual_k < dists.len() {
+    dists.select_nth_unstable_by(actual_k - 1, |a, b| {
+        a.1
+            .partial_cmp(&b.1)
+            .unwrap_or(std::cmp::Ordering::Equal)
```

Copilot uses AI. Check for mistakes.
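As a standalone reference for the semantics at issue (not project code), `select_nth_unstable`'s documented post-condition — the pivot element lands at its sorted position, with everything before it less than or equal — can be checked directly:

```rust
fn main() {
    let mut v = vec![5, 1, 4, 2, 3];
    let k = 2;
    // select_nth_unstable(k) places the k-th order statistic at index k;
    // every element of the prefix is <= that pivot element.
    let (before, kth, _) = v.select_nth_unstable(k);
    assert!(before.iter().all(|x| *x <= *kth));
    println!("prefix = {:?}, pivot = {}", before, kth);
}
```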
Comment on lines +107 to +116

```rust
if num_assign < buf.len() {
    buf.select_nth_unstable_by(num_assign, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
    });
}

let out = &mut assign_chunk[i * num_assign..(i + 1) * num_assign];
for k in 0..num_assign {
    out[k] = buf[k].0;
}
```

Copilot AI Mar 20, 2026

buf.select_nth_unstable_by(num_assign, ...) partitions around the (num_assign)-th element, but the code then reads buf[0..num_assign]. This can miss one of the true num_assign nearest leaders (e.g., fanout=1 should pivot at 0). Pivot at num_assign - 1 (when num_assign > 0) before taking the first num_assign entries.

Comment on lines +61 to +68

```rust
group.bench_with_input(
    BenchmarkId::new("m_n_k", format!("{}x{}x{}", m, n, k)),
    &(m, n, k),
    |b_iter, &(m, k, _)| {
        b_iter.iter(|| {
            gemm::sgemm_abt(&a, m, k, &b, n, &mut result);
        });
    },
```

Copilot AI Mar 20, 2026

The benchmark input tuple is (m, n, k), but the closure destructures it as &(m, k, _). This drops n and re-binds k to the second element, so the GEMM dimensions passed to sgemm_abt are incorrect and the code likely won't compile due to unused/mismatched bindings. Destructure as &(m, n, k) and pass the correct values.


```rust
fn bench_build_leaf(c: &mut Criterion) {
    let mut group = c.benchmark_group("leaf_build/build_leaf");
    gemm::set_blas_threads(1);
```

Copilot AI Mar 20, 2026

gemm::set_blas_threads(1) is called here, but diskann_pipnn::gemm (src/gemm.rs) does not define set_blas_threads. This will fail to compile under --all-targets (benches included). Either remove these calls or add an implementation (likely in diskann-linalg) behind an appropriate feature/target gate.

Suggested change

```diff
-    gemm::set_blas_threads(1);
```

Comment on lines +131 to +134

```rust
|b, _| {
    b.iter(|| {
        leaf_build::build_leaf(&data, ndims, &indices, k, false);
    });
```

Copilot AI Mar 20, 2026

leaf_build::build_leaf takes a Metric as its last argument (see src/leaf_build.rs), but the benchmark passes false. This will not compile and also makes the benchmark ambiguous about the distance metric being measured. Pass an explicit diskann_vector::distance::Metric value instead.

```diff
 diskann-tools = { workspace = true }
-diskann-disk = { workspace = true, optional = true }
+diskann-disk = { workspace = true, optional = true, features = ["pipnn"] }
+diskann-pipnn = { workspace = true }
```

Copilot AI Mar 20, 2026

diskann-pipnn is added as an unconditional dependency, but it is not referenced from this crate (and diskann-disk is already optional behind the disk-index feature). This forces PiPNN to compile even when disk-index is disabled, increasing build times/binary size. Consider removing it or making it conditional (e.g., only pulled in via diskann-disk's pipnn feature under disk-index).

Suggested change

```diff
-diskann-pipnn = { workspace = true }
```

Comment on lines +108 to +119

```rust
if actual_k < n {
    indices.select_nth_unstable_by(actual_k, |&a, &b| {
        let da = unsafe { *row.get_unchecked(a as usize) };
        let db = unsafe { *row.get_unchecked(b as usize) };
        da.partial_cmp(&db).unwrap_or(std::cmp::Ordering::Equal)
    });
}

for idx in 0..actual_k {
    let j = unsafe { *indices.get_unchecked(idx) } as usize;
    edges.push((i, j, row[j]));
}
```

Copilot AI Mar 20, 2026

select_nth_unstable_by(actual_k, ...) uses actual_k as the pivot index, but the code then reads indices[0..actual_k]. For k-nearest selection this can omit the true minimums (e.g., k=1 should pivot at 0, not 1). Use actual_k - 1 as the pivot (when actual_k > 0) and then take the first actual_k indices (optionally sorting that prefix if order matters).

Comment on lines +261 to +270

```rust
if num_assign < buf.len() {
    buf.select_nth_unstable_by(num_assign, |a, b| {
        a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal)
    });
}

let out = &mut assign_chunk[i * num_assign..(i + 1) * num_assign];
for k in 0..num_assign {
    out[k] = buf[k].0;
}
```

Copilot AI Mar 20, 2026

Same quickselect issue as above: buf.select_nth_unstable_by(num_assign, ...) followed by reading buf[0..num_assign] does not guarantee selecting the num_assign closest leaders. Use pivot index num_assign - 1 (when > 0), then take the first num_assign entries.

```rust
async fn build_inmem_index(&mut self, pool: &RayonThreadPool) -> ANNResult<()> {
    // Check for PiPNN algorithm
    #[cfg(feature = "pipnn")]
    if let BuildAlgorithm::PiPNN { .. } = self.disk_build_param.build_algorithm() {
```

Copilot AI Mar 20, 2026

build_algorithm() returns &BuildAlgorithm, but this if let BuildAlgorithm::PiPNN { .. } = ... pattern matches a by-value BuildAlgorithm and will not compile. Match on a reference (e.g., if matches!(self.disk_build_param.build_algorithm(), &BuildAlgorithm::PiPNN { .. })) or deref to a value if you change the accessor to return an owned value.

Suggested change

```diff
-    if let BuildAlgorithm::PiPNN { .. } = self.disk_build_param.build_algorithm() {
+    if matches!(
+        self.disk_build_param.build_algorithm(),
+        &BuildAlgorithm::PiPNN { .. }
+    ) {
```

```rust
#[cfg(not(feature = "pipnn"))]
if !matches!(
    self.disk_build_param.build_algorithm(),
    BuildAlgorithm::Vamana
```

Copilot AI Mar 20, 2026

Similarly, this matches!(self.disk_build_param.build_algorithm(), BuildAlgorithm::Vamana) is matching &BuildAlgorithm against a by-value pattern and will not compile under #[cfg(not(feature = "pipnn"))]. The pattern needs to match a reference (e.g., &BuildAlgorithm::Vamana).

Suggested change

```diff
-    BuildAlgorithm::Vamana
+    &BuildAlgorithm::Vamana
```

@harsha-simhadri
Contributor

This is great! And thanks for the pointer.

On the index side: could you share the benchmark JSONs for the DiskANN baseline so we can replicate? As Mark suggests, the baseline seems slower than expected.

On the query side: please run queries on SSD. While we have a lot to learn from this paper, I want to be careful about disk-based graphs. DiskANN graphs are designed to have lower diameter, which is better for SSD. We would want to see the diameter of the graphs PiPNN generates. You could, for example, measure the number of points that are n hops from the start point for n = 1, 2, 3, …, 10, at a fixed degree. An SSD-friendly index would have more nodes close to the origin.

The prune method is very interesting; we should consider overall what we can learn and fold into core DiskANN's prune, and how to make it streaming.
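The n-hop probe suggested here can be sketched with a plain BFS (standalone illustration — a real measurement would load the graph from the built index and average over several start points):

```rust
use std::collections::VecDeque;

/// Count how many nodes lie at each hop distance from `start`,
/// up to `max_hops`. An SSD-friendly graph front-loads these counts.
fn nodes_per_hop(graph: &[Vec<usize>], start: usize, max_hops: usize) -> Vec<usize> {
    let mut dist = vec![usize::MAX; graph.len()];
    let mut counts = vec![0usize; max_hops + 1];
    let mut q = VecDeque::new();
    dist[start] = 0;
    counts[0] = 1;
    q.push_back(start);
    while let Some(u) = q.pop_front() {
        if dist[u] == max_hops {
            continue;
        }
        for &v in &graph[u] {
            if dist[v] == usize::MAX {
                dist[v] = dist[u] + 1;
                counts[dist[v]] += 1;
                q.push_back(v);
            }
        }
    }
    counts
}

fn main() {
    // Path graph 0-1-2-3: exactly one new node reached per hop.
    let path = vec![vec![1], vec![0, 2], vec![1, 3], vec![2]];
    println!("{:?}", nodes_per_hop(&path, 0, 3));
}
```

Comparing these per-hop counts between a Vamana-built and a PiPNN-built graph at matched degree would directly answer the diameter question.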
