Getting Started with bigANNOY

bigANNOY is an approximate nearest-neighbour package for bigmemory::big.matrix data. It builds a persisted Annoy index from a reference matrix, searches that index with either self-search or external queries, and returns results in a shape aligned with bigKNN.

This vignette walks through the first workflow most users need:

  1. create a small reference matrix
  2. build an index on disk
  3. run self-search and external-query search
  4. inspect the returned neighbours and distances
  5. reopen and validate the index in a later step

The examples are intentionally small, but the same API is designed for larger file-backed big.matrix inputs.

Load the Packages

library(bigANNOY)
library(bigmemory)

Create a Small Reference Matrix

bigANNOY is built around bigmemory::big.matrix, so we will start from a dense matrix and convert it into a big.matrix.

ref_dense <- matrix(
  c(
    0.0, 0.1, 0.2, 0.3,
    0.1, 0.0, 0.1, 0.2,
    0.2, 0.1, 0.0, 0.1,
    1.0, 1.1, 1.2, 1.3,
    1.1, 1.0, 1.1, 1.2,
    1.2, 1.1, 1.0, 1.1,
    3.0, 3.1, 3.2, 3.3,
    3.1, 3.0, 3.1, 3.2
  ),
  ncol = 4,
  byrow = TRUE
)

ref_big <- as.big.matrix(ref_dense)
dim(ref_big)
#> [1] 8 4

The reference matrix has 8 rows and 4 columns. Each row is a candidate neighbour in the final search results.

Build the First Annoy Index

annoy_build_bigmatrix() streams the reference rows into a persisted Annoy index and writes a sidecar metadata file next to it.

index_path <- tempfile(fileext = ".ann")

index <- annoy_build_bigmatrix(
  ref_big,
  path = index_path,
  n_trees = 20L,
  metric = "euclidean",
  seed = 123L,
  load_mode = "lazy"
)

index
#> <bigannoy_index>
#>   path: /var/folders/h9/npmqbtmx4wlblg4wks47yj5c0000gn/T//RtmpBEyDSE/fileb1f61d3b7074.ann
#>   metadata: /var/folders/h9/npmqbtmx4wlblg4wks47yj5c0000gn/T//RtmpBEyDSE/fileb1f61d3b7074.ann.meta
#>   index_id: annoy-20260327203933-d4a614637b45
#>   metric: euclidean
#>   trees: 20
#>   items: 8
#>   dimension: 4
#>   build_seed: 123
#>   build_threads: -1
#>   build_backend: cpp
#>   load_mode: lazy
#>   loaded: FALSE
#>   file_size: 2816
#>   file_md5: d4a614637b45839a9eb126d130a96397
#>   prefault: FALSE

A few details are worth noticing:

  - The index file is persisted at path, with a .meta sidecar written next to it that records the build metadata shown above.
  - Because load_mode = "lazy" was requested, the index is not loaded into memory yet: loaded is FALSE.
  - The recorded items (8) and dimension (4) match the reference matrix, which is a quick sanity check after a build.

You can check the current loaded state directly.

annoy_is_loaded(index)
#> [1] FALSE

With query = NULL, annoy_search_bigmatrix() searches the indexed reference rows against themselves. In self-search mode, the nearest neighbour for each row is another row, not the row itself.

self_result <- annoy_search_bigmatrix(
  index,
  k = 2L,
  search_k = 100L
)

self_result$index
#>      [,1] [,2]
#> [1,]    2    3
#> [2,]    1    3
#> [3,]    2    1
#> [4,]    5    6
#> [5,]    4    6
#> [6,]    5    4
#> [7,]    8    4
#> [8,]    7    4
round(self_result$distance, 3)
#>       [,1]  [,2]
#> [1,] 0.200 0.346
#> [2,] 0.200 0.200
#> [3,] 0.200 0.346
#> [4,] 0.200 0.346
#> [5,] 0.200 0.200
#> [6,] 0.200 0.346
#> [7,] 0.200 4.000
#> [8,] 0.200 3.904
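The repeated 0.200 values are easy to verify by hand: within each cluster, adjacent rows differ by 0.1 in every one of the four coordinates, so their Euclidean distance is sqrt(4 * 0.1^2) = 0.2. A quick base-R check:

```r
# Rows 1 and 2 of the reference matrix differ by 0.1 in each coordinate,
# so their Euclidean distance is sqrt(4 * 0.1^2) = 0.2.
row1 <- c(0.0, 0.1, 0.2, 0.3)
row2 <- c(0.1, 0.0, 0.1, 0.2)
sqrt(sum((row1 - row2)^2))
#> [1] 0.2
```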

Because the first search loads the lazy index, the handle is now available for reuse.

annoy_is_loaded(index)
#> [1] TRUE

The result object follows the same high-level shape as bigKNN:

str(self_result, max.level = 1)
#> List of 8
#>  $ index   : int [1:8, 1:2] 2 1 2 5 4 5 8 7 3 3 ...
#>  $ distance: num [1:8, 1:2] 0.2 0.2 0.2 0.2 0.2 ...
#>  $ k       : int 2
#>  $ metric  : chr "euclidean"
#>  $ n_ref   : int 8
#>  $ n_query : int 8
#>  $ exact   : logi FALSE
#>  $ backend : chr "annoy"

In particular:

  - index and distance are n_query x k matrices of neighbour row indices and distances into the reference matrix.
  - n_ref and n_query report the number of reference and query rows; they are equal here because this was a self-search.
  - exact is FALSE and backend is "annoy", marking the results as approximate rather than exhaustive.

Search with an External Query Matrix

In practice, external queries are the more common workflow. Here we build a small dense query matrix with rows close to the first, middle, and final clusters in the reference data.

query_dense <- matrix(
  c(
    0.05, 0.05, 0.15, 0.25,
    1.05, 1.05, 1.10, 1.25,
    3.05, 3.05, 3.15, 3.25
  ),
  ncol = 4,
  byrow = TRUE
)

query_result <- annoy_search_bigmatrix(
  index,
  query = query_dense,
  k = 3L,
  search_k = 100L
)

query_result$index
#>      [,1] [,2] [,3]
#> [1,]    1    2    3
#> [2,]    5    4    6
#> [3,]    7    8    4
round(query_result$distance, 3)
#>       [,1]  [,2]  [,3]
#> [1,] 0.100 0.100 0.265
#> [2,] 0.087 0.132 0.240
#> [3,] 0.100 0.100 3.951

The three query rows each return three approximate neighbours from the indexed reference matrix. For small examples like this one, the results will typically look exact, but the important point is that the API stays the same for larger problems where approximate search is preferable.
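For a matrix this small you can confirm that by brute force. The sketch below recomputes the exact k = 3 neighbours in base R (re-defining ref_dense and query_dense so it runs standalone; distances are rounded before ordering so that exact ties fall back to row order):

```r
# Exact k = 3 nearest neighbours by brute force, matching the matrices
# defined earlier in this vignette.
ref_dense <- matrix(
  c(0.0, 0.1, 0.2, 0.3,  0.1, 0.0, 0.1, 0.2,  0.2, 0.1, 0.0, 0.1,
    1.0, 1.1, 1.2, 1.3,  1.1, 1.0, 1.1, 1.2,  1.2, 1.1, 1.0, 1.1,
    3.0, 3.1, 3.2, 3.3,  3.1, 3.0, 3.1, 3.2),
  ncol = 4, byrow = TRUE
)
query_dense <- matrix(
  c(0.05, 0.05, 0.15, 0.25,
    1.05, 1.05, 1.10, 1.25,
    3.05, 3.05, 3.15, 3.25),
  ncol = 4, byrow = TRUE
)

exact_knn <- t(apply(query_dense, 1, function(q) {
  d <- sqrt(colSums((t(ref_dense) - q)^2))  # distance to every reference row
  order(round(d, 6))[1:3]                   # indices of the 3 closest rows
}))
exact_knn
#>      [,1] [,2] [,3]
#> [1,]    1    2    3
#> [2,]    5    4    6
#> [3,]    7    8    4
```

The exact answers agree with the approximate indices above, which is what you should expect at this scale.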

Tune the Main Search Controls

Two arguments matter most when you begin tuning:

  - n_trees (set at build time): more trees produce a more accurate index at the cost of longer builds and a larger index file.
  - search_k (set at query time): more candidate nodes inspected per query improve recall at the cost of slower searches.

As a starting point:

  - keep the defaults for small problems
  - raise search_k first when recall looks too low, since it needs no rebuild
  - raise n_trees and rebuild once larger search_k values stop helping

The package also supports "angular", "manhattan", and "dot" metrics, but Euclidean is usually the easiest place to begin.
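If you are unsure where to set search_k, the upstream Annoy library's convention is a useful reference point: when search_k is left unset, Annoy inspects roughly n_trees * k candidates per query, and raising it in multiples of that default trades speed for recall. The arithmetic below is just that rule of thumb; whether bigANNOY inherits the same default is an assumption, not a guarantee.

```r
# Rule-of-thumb search_k budgets derived from the upstream Annoy default
# of roughly n_trees * k (assumed here, not a documented bigANNOY default).
n_trees <- 20L   # trees used at build time above
k       <- 3L    # neighbours requested per query
baseline <- n_trees * k
c(default = baseline, thorough = 10L * baseline)
#>  default thorough
#>       60      600
```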

Stream Results into big.matrix Outputs

For larger workloads, you may not want to keep neighbour matrices in ordinary R memory. bigANNOY can write directly into destination big.matrix objects.

index_out <- big.matrix(nrow(query_dense), 2L, type = "integer")
distance_out <- big.matrix(nrow(query_dense), 2L, type = "double")

streamed <- annoy_search_bigmatrix(
  index,
  query = query_dense,
  k = 2L,
  xpIndex = index_out,
  xpDistance = distance_out
)

bigmemory::as.matrix(index_out)
#>      [,1] [,2]
#> [1,]    1    2
#> [2,]    5    4
#> [3,]    7    8
round(bigmemory::as.matrix(distance_out), 3)
#>       [,1]  [,2]
#> [1,] 0.100 0.100
#> [2,] 0.087 0.132
#> [3,] 0.100 0.100

The returned object still reports the same metadata, but the actual neighbour matrices live in the destination big.matrix containers.
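When even the destination matrices should live on disk, bigmemory's file-backed constructor works the same way. This is a sketch using standard bigmemory calls; the file names are illustrative, and passing these as xpIndex/xpDistance mirrors the example above.

```r
library(bigmemory)

# File-backed destinations for the neighbour matrices (illustrative names).
out_dir <- tempdir()
index_out_fb <- filebacked.big.matrix(
  nrow = 3L, ncol = 2L, type = "integer",
  backingfile = "nn_index.bin",
  descriptorfile = "nn_index.desc",
  backingpath = out_dir
)
distance_out_fb <- filebacked.big.matrix(
  nrow = 3L, ncol = 2L, type = "double",
  backingfile = "nn_distance.bin",
  descriptorfile = "nn_distance.desc",
  backingpath = out_dir
)
dim(index_out_fb)
#> [1] 3 2
```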

Reopen and Validate a Persisted Index

One of the main v3 improvements is explicit index lifecycle support. You can close a loaded handle, reopen the same index from disk, and validate its metadata before reuse.

annoy_close_index(index)
annoy_is_loaded(index)
#> [1] FALSE
reopened <- annoy_open_index(index$path, load_mode = "eager")
annoy_is_loaded(reopened)
#> [1] TRUE

Validation checks the recorded metadata against the current Annoy file and can also verify that the index loads successfully.

validation <- annoy_validate_index(reopened, strict = TRUE, load = TRUE)

validation$valid
#> [1] TRUE
validation$checks[, c("check", "passed", "severity")]
#>        check passed severity
#> 1 index_file   TRUE    error
#> 2     metric   TRUE    error
#> 3 dimensions   TRUE    error
#> 4      items   TRUE    error
#> 5  file_size   TRUE    error
#> 6   file_md5   TRUE    error
#> 7 file_mtime   TRUE  warning
#> 8       load   TRUE    error

This is especially helpful when you want to reuse an index across sessions or share the .ann file and its .meta sidecar with someone else.
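In scripts, it is worth failing fast on a bad index rather than searching it. The guard below is a base-R sketch: the helper and the mock list are hypothetical, shaped to match the valid flag and the check/passed columns in the printed output above.

```r
# Guard helper: stop before searching if any validation check failed.
# The list shape mirrors the annoy_validate_index() output shown above.
assert_valid_index <- function(validation) {
  if (!isTRUE(validation$valid)) {
    failed <- validation$checks$check[!validation$checks$passed]
    stop("index failed validation: ", paste(failed, collapse = ", "))
  }
  invisible(TRUE)
}

# Mock result with the same fields, for illustration only.
mock <- list(
  valid = FALSE,
  checks = data.frame(
    check = c("index_file", "file_md5"),
    passed = c(TRUE, FALSE),
    severity = c("error", "error")
  )
)
try(assert_valid_index(mock))
#> Error in assert_valid_index(mock) : index failed validation: file_md5
```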

What Inputs Are Accepted?

For the quick start above we used:

  - an in-memory big.matrix reference created with as.big.matrix()
  - an ordinary dense matrix as the external query

The package also accepts:

  - file-backed big.matrix objects, for references and queries larger than RAM
  - big.matrix descriptors, so the same data can be attached and searched across sessions

That broader file-backed workflow is covered in the dedicated vignette on bigmemory persistence and descriptors.

Recap

You have now seen the full first-run workflow:

  1. create a big.matrix reference
  2. build a persisted Annoy index
  3. search it in self-search and external-query modes
  4. stream results into destination big.matrix objects when needed
  5. reopen, validate, and reuse the index

From here, the most useful next steps are:

  - read the vignette on bigmemory persistence and descriptors for file-backed inputs
  - tune n_trees and search_k against a sample of your real data
  - validate indexes with annoy_validate_index() before reusing or sharing them