This repository contains the datasets used by VIBE (Vector Index Benchmark for Embeddings):

https://github.com/vector-index-bench/vibe

The datasets can be downloaded manually from this repository, but the benchmark framework also downloads them automatically.
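
For a manual download, one option is the huggingface_hub client. A minimal sketch; the repo id and file name below are placeholders, assuming the files are stored at the top level of the repository with an .hdf5 extension:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and file name -- substitute this repository's
# actual id and the dataset file you want.
path = hf_hub_download(
    repo_id="vector-index-bench/vibe-datasets",
    filename="agnews-mxbai-1024-euclidean.hdf5",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded file
```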

Datasets

| Name | Type | n (corpus size) | d (dimension) | Distance |
|------|------|-----------------|---------------|----------|
| agnews-mxbai-1024-euclidean | Text | 769,382 | 1024 | euclidean |
| arxiv-nomic-768-normalized | Text | 1,344,643 | 768 | any |
| gooaq-distilroberta-768-normalized | Text | 1,475,024 | 768 | any |
| imagenet-clip-512-normalized | Image | 1,281,167 | 512 | any |
| landmark-nomic-768-normalized | Image | 760,757 | 768 | any |
| yahoo-minilm-384-normalized | Text | 677,305 | 384 | any |
| celeba-resnet-2048-cosine | Image | 201,599 | 2048 | cosine |
| ccnews-nomic-768-normalized | Text | 495,328 | 768 | any |
| codesearchnet-jina-768-cosine | Code | 1,374,067 | 768 | cosine |
| glove-200-cosine | Word | 1,192,514 | 200 | cosine |
| landmark-dino-768-cosine | Image | 760,757 | 768 | cosine |
| simplewiki-openai-3072-normalized | Text | 260,372 | 3072 | any |
| coco-nomic-768-normalized | Text-to-Image | 282,360 | 768 | any |
| imagenet-align-640-normalized | Text-to-Image | 1,281,167 | 640 | any |
| laion-clip-512-normalized | Text-to-Image | 1,000,448 | 512 | any |
| yandex-200-cosine | Text-to-Image | 1,000,000 | 200 | cosine |
| yi-128-ip | Attention | 187,843 | 128 | IP |
| llama-128-ip | Attention | 256,921 | 128 | IP |

For the *-normalized datasets, the embeddings are unit-normalized, so euclidean, cosine, and inner-product nearest neighbors coincide; their distance is therefore listed as any.
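
As a concrete reading of the Distance column, here is a hedged sketch of brute-force scoring under each metric. The metric names are taken from the table above; whether they match the file's distance attribute strings exactly is an assumption:

```python
import numpy as np

def knn(corpus: np.ndarray, query: np.ndarray, distance: str, k: int = 100) -> np.ndarray:
    """Brute-force k-NN: return indices of the k closest corpus rows under the named metric."""
    if distance == "euclidean":
        # Smaller L2 distance is closer.
        scores = np.linalg.norm(corpus - query, axis=1)
    elif distance == "cosine":
        # Cosine distance = 1 - cosine similarity.
        sims = (corpus @ query) / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))
        scores = 1.0 - sims
    else:
        # "IP" or "any": rank by inner product, larger is closer
        # (negate so that smaller score means closer).
        scores = -(corpus @ query)
    return np.argsort(scores)[:k]
```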

Credit

The glove-200-cosine dataset uses embeddings from GloVe (released under PDDL 1.0): https://nlp.stanford.edu/projects/glove/

The laion-clip-512-normalized dataset uses a subset of embeddings from LAION-400M (released under CC-BY 4.0): https://laion.ai/blog/laion-400-open-dataset/

The yandex-200-cosine dataset uses a subset of embeddings from Yandex Text2Image (released under CC-BY 4.0): https://big-ann-benchmarks.com/neurips23.html

Dataset structure

Each dataset is distributed as an HDF5 file.

The HDF5 files contain the following attributes:

  • dimension: The dimensionality of the data.
  • distance: The distance metric to use.
  • point_type: The data type of the vector components, one of "float", "uint8", or "binary".

The HDF5 files contain the following HDF5 datasets (see the reading sketch after this list):

  • train: numpy array of shape (n_corpus, dim) containing the embeddings used to build the vector index
  • test: numpy array of shape (n_test, dim) containing the test query embeddings
  • neighbors: numpy array of shape (n_test, 100) containing the IDs of the 100 true nearest neighbors of each test query
  • distances: numpy array of shape (n_test, 100) containing the distances to the 100 true nearest neighbors of each test query
  • avg_distances: numpy array of shape (n_test,) containing the average distance from each test query to the corpus points
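
A minimal reading sketch with h5py, assuming (as in the ann-benchmarks format) that neighbor IDs are row indices into train; the file name is a placeholder:

```python
import h5py
import numpy as np

with h5py.File("agnews-mxbai-1024-euclidean.hdf5", "r") as f:
    print(dict(f.attrs))           # dimension, distance, point_type
    train = f["train"][:]          # (n_corpus, dim) corpus embeddings
    test = f["test"][:]            # (n_test, dim) query embeddings
    neighbors = f["neighbors"][:]  # (n_test, 100) ground-truth neighbor IDs

# Sanity check: brute-force the first query (this dataset is euclidean)
# and compare with the stored ground truth.
q = test[0]
dists = np.linalg.norm(train - q, axis=1)
topk = np.argsort(dists)[:100]
print("overlap with ground truth:", len(set(topk) & set(neighbors[0])), "/ 100")
```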

Additionally, the HDF5 files of out-of-distribution (OOD) datasets, whose queries are drawn from a different distribution than the indexed corpus, contain the following HDF5 datasets:

  • learn: numpy array of shape (n_learn, dim) containing a larger sample from the query distribution
  • learn_neighbors: numpy array of shape (n_learn, 100) containing the 100 true nearest neighbors (from the corpus) of each point in learn
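
One use of these extra splits is tuning index hyperparameters on learn before evaluating on test. A minimal sketch; the file name is a placeholder, and it is an assumption that this particular text-to-image dataset is among the OOD ones:

```python
import h5py

# Placeholder file name; assumes this file has the OOD splits.
with h5py.File("laion-clip-512-normalized.hdf5", "r") as f:
    learn = f["learn"][:]                      # (n_learn, dim) extra query sample
    learn_neighbors = f["learn_neighbors"][:]  # (n_learn, 100) ground truth in the corpus
print(learn.shape, learn_neighbors.shape)
```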