This repository contains the datasets that are meant to be used with VIBE (Vector Index Benchmark for Embeddings):

https://github.com/vector-index-bench/vibe

The datasets can be downloaded manually from this repository, but the benchmark framework also downloads them automatically.
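If you want to fetch a single file programmatically without running the benchmark, one option is the `huggingface_hub` client. The sketch below is not part of VIBE itself; it assumes the `huggingface_hub` package is installed and uses the repository id and a file name taken from the links in the table below.

```python
# Minimal sketch: download one dataset file with huggingface_hub (assumed installed).
# The repository id and file name come from the links in the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="vector-index-bench/vibe",
    repo_type="dataset",
    filename="glove-200-cosine.hdf5",
)
print(path)  # local cache path of the downloaded HDF5 file
```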
## Datasets

| Name | Type | n | d | Distance |
|---|---|---|---|---|
| [agnews-mxbai-1024-euclidean](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/agnews-mxbai-1024-euclidean.hdf5) | Text | 769,382 | 1024 | euclidean |
| [arxiv-nomic-768-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/arxiv-nomic-768-normalized.hdf5) | Text | 1,344,643 | 768 | any |
| [gooaq-distilroberta-768-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/gooaq-distilroberta-768-normalized.hdf5) | Text | 1,475,024 | 768 | any |
| [imagenet-clip-512-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/imagenet-clip-512-normalized.hdf5) | Image | 1,281,167 | 512 | any |
| [landmark-nomic-768-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/landmark-nomic-768-normalized.hdf5) | Image | 760,757 | 768 | any |
| [yahoo-minilm-384-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/yahoo-minilm-384-normalized.hdf5) | Text | 677,305 | 384 | any |
| [celeba-resnet-2048-cosine](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/celeba-resnet-2048-cosine.hdf5) | Image | 201,599 | 2048 | cosine |
| [ccnews-nomic-768-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/ccnews-nomic-768-normalized.hdf5) | Text | 495,328 | 768 | any |
| [codesearchnet-jina-768-cosine](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/codesearchnet-jina-768-cosine.hdf5) | Code | 1,374,067 | 768 | cosine |
| [glove-200-cosine](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/glove-200-cosine.hdf5) | Word | 1,192,514 | 200 | cosine |
| [landmark-dino-768-cosine](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/landmark-dino-768-cosine.hdf5) | Image | 760,757 | 768 | cosine |
| [simplewiki-openai-3072-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/simplewiki-openai-3072-normalized.hdf5) | Text | 260,372 | 3072 | any |
| [coco-nomic-768-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/coco-nomic-768-normalized.hdf5) | Text-to-Image | 282,360 | 768 | any |
| [imagenet-align-640-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/imagenet-align-640-normalized.hdf5) | Text-to-Image | 1,281,167 | 640 | any |
| [laion-clip-512-normalized](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/laion-clip-512-normalized.hdf5) | Text-to-Image | 1,000,448 | 512 | any |
| [yandex-200-cosine](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/yandex-200-cosine.hdf5) | Text-to-Image | 1,000,000 | 200 | cosine |
| [yi-128-ip](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/yi-128-ip.hdf5) | Attention | 187,843 | 128 | IP |
| [llama-128-ip](https://huggingface.co/datasets/vector-index-bench/vibe/blob/main/llama-128-ip.hdf5) | Attention | 256,921 | 128 | IP |
## Dataset structure

Each dataset is distributed as an HDF5 file; a short loading sketch is given at the end of this section.

The HDF5 files contain the following attributes:
- dimension: The dimensionality of the data.
- distance: The distance metric to use.
- point_type: The precision of the vectors, one of "float", "uint8", or "binary".

The HDF5 files contain the following HDF5 datasets:
- train: numpy array of size (n_corpus, dim) containing the embeddings used to build the vector index
- test: numpy array of size (n_test, dim) containing the test query embeddings
- neighbors: numpy array of size (n_test, 100) containing the IDs of the 100 true nearest neighbors of each test query
- distances: numpy array of size (n_test, 100) containing the distances to the 100 true nearest neighbors of each test query
- avg_distances: numpy array of size n_test containing the average distance from each test query to the corpus points

Additionally, the HDF5 files of out-of-distribution (OOD) datasets contain the following HDF5 datasets:
- learn: numpy array of size (n_learn, dim) containing a larger sample from the query distribution
- learn_neighbors: numpy array of size (n_learn, 100) containing the 100 true nearest neighbors (from the corpus) of each point in learn
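As a quick illustration of the layout described above, the following sketch opens one of the files with `h5py` and prints its attributes and array shapes. It assumes `h5py` is installed and that a file has already been downloaded; `glove-200-cosine.hdf5` is used here only as an example name from the table.

```python
# Minimal sketch: inspect a VIBE dataset file with h5py (assumed installed).
# "glove-200-cosine.hdf5" is just an example file name from the table above.
import h5py

with h5py.File("glove-200-cosine.hdf5", "r") as f:
    # File-level attributes described above
    print("dimension: ", f.attrs["dimension"])
    print("distance:  ", f.attrs["distance"])
    print("point_type:", f.attrs["point_type"])

    # Core HDF5 datasets
    train = f["train"]          # (n_corpus, dim) corpus embeddings used to build the index
    test = f["test"]            # (n_test, dim) test query embeddings
    neighbors = f["neighbors"]  # (n_test, 100) ground-truth nearest-neighbor IDs
    print("train:", train.shape, "test:", test.shape, "neighbors:", neighbors.shape)

    # OOD datasets additionally carry a "learn" sample and its ground truth
    if "learn" in f:
        print("learn:", f["learn"].shape, "learn_neighbors:", f["learn_neighbors"].shape)
```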