---
license: apache-2.0
task_categories:
  - feature-extraction
language:
  - en
size_categories:
  - 10M<n<100M
---

# wikipedia_en

This is a curated English Wikipedia dataset for use with the II-Commons project.

## Dataset Details

### Dataset Description

This dataset comprises curated English Wikipedia pages. The data is sourced directly from the official English Wikipedia database dump. We extract the pages, chunk them into smaller pieces, and embed them using Snowflake/snowflake-arctic-embed-m-v2.0. All vector embeddings are 16-bit half-precision vectors optimized for cosine indexing with vectorchord.

### Dataset Sources

Based on the Wikipedia dumps. Please check this page for the license of the page data.

## Dataset Structure

1. Metadata Table
   - id: A unique identifier for the page.
   - revid: The revision ID of the page.
   - url: The URL of the page.
   - title: The title of the page.
   - ignored: Whether the page is ignored.
   - created_at: The creation time of the page.
   - updated_at: The last update time of the page.
2. Chunking Table
   - id: A unique identifier for the chunk.
   - title: The title of the page.
   - url: The URL of the page.
   - source_id: The id of the source page in the metadata table.
   - chunk_index: The index of the chunk within the page.
   - chunk_text: The text of the chunk.
   - vector: The vector embedding of the chunk.
   - created_at: The creation time of the chunk.
   - updated_at: The last update time of the chunk.
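
The two tables are linked through source_id, which references the metadata table's id. As a minimal sketch (using the table names defined in the Uses section below), the parent page of each chunk can be recovered with a join like:

```sql
-- Minimal sketch: recover the parent page for each chunk via source_id.
-- Table names match the CREATE TABLE statements shown below.
SELECT m.title, m.url, c.chunk_index, c.chunk_text
FROM ts_wikipedia_en_embed AS c
JOIN ts_wikipedia_en AS m ON m.id = c.source_id
LIMIT 5;
```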

## Prerequisite

PostgreSQL 17 with extensions: vectorchord and pg_search

The easiest way is to use our Docker image, or build your own. Then load the psql_basebackup branch, following the Quick Start.

Ensure the extensions are enabled: connect to the database using psql and run the following SQL:

```sql
CREATE EXTENSION IF NOT EXISTS vchord CASCADE;
CREATE EXTENSION IF NOT EXISTS pg_search CASCADE;
```
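
To confirm that both extensions are actually available in the current database, a quick check against the system catalog can help (a minimal sketch; the reported versions will vary):

```sql
-- Sanity check: both extensions should appear with a version number
SELECT extname, extversion
FROM pg_extension
WHERE extname IN ('vchord', 'pg_search');
```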

## Uses

This dataset can be used for a wide range of applications, such as semantic search, BM25 full-text search, and retrieval-augmented generation.

Here is a demo of how to use the dataset with II-Commons.

### Create the metadata and chunking tables in PostgreSQL

```sql
CREATE TABLE IF NOT EXISTS ts_wikipedia_en (
    id BIGSERIAL PRIMARY KEY,
    revid BIGINT NOT NULL,
    url VARCHAR NOT NULL,
    title VARCHAR NOT NULL DEFAULT '',
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ignored BOOLEAN NOT NULL DEFAULT FALSE
);

CREATE TABLE IF NOT EXISTS ts_wikipedia_en_embed (
    id BIGSERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    url VARCHAR NOT NULL,
    chunk_index BIGINT NOT NULL,
    chunk_text VARCHAR NOT NULL,
    source_id BIGINT NOT NULL,
    vector halfvec(768) DEFAULT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```
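
Optionally, and not part of the original schema, a foreign key can make the source_id relationship explicit. If you add it, do so after bulk loading so COPY is not slowed down by per-row checks; the constraint name below is only illustrative:

```sql
-- Optional, not part of the original schema: enforce that every chunk
-- points at an existing page. Add after bulk loading for faster COPY.
ALTER TABLE ts_wikipedia_en_embed
    ADD CONSTRAINT ts_wikipedia_en_embed_source_id_fkey  -- illustrative name
    FOREIGN KEY (source_id) REFERENCES ts_wikipedia_en (id);
```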

### Load CSV files into the database

1. Load the dataset from the local file system to a remote PostgreSQL server:

   ```sql
   \copy ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
   \copy ts_wikipedia_en_embed FROM 'data/chunks/0000000.csv' CSV HEADER;
   \copy ts_wikipedia_en_embed FROM 'data/chunks/0000001.csv' CSV HEADER;
   \copy ts_wikipedia_en_embed FROM 'data/chunks/0000002.csv' CSV HEADER;
   ...
   ```

2. Load the dataset from the PostgreSQL server's file system:

   ```sql
   COPY ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
   COPY ts_wikipedia_en_embed FROM 'data/chunks/0000000.csv' CSV HEADER;
   COPY ts_wikipedia_en_embed FROM 'data/chunks/0000001.csv' CSV HEADER;
   COPY ts_wikipedia_en_embed FROM 'data/chunks/0000002.csv' CSV HEADER;
   ...
   ```
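
After loading, a quick sanity check helps confirm that all files made it in (a minimal sketch; the exact counts depend on the dump snapshot):

```sql
-- Row counts after loading, plus how many chunks are still missing an embedding
SELECT
    (SELECT count(*) FROM ts_wikipedia_en)                            AS pages,
    (SELECT count(*) FROM ts_wikipedia_en_embed)                      AS chunks,
    (SELECT count(*) FROM ts_wikipedia_en_embed WHERE vector IS NULL) AS chunks_without_vector;
```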

### Create Indexes

You need to create the following indexes for the best performance.

The vector column is a halfvec(768) column, which is a 16-bit half-precision vector optimized for cosine indexing with vectorchord. You can get more information about the vector index from the vectorchord documentation.

1. Create the metadata table indexes:

   ```sql
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_revid_index ON ts_wikipedia_en (revid);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_url_index ON ts_wikipedia_en (url);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_title_index ON ts_wikipedia_en (title);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_ignored_index ON ts_wikipedia_en (ignored);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_created_at_index ON ts_wikipedia_en (created_at);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_updated_at_index ON ts_wikipedia_en (updated_at);
   ```

2. Create the chunking table indexes:

   ```sql
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_id_index ON ts_wikipedia_en_embed (source_id);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_index_index ON ts_wikipedia_en_embed (chunk_index);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_text_index ON ts_wikipedia_en_embed USING bm25 (id, title, chunk_text) WITH (key_field='id');
   CREATE UNIQUE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_index ON ts_wikipedia_en_embed (source_id, chunk_index);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_index ON ts_wikipedia_en_embed USING vchordrq (vector halfvec_cosine_ops) WITH (options = $$
       [build.internal]
       lists = [20000]
       build_threads = 6
       spherical_centroids = true
   $$);
   CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_null_index ON ts_wikipedia_en_embed (vector) WHERE vector IS NULL;
   SELECT vchordrq_prewarm('ts_wikipedia_en_embed_vector_index');
   ```
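
Once the vector index is built, nearest-neighbour queries can be run directly in SQL. The sketch below is illustrative only: :query_vector stands for a 768-dimensional embedding of your query text, produced with the same Snowflake/snowflake-arctic-embed-m-v2.0 model, and the vchordrq.probes setting is an assumed VectorChord tuning knob that you may adjust or omit:

```sql
-- Minimal sketch: top-10 chunks by cosine distance to a query embedding.
-- :query_vector is a placeholder for a 768-dimensional query embedding.
SET vchordrq.probes = '100';  -- assumption: recall/speed trade-off knob

SELECT id, title, chunk_text,
       vector <=> :query_vector::halfvec(768) AS cosine_distance
FROM ts_wikipedia_en_embed
WHERE vector IS NOT NULL
ORDER BY vector <=> :query_vector::halfvec(768)
LIMIT 10;
```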

### Query with II-Commons

Click this link to learn how to query the dataset with II-Commons.
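
Independently of II-Commons, the pg_search BM25 index created above can also be queried directly. A minimal sketch, assuming the pg_search @@@ operator and paradedb.score() function (the exact syntax may vary across pg_search versions):

```sql
-- Minimal sketch: BM25 keyword search over chunk_text via the pg_search index.
-- The @@@ operator and paradedb.score() are provided by pg_search.
SELECT id, title, paradedb.score(id) AS bm25_score
FROM ts_wikipedia_en_embed
WHERE chunk_text @@@ 'total solar eclipse'
ORDER BY bm25_score DESC
LIMIT 10;
```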