# 📦 Freeze-Align Dataset
The **Freeze-Align Dataset** (`concept_coverage_laion_6m`) is a curated collection of high-quality image-text pairs designed to support efficient multimodal alignment with frozen unimodal encoders. It accompanies our CVPR 2025 paper, **"Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment"**, which shows how to reach CLIP-level performance with significantly reduced compute.
The dataset is curated from LAION-400M through a concept-balanced selection of captions, using caption-to-image-prototype similarity to ensure diverse and semantically rich image-text pairs. The code and resources for curating the dataset are available in our [GitHub repository](https://github.com/mayug/freeze-align), supporting further research into concept coverage and into lowering the computational cost of modality alignment.
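The released code is the reference implementation of this curation pipeline; purely as an illustration of the idea, the sketch below shows one way concept-balanced selection via caption-to-image-prototype similarity could look. The function name, the per-concept cap, and the prototype matrix are assumptions made for this example, not the authors' code.

```python
import numpy as np

def concept_balanced_select(caption_embs, prototype_embs, per_concept_cap=2500):
    """Illustrative sketch: assign each caption to its nearest image prototype by
    cosine similarity, then keep at most `per_concept_cap` top-scoring captions
    per concept so rare concepts are not drowned out by frequent ones."""
    # L2-normalise so that dot products equal cosine similarities
    cap = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    proto = prototype_embs / np.linalg.norm(prototype_embs, axis=1, keepdims=True)

    sims = cap @ proto.T                  # (n_captions, n_concepts)
    assigned = sims.argmax(axis=1)        # concept index each caption is assigned to
    scores = sims.max(axis=1)             # similarity to the assigned concept

    keep = []
    for concept in np.unique(assigned):
        idx = np.where(assigned == concept)[0]
        # keep the captions closest to this concept's prototype, up to the cap
        keep.extend(idx[np.argsort(-scores[idx])][:per_concept_cap].tolist())
    return sorted(keep), assigned, scores
```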
## 📄 Paper
- **Title:** Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment
- **Authors:** Mayug Maniparambil, Raiymbek Akshulakov, Yasser Abdelaziz Dahou Djilali, Sanath Narayan, Ankit Singh, Noel E. O'Connor
- **Conference:** CVPR 2025
- **Paper:** [arXiv:2409.19425](https://arxiv.org/abs/2409.19425)
- **Code:** [GitHub Repository](https://github.com/mayug/freeze-align)
## 📊 Dataset Statistics
- **Total Samples:** 6,000,000 image-text pairs
- **Source:** Curated from LAION-400M using concept-balanced selection via caption-to-image-prototype similarity.
- **Image Resolution:** Variable; standardized during preprocessing
- **Text Language:** Primarily English
- **Data Format:** Parquet files; the per-record fields are listed under Dataset Structure below
- **License:** CC-BY 4.0
## 🧪 Usage
This dataset is intended for training and evaluating multimodal models that align visual and textual representations. It is particularly useful for research in:
- Multimodal representation learning
- Cross-modal retrieval
- Zero-shot image classification
- Efficient training with frozen encoders
- Representational similarity studies
To load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Loads captions and metadata; images are referenced by `image_url` and fetched separately.
dataset = load_dataset("mayug/concept_coverage_laion_6m")
```
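Records store an `image_url` rather than raw pixels, so images have to be downloaded separately. Below is a minimal, illustrative downloader using `requests` and `PIL`; the helper name, the timeout value, and the `train` split name are assumptions for this sketch.

```python
import io

import requests
from PIL import Image

def fetch_image(url, timeout=10):
    """Download one image referenced by an `image_url` field.
    Returns a PIL image, or None when the URL is dead (common in LAION-derived data)."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        return Image.open(io.BytesIO(resp.content)).convert("RGB")
    except Exception:
        return None

sample = dataset["train"][0]            # assumes a "train" split
image = fetch_image(sample["image_url"])
if image is not None:
    print(sample["caption"], image.size)
```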
## 📂 Dataset Structure
Each entry in the dataset includes the following fields (a short usage example follows the list):
- `image_url`: URL to the image
- `caption`: Associated textual description
- `similarity`: Cosine similarity score between image and text embeddings
- `IMGNET_CLASS`: One of 2,754 ImageNet-derived classes to which the data point is assigned
- `SCORE`: Cosine similarity score indicating how strongly the data point is associated with its assigned `IMGNET_CLASS`
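As an example of working with these fields, the snippet below keeps only pairs strongly associated with their assigned concept. The 0.3 threshold and the `train` split name are illustrative assumptions, not recommendations from the paper.

```python
# Keep pairs whose caption is strongly associated with its assigned ImageNet-derived class.
high_confidence = dataset["train"].filter(lambda row: row["SCORE"] > 0.3)

# Each record exposes the fields described above.
row = high_confidence[0]
print(row["caption"], row["IMGNET_CLASS"], row["similarity"], row["SCORE"])
```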
## 📬 Citation
If you use this dataset in your research, please cite our paper:
```bibtex
@inproceedings{maniparambil2025harnessing,
title={Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment},
author={Maniparambil, Mayug and Akshulakov, Raiymbek and Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and Singh, Ankit and O'Connor, Noel E},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025}
}
```
---
For more details and updates, please visit our [GitHub Repository](https://github.com/mayug/freeze-align).