pretty_name: free-align-concept_covered_6M
size_categories:
- 1M<n<10M
---

# 📦 Freeze-Align Dataset

The **Freeze-Align Dataset** (`concept_coverage_laion_6m`) is a curated collection of high-quality image-text pairs for efficient multimodal alignment with frozen unimodal encoders. It supports the research presented in our CVPR 2025 paper, **"Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment"**, in which frozen-encoder models achieve CLIP-level performance with significantly reduced computational resources.

The dataset is curated from LAION-400M through concept-balanced selection of captions, using caption-to-image-prototype similarity to keep the image-text pairs diverse and semantically rich. The curation code and resources are available in our [GitHub repository](https://github.com/mayug/freeze-align) to support further research into concept coverage and low-cost modality alignment.
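
As a rough illustration, concept-balanced selection can be sketched as: assign each caption to its most similar image prototype, then keep only the top-scoring captions per concept. The function name, the per-concept cap, and the embedding shapes below are illustrative assumptions rather than the repository's actual API; see the [GitHub repository](https://github.com/mayug/freeze-align) for the real pipeline.

```python
# Illustrative sketch only: names, shapes, and the per-concept cap are
# assumptions, not the freeze-align repository's actual code.
import numpy as np

def concept_balanced_select(caption_embs, prototype_embs, per_class_cap):
    """Assign each caption to its nearest image prototype, then cap each concept.

    caption_embs: (N, D) caption embeddings; prototype_embs: (C, D) class prototypes.
    Returns indices of selected captions, assigned class ids, and similarity scores.
    """
    # Normalize rows so dot products become cosine similarities.
    caption_embs = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    prototype_embs = prototype_embs / np.linalg.norm(prototype_embs, axis=1, keepdims=True)

    sims = caption_embs @ prototype_embs.T   # (N, C) caption-to-prototype similarity
    cls = sims.argmax(axis=1)                # concept assignment per caption
    score = sims.max(axis=1)                 # similarity to the assigned concept

    selected = []
    for c in range(prototype_embs.shape[0]):
        members = np.flatnonzero(cls == c)
        # Keep the highest-scoring captions for this concept, up to the cap.
        selected.extend(members[np.argsort(-score[members])][:per_class_cap].tolist())
    return np.array(selected), cls, score
```
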

## 📄 Paper

**Title:** Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment
**Authors:** Mayug Maniparambil, Raiymbek Akshulakov, Yasser Abdelaziz Dahou Djilali, Sanath Narayan, Ankit Singh, Noel E. O'Connor
**Conference:** CVPR 2025
**Paper:** [arXiv:2409.19425](https://arxiv.org/abs/2409.19425)
**Code:** [GitHub Repository](https://github.com/mayug/freeze-align)

## 📊 Dataset Statistics

- **Total Samples:** 6,000,000 image-text pairs
- **Source:** Curated from LAION-400M via concept-balanced selection using caption-to-image-prototype similarity
- **Image Resolution:** Variable; standardized during preprocessing
- **Text Language:** Primarily English
- **Data Format:** Parquet files with fields `image_url`, `caption`, `similarity`, `IMGNET_CLASS`, and `SCORE` (see Dataset Structure below)
- **License:** CC-BY 4.0

## 🧪 Usage

This dataset is intended for training and evaluating multimodal models that align visual and textual representations. It is particularly useful for research in:

- Multimodal representation learning
- Cross-modal retrieval
- Zero-shot image classification
- Efficient training with frozen encoders
- Representational similarity studies

To load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("mayug/concept_coverage_laion_6m")
```
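
Because the full split has about six million rows, streaming is often more practical than a full download. A minimal sketch, assuming the default `train` split:

```python
from datasets import load_dataset

# Stream records instead of materializing all ~6M rows on disk.
dataset = load_dataset("mayug/concept_coverage_laion_6m", split="train", streaming=True)

# Peek at the first few records.
for example in dataset.take(3):
    print(example)
```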

## 📂 Dataset Structure

Each entry in the dataset includes:
- `image_url`: URL of the source image
- `caption`: Associated textual description
- `similarity`: Cosine similarity between the image and text embeddings
- `IMGNET_CLASS`: One of 2,754 ImageNet-derived classes to which the data point is assigned
- `SCORE`: Cosine similarity indicating how strongly the data point is associated with its assigned `IMGNET_CLASS`
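
Since the dataset stores image URLs rather than image bytes, downstream pipelines typically fetch pixels on the fly. A minimal sketch (the `train` split name is an assumption, and stale LAION links make error handling essential):

```python
import io

import requests
from datasets import load_dataset
from PIL import Image, UnidentifiedImageError

dataset = load_dataset("mayug/concept_coverage_laion_6m", split="train", streaming=True)
record = next(iter(dataset))

# LAION-derived URLs can go stale, so guard every fetch.
try:
    response = requests.get(record["image_url"], timeout=10)
    response.raise_for_status()
    image = Image.open(io.BytesIO(response.content)).convert("RGB")
    print(record["IMGNET_CLASS"], record["SCORE"], image.size)
except (requests.RequestException, UnidentifiedImageError) as err:
    print(f"Skipping {record['image_url']}: {err}")
```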

## 📬 Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@inproceedings{maniparambil2025harnessing,
  title={Harnessing Frozen Unimodal Encoders for Flexible Multimodal Alignment},
  author={Maniparambil, Mayug and Akshulakov, Raiymbek and Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and Singh, Ankit and O'Connor, Noel E.},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```

---

For more details and updates, please visit our [GitHub Repository](https://github.com/mayug/freeze-align).