---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---

# Model Overview

[[**GitHub**](https://github.com/NVlabs/RADIO)] [[**CVPR 2025**](https://arxiv.org/abs/2412.07679)] [[**CVPR 2024**](https://arxiv.org/abs/2312.06709)]

## Description

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv2 models are available in multiple sizes:
* Base (90M parameters).
* Large (320M parameters).
* Huge (653M parameters).
* Gigantic (1.1B parameters).

C-RADIOv2 was trained for 1M steps (400k more steps than v1), using inverse frequency sampling for data balancing, and [PHI Standardization](https://arxiv.org/abs/2410.01680) for teacher distribution balancing.

This model is ready for commercial/non-commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Deployment Geography

Global.

## Use Case

The embeddings generated by this model are expected to be used by a downstream application.
For example:

* Image-level understanding (image classification, curation, etc.).
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.

## Release Date

Hugging Face: 03/26/2025 via [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).

## References

* \[CVPR 2025\] [**RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models**](https://arxiv.org/abs/2412.07679)
* \[CVPR 2024\] [**AM-RADIO: Agglomerative Vision Foundation Model - Reduce All Domains Into One**](https://arxiv.org/abs/2312.06709)

## Model Architecture

**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>

## Input

**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>
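
Since valid input sides are multiples of 16 up to 2048, a small helper can snap an arbitrary size onto the supported grid before preprocessing. This is an illustrative sketch only; `nearest_valid_resolution` is not part of the model API.

```python
def nearest_valid_resolution(height: int, width: int,
                             patch_size: int = 16, max_side: int = 2048) -> tuple:
    """Round each side down to the nearest multiple of `patch_size`, capped at `max_side`."""
    def snap(side: int) -> int:
        return max(patch_size, min(max_side, (side // patch_size) * patch_size))
    return snap(height), snap(width)

# Example: nearest_valid_resolution(1080, 1920) -> (1072, 1920)
```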

## Output

**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** 2D <br>
**Other Properties Related to Output:** Downstream model required to leverage image features <br>

## Usage

RADIO will return a tuple with two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape `(B,C)`, with `B` being the batch dimension and `C` being some number of channels.
The `spatial_features` represent more localized content, which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

hf_repo = "nvidia/C-RADIOv2-L"

# Load the preprocessor and the model (custom code requires trust_remote_code).
image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

# Preprocess the image into a normalized (1, 3, H, W) tensor.
image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

# The forward pass returns a (summary, spatial_features) tuple.
summary, features = model(pixel_values)
```
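
As a minimal sketch of consuming the `summary` embedding downstream (the `embed` helper and file paths below are hypothetical, reusing `model` and `image_processor` from the snippet above), two images can be compared by cosine similarity of their summaries:

```python
import torch.nn.functional as F

# Hypothetical helper: embed an image and unit-normalize for cosine similarity.
def embed(path: str) -> torch.Tensor:
    img = Image.open(path).convert('RGB')
    pv = image_processor(images=img, return_tensors='pt', do_resize=True).pixel_values.cuda()
    with torch.no_grad():
        summary, _ = model(pv)
    return F.normalize(summary, dim=-1)

# Higher values indicate more semantically similar images.
similarity = (embed('image_a.png') * embed('image_b.png')).sum(dim=-1)
```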

Spatial features have shape `(B,T,D)`, with `T` being the flattened spatial tokens and `D` being the channels for spatial features. Note that `C != D` in general.
Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.

```python
from einops import rearrange

patch_size = 16  # RADIO's ViT patch size
spatial_features = rearrange(
    features, 'b (h w) d -> b d h w',
    h=pixel_values.shape[-2] // patch_size,
    w=pixel_values.shape[-1] // patch_size,
)
```

The resulting tensor will have shape `(B,D,H,W)`, as is typically seen with computer vision models.
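
For dense prediction, a typical next step is to map the feature channels to task outputs and upsample back to the input resolution. The sketch below assumes a hypothetical, untrained 1x1-convolution head called `seg_head`; in practice such a head would be trained on task data.

```python
import torch.nn as nn
import torch.nn.functional as F

num_classes = 21  # hypothetical number of segmentation classes
# Untrained 1x1-conv head for illustration, mapping D channels to class logits.
seg_head = nn.Conv2d(spatial_features.shape[1], num_classes, kernel_size=1).cuda()

with torch.no_grad():
    logits = seg_head(spatial_features)  # (B, num_classes, h, w) on the token grid
    # Upsample predictions back to the input resolution.
    logits = F.interpolate(logits, size=pixel_values.shape[-2:],
                           mode='bilinear', align_corners=False)
```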

## Software Integration

**Runtime Engine(s):**
* TAO 24.10 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows

## Model Version(s)

* C-RADIOv2-B (90M parameters).
* C-RADIOv2-L (320M parameters).
* C-RADIOv2-H (653M parameters).
* C-RADIOv2-g (1.1B parameters).

**Links:**

* https://huggingface.co/nvidia/C-RADIOv2-B
* https://huggingface.co/nvidia/C-RADIOv2-L
* https://huggingface.co/nvidia/C-RADIOv2-H
* https://huggingface.co/nvidia/C-RADIOv2-g

# Training and Evaluation Datasets

## Training Dataset

NV-CC-Img-Text-Dataset <br>

### Data Collection Method by dataset

* Automated <br>

### Labeling Method by dataset

* Not Applicable (no labels are needed) <br>

### Properties

* 700 Million Images <br>

## Evaluation Dataset

**Link:** [ImageNet](https://www.image-net.org/) <br>

### Data Collection Method by dataset

* Automated <br>

### Labeling Method by dataset

* Human <br>

**Properties:** This dataset spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images. <br>

## Inference

**Engine:** PyTorch <br>
**Test Hardware:** A100 <br>

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Bias

Field | Response
:---------------------------------------------------------------------------------------------------|:---------------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
Intended Application & Domain: | Visual Feature Extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings.
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model has only been tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. Additionally, the generated embeddings might fail to disambiguate differences that appear evident to humans (e.g., two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Privacy

Field | Response
:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
Generatable or reverse engineerable personal data? | None
Personal data used to create this model? | None
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes

### Safety

Field | Response
:---------------------------------------------------|:----------------------------------
Model Application(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by NVIDIA Open Model License Agreement
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Access to the dataset is restricted during training, and dataset license constraints are adhered to.