---
license: mit
datasets:
- ILSVRC/imagenet-1k
- mlfoundations/datacomp_small
base_model:
- laion/CLIP-ViT-g-14-laion2B-s12B-b42K
---

The same model as `laion/CLIP-ViT-g-14-laion2B-s12B-b42K`, but converted to the Hugging Face `CLIPModel` format.

To load this model use:

```python
from transformers import CLIPProcessor, CLIPModel

# The converted weights live in this repo; the processor config comes
# from the original LAION checkpoint.
model_name = "LEAF-CLIP/OpenCLIP-ViT-g"
processor_name = "laion/CLIP-ViT-g-14-laion2B-s12B-b42K"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
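
Once loaded, the standard `CLIPModel` inference flow applies. Below is a minimal zero-shot image-text matching sketch; the image path and candidate captions are placeholders, not part of this repo:

```python
from PIL import Image
import torch

# Placeholder inputs: any RGB image and a list of candidate captions
image = Image.open("example.jpg")
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, normalized over the candidate captions
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```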