---
license: mit
datasets:
  - ILSVRC/imagenet-1k
  - mlfoundations/datacomp_small
base_model:
  - laion/CLIP-ViT-g-14-laion2B-s12B-b42K
---

This is the same model as laion/CLIP-ViT-g-14-laion2B-s12B-b42K, but converted to the Hugging Face `CLIPModel` format.

To load this model, use:

```python
from transformers import CLIPProcessor, CLIPModel

# Model weights come from this repository; the processor configuration
# is taken from the original LAION checkpoint.
model_name = "LEAF-CLIP/OpenCLIP-ViT-g"
processor_name = "laion/CLIP-ViT-g-14-laion2B-s12B-b42K"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
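
Once loaded, the model can be used like any other `CLIPModel`. Below is a minimal zero-shot image-text matching sketch; the example image URL and candidate captions are illustrative assumptions, not part of this model card.

```python
# Minimal usage sketch (the image URL and captions below are assumptions for illustration).
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image, assumed
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],  # candidate captions, assumed
    images=image,
    return_tensors="pt",
    padding=True,
)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)      # probabilities over the candidate captions
print(probs)
```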