Collections including paper arxiv:2407.01449

- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • arXiv:2306.01116 • Published • 38 upvotes
- HuggingFaceFW/fineweb
  Viewer • Updated • 52.5B rows • 311k downloads • 2.32k likes
- tiiuae/falcon-refinedweb
  Viewer • Updated • 968M rows • 11.5k downloads • 866 likes
- cerebras/SlimPajama-627B
  Preview • Updated • 43.7k downloads • 489 likes
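
To poke at the web-scale corpora in the collection above, here is a minimal sketch using the `datasets` library (`pip install datasets`); the `sample-10BT` config and the `text` column follow FineWeb's dataset card, so check the card of whichever dataset you pick:

```python
from datasets import load_dataset

# Stream instead of downloading: these corpora run from hundreds of GB
# to tens of TB on disk.
ds = load_dataset(
    "HuggingFaceFW/fineweb",
    name="sample-10BT",   # a small sampled config; the full dump is far larger
    split="train",
    streaming=True,
)

# Print the first few documents (FineWeb stores text under "text";
# column names differ per dataset).
for i, row in enumerate(ds):
    print(row["text"][:200])
    if i == 2:
        break
```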

- NVLM: Open Frontier-Class Multimodal LLMs
  Paper • arXiv:2409.11402 • Published • 75 upvotes
- BRAVE: Broadening the visual encoding of vision-language models
  Paper • arXiv:2404.07204 • Published • 19 upvotes
- Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
  Paper • arXiv:2403.18814 • Published • 48 upvotes
- Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models
  Paper • arXiv:2409.17146 • Published • 122 upvotes

- CompCap: Improving Multimodal Large Language Models with Composite Captions
  Paper • arXiv:2412.05243 • Published • 19 upvotes
- GraPE: A Generate-Plan-Edit Framework for Compositional T2I Synthesis
  Paper • arXiv:2412.06089 • Published • 4 upvotes
- SILMM: Self-Improving Large Multimodal Models for Compositional Text-to-Image Generation
  Paper • arXiv:2412.05818 • Published • 1 upvote
- FLAIR: VLM with Fine-grained Language-informed Image Representations
  Paper • arXiv:2412.03561 • Published • 2 upvotes

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • arXiv:2402.04252 • Published • 29 upvotes
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • arXiv:2402.03749 • Published • 13 upvotes
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • arXiv:2402.04615 • Published • 44 upvotes
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • arXiv:2402.05008 • Published • 23 upvotes
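
The paper entries above carry bare arXiv IDs. Here is a minimal sketch for pulling a listed paper's title and abstract from arXiv's public export API (Atom XML, no API key required); the `fetch_paper` helper name is my own:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Atom namespace used by the arXiv export API feed.
ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_paper(arxiv_id: str) -> tuple[str, str]:
    """Return (title, abstract) for an arXiv ID like '2402.04252'."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    entry = feed.find(f"{ATOM}entry")
    if entry is None:
        raise ValueError(f"No arXiv entry found for {arxiv_id}")
    title = entry.find(f"{ATOM}title").text.strip()
    abstract = entry.find(f"{ATOM}summary").text.strip()
    return title, abstract

title, abstract = fetch_paper("2402.04252")  # EVA-CLIP-18B
print(title)
print(abstract[:300])
```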