---
license: cc-by-nc-4.0
tags:
- vidore
- colpali
- multimodal-embedding
- multilingual-embedding
- Text-to-Visual Document (T→VD) retrieval
- feature-extraction
- sentence-similarity
- mteb
language:
- multilingual
library_name: transformers
pipeline_tag: visual-document-retrieval
---
The embedding model trained by Jina AI.
# Jina Embeddings v4: Universal Embeddings for Multimodal Multilingual Retrieval

[Original Model](https://huggingface.co/jinaai/jina-embeddings-v4) | [Blog](https://jina.ai/news/jina-embeddings-v4-universal-embeddings-for-multimodal-multilingual-retrieval) | [Technical Report](https://arxiv.org/abs/2506.18902) | [API](https://jina.ai/embeddings)

## Model Overview

This repository hosts a vLLM-compatible version of [`jina-embeddings-v4`](https://huggingface.co/jinaai/jina-embeddings-v4) with the **code** adapter merged into the base `Qwen2.5-VL` weights. Merging the adapter means the model runs natively in vLLM, with no custom adapter-handling code required.

## Usage

```python
import torch
from PIL import Image

from vllm import LLM
from vllm.config import PoolerConfig
from vllm.inputs.data import TextPrompt

# Initialize model
model = LLM(
    model="jinaai/jina-embeddings-v4-vllm-code",
    task="embed",
    override_pooler_config=PoolerConfig(pooling_type="ALL", normalize=False),
    dtype="float16",
)

# Create text prompts
query = "Find a function that prints a greeting message to the console"
query_prompt = TextPrompt(prompt=f"Query: {query}")

passage = "def hello_world():\n    print('Hello, World!')"
passage_prompt = TextPrompt(prompt=f"Passage: {passage}")

# Create image prompt
image = Image.open("