---
license: apache-2.0
---

# SophiaVL-R1-7B Custom Handler

This is a custom inference handler for the SophiaVL-R1-7B model, optimized for Hugging Face Inference Endpoints.

## Supported Input Formats

- Image URLs
- Base64-encoded images (raw, or as a `data:` URI)
- HuggingFace markdown format: `![](image_url)question`
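
For illustration, here is a minimal, hypothetical sketch of how a handler might tell these three formats apart. The helper names (`parse_inputs`, `load_image`) and the dispatch logic are assumptions for this sketch, not the handler's actual API:

```python
import base64
import re
from io import BytesIO

import requests
from PIL import Image

# Matches the markdown form: ![](image_url)question
MARKDOWN_RE = re.compile(r"^!\[\]\((?P<url>[^)]+)\)(?P<question>.*)$", re.DOTALL)

def parse_inputs(inputs):
    """Normalize a request body into an (image_ref, question) pair."""
    if isinstance(inputs, str):
        match = MARKDOWN_RE.match(inputs)
        if match:
            return match.group("url"), match.group("question").strip()
        raise ValueError("string inputs must use the ![](image_url)question format")
    return inputs["image"], inputs["question"]  # dict form

def load_image(image_ref):
    """Resolve a URL, data URI, or raw base64 string to a PIL image."""
    if image_ref.startswith(("http://", "https://")):
        return Image.open(BytesIO(requests.get(image_ref, timeout=30).content))
    if image_ref.startswith("data:"):
        image_ref = image_ref.split(",", 1)[1]  # strip the data-URI prefix
    return Image.open(BytesIO(base64.b64decode(image_ref)))
```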

## Usage

Send a POST request with a JSON payload. From Python (the endpoint URL and token are placeholders):

```python
import requests

API_URL = "https://your-endpoint.com"  # your Inference Endpoint URL
HEADERS = {
    "Authorization": "Bearer hf_XXXXX",
    "Content-Type": "application/json",
}

payload = {
    "inputs": {
        "image": "https://example.com/image.jpg",
        "question": "What do you see?",
        "problem_type": "free-form"
    }
}

response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json())
```
curl "https://your-endpoint.com" \
  -H "Authorization: Bearer hf_XXXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": {
      "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png",
      "question": "What do you see in this image?"
    }
  }'

curl "https://your-endpoint.com" \
  -H "Authorization: Bearer hf_XXXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": "![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)caption en"
  }'
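
The same string form can be sent from Python, reusing `API_URL` and `HEADERS` from the usage example above:

```python
# "inputs" may be a plain string in the ![](image_url)question format.
payload = {
    "inputs": "![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)caption en"
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```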

curl "https://your-endpoint.com" \
  -H "Authorization: Bearer hf_XXXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": {
      "image": "iVBORw0KGgoAAAANSUhEUgAA...",
      "question": "Describe this image"
    }
  }'
curl "https://your-endpoint.com" \
  -H "Authorization: Bearer hf_XXXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": {
      "image": "data:image/jpeg;base64,iVBORw0KGgoAAAANSUhEUgAA...",
      "question": "What is this?"
    }
  }'
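
To send a local file, encode it to base64 first. A minimal sketch, again reusing `API_URL` and `HEADERS` (the file name is a placeholder):

```python
import base64

# Read a local image and base64-encode it for the "image" field.
with open("my_image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"inputs": {"image": image_b64, "question": "Describe this image"}}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```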


Full example against a deployed endpoint:

```bash
curl "https://ncfejyib823s0v7j.us-east-1.aws.endpoints.huggingface.cloud" \
  -X POST \
  -H "Accept: application/json" \
  -H "Authorization: Bearer hf_XXXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": "![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)Please provide a detailed caption for this image."
  }'
```

---

# SophiaVL-R1-7B

This is the repository for SophiaVL-R1-7B ([paper](https://arxiv.org/abs/2505.17018)).

For training and evaluation code, please refer to [SophiaVL-R1](https://github.com/kxfan2002/SophiaVL-R1).

A simple inference example:
```python
import torch
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

MODEL_PATH = "bunny127/SophiaVL-R1-7B"
# Example dataset record (from CLEVR-Math) that this snippet answers:
# {
#     "problem_id": 1,
#     "problem": "Subtract 0 cyan cubes. How many objects are left?",
#     "data_type": "image",
#     "problem_type": "numerical",
#     "options": [],
#     "process": "",
#     "solution": "<answer>5</answer>",
#     "path": "./Math/CLEVR-Math/images/CLEVR_train_036427.png",
#     "data_source": "CLEVR-Math"
# }
image_path = "/path/to/dataset/Math/CLEVR-Math/images/CLEVR_train_036427.png"
prompt = "Subtract 0 cyan cubes. How many objects are left?"
question_type = "numerical"


model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)

SYS_PROMPT = """You FIRST think about the reasoning process as an internal monologue and then provide the final answer.
The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE enclosed within <answer> </answer> tags, for example <think>your_thinking_process</think><answer>your_final_answer</answer>. If you use formula, please use LaTeX format."""

QUESTION_TEMPLATE = (
    "{Question}\n"
    "Please think about this question as if you were a human pondering deeply. "
    "Engage in an internal dialogue using expressions such as 'let me think', 'wait', 'Hmm', 'oh, I see', 'let's break it down', etc, or other natural language thought expressions "
    "It's encouraged to include self-reflection or verification in the reasoning process. "
    "Provide your detailed reasoning between the <think> and </think> tags, and then give your final answer between the <answer> and </answer> tags."
)

TYPE_TEMPLATE = {
    "multiple choice": " Please provide only the single option letter (e.g., A, B, C, D, etc.) within the <answer> </answer> tags.",
    "numerical": " Please provide the numerical value (e.g., 42 or 3.14) within the <answer> </answer> tags.",
    "OCR": " Please transcribe text from the image/video clearly and provide your text answer within the <answer> </answer> tags.",
    "free-form": " Please provide your text answer within the <answer> </answer> tags."
}

def inference(image_path, question, problem_type="numerical", sys_prompt="You are a helpful assistant.", max_new_tokens=4096, return_input=False):
    image = Image.open(image_path)
    image_local_path = "file://" + image_path  # file URI referenced in the chat message
    messages = [
        {"role": "system", "content": sys_prompt},
        {"role": "user", "content": [
                {"type": "text", "text": QUESTION_TEMPLATE.format(Question=question) + TYPE_TEMPLATE[problem_type]},
                {"image": image_local_path},
            ]
        },
    ]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print("text:", text)
    # image_inputs, video_inputs = process_vision_info([messages])
    inputs = processor(text=[text], images=[image], padding=True, return_tensors="pt")
    inputs = inputs.to('cuda')

    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Trim the prompt tokens, keeping only the newly generated ones
    generated_ids = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, output_ids)]
    output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    if return_input:
        return output_text[0], inputs
    else:
        return output_text[0]

response = inference(image_path, prompt, question_type, sys_prompt=SYS_PROMPT, max_new_tokens=2048)
print(response)
```
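
Since the model wraps its output in `<think>`/`<answer>` tags, the final answer can be pulled out with a small regex helper. A minimal sketch:

```python
import re

def extract_answer(text: str) -> str:
    """Return the content of the last <answer>...</answer> block, or the raw text."""
    matches = re.findall(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return matches[-1].strip() if matches else text.strip()

print(extract_answer(response))  # e.g. "5" for the CLEVR-Math example above
```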