Model Summary

UnifiedReward-Think-qwen-7b is the first unified multimodal chain-of-thought (CoT) reward model, capable of multi-dimensional, step-by-step, long-chain reasoning for both visual understanding and visual generation reward tasks.

For further details, please refer to our paper (arXiv:2505.03318) and GitHub repository.

Quick Start

All inference code is provided in our GitHub repository.
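
This example assumes the transformers, qwen-vl-utils, and accelerate packages are installed (e.g., pip install transformers qwen-vl-utils accelerate); qwen-vl-utils provides the process_vision_info helper used below, and accelerate is needed for device_map="auto".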

We take image understanding assessment as an example here:

import requests
import torch
import warnings
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

warnings.filterwarnings("ignore")

model_path = "CodeGoat24/UnifiedReward-Think-qwen-7b"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
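
# Optional: if flash-attn is installed, FlashAttention 2 can speed up inference.
# This variant is a suggestion, not part of the original example:
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     model_path,
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )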


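# Example image (a handwritten number seven) from the LLaVA-Critic blog.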
url = "https://github.com/LLaVA-VL/blog/blob/main/2024-10-03-llava-critic/static/images/critic_img_seven.png?raw=True"
image = Image.open(requests.get(url, stream=True).raw)

Query = 'What does this image present?'
R1 = 'The image is a black and white sketch of a line that appears to be in the shape of a cross. The line is a simple and straightforward representation of the cross shape, with two straight lines intersecting at a point.'
R2 = 'This is a handwritten number seven.'

prompt_text = ("Given a question and a reference image, please analyze in detail the two provided answers (Answer 1 and Answer 2). " \
            "Evaluate them based on the following three core dimensions:\n" \
            "1. Semantic accuracy: How well the answer reflects the visual content of the image\n" \
            "2. Correctness: Whether the answer is logically and factually correct\n" \
            "3. Clarity: Whether the answer is clearly and fluently expressed\n" \
            "You may also consider additional dimensions if you find them relevant (e.g., reasoning ability, attention to detail, multimodal grounding, etc.). " \
            "For each dimension, provide a score from 1 to 10 for both answers, and briefly explain your reasoning. " \
            "Then, compute the total score for each answer by explicitly adding the scores for all dimensions and showing the full calculation. " \
            "Enclose your full reasoning within <think> and </think> tags. " \
            "Then, in the <answer> tag, output exactly one of the following: 'Answer 1 is better' or 'Answer 2 is better'. No other text is allowed in the <answer> section.\n\n" \
            "Example format:\n" \
            "<think>\n" \
            "1. Semantic accuracy: Answer 1 (9/10) - ...; Answer 2 (7/10) - ...\n" \
            "2. Correctness: Answer 1 (8/10) - ...; Answer 2 (7/10) - ...\n" \
            "3. Clarity: Answer 1 (9/10) - ...; Answer 2 (8/10) - ...\n" \
            "[Additional dimensions if any]: Answer 1 (6/10) - ...; Answer 2 (7/10) - ...\n" \
            "Total score:\nAnswer 1: 9+8+9+6=32\nAnswer 2: 7+7+8+7=29\n" \
            "</think>\n" \
            "<answer>Answer 1 is better</answer>\n\n" \
            "**Note: In the example above, scores and the final answer are placeholders meant only to demonstrate the format. Your actual evaluation should be based on the quality of two given answers.**\n\n"
            f"Your task is provided as follows:\nQuestion: [{Query}]\nAnswer 1: [{R1}]\nAnswer 2: [{R2}]")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": prompt_text},
        ],
    }
]

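# Render the chat template into a single prompt string and preprocess the image input.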
chat_input = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)

inputs = processor(
    text=[chat_input],
    images=image_inputs,
    videos=video_inputs,
    return_tensors="pt",
    padding=True
).to("cuda")

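# Generate the chain-of-thought evaluation (reasoning in <think> tags, verdict in <answer>).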
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=4096)
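# Keep only the newly generated tokens by trimming off the prompt.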
generated_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output = processor.batch_decode(generated_trimmed, skip_special_tokens=True)[0]

print(output)
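
Since the model encloses its reasoning in <think> tags and its verdict in an <answer> tag, the final judgment can be extracted programmatically. A minimal sketch (the regex and the verdict variable are illustrative, not part of the official inference code):

import re

# Per the prompt above, the <answer> tag should contain exactly
# 'Answer 1 is better' or 'Answer 2 is better'.
match = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
verdict = match.group(1).strip() if match else None
print(verdict)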

Citation

@article{UnifiedReward-Think,
  title={Unified Multimodal Chain-of-Thought Reward Model through Reinforcement Fine-Tuning},
  author={Wang, Yibin and Li, Zhimin and Zang, Yuhang and Wang, Chunyu and Lu, Qinglin and Jin, Cheng and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2505.03318},
  year={2025}
}