---
tags:
- vllm
- vision
- fp8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: google/gemma-3-4b-it
library_name: transformers
---
# gemma-3-4b-it-FP8-Dynamic

## Model Overview

- **Model Architecture:** gemma-3-4b-it
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
## Model Optimizations

This model was obtained by quantizing the weights of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) to the FP8 data type, with activations dynamically quantized to FP8 at runtime. The quantized model is ready for inference with vLLM >= 0.5.2.
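Here, "dynamic" means that weight scales are fixed when the checkpoint is produced, while activation scales are computed per token at inference time, so no calibration data is required. Below is a minimal numeric sketch of the idea, assuming the FP8 E4M3 format; the helper name and example values are illustrative, not the llm-compressor implementation:

```python
import torch

FP8_E4M3_MAX = 448.0  # largest magnitude representable in torch.float8_e4m3fn

def quantize_fp8_dynamic(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Per-token dynamic FP8 quantization: each row's scale is derived
    from its absolute maximum at runtime, with no calibration pass."""
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_E4M3_MAX
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

# Round-trip one activation row and inspect the quantization error.
x = torch.randn(1, 8)
x_fp8, scale = quantize_fp8_dynamic(x)
x_restored = x_fp8.to(torch.float32) * scale
print((x - x_restored).abs().max())  # small, format-limited error
```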
## Deployment

### Use with vLLM
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from transformers import AutoProcessor

# Define model name once
model_name = "RedHatAI/gemma-3-4b-it-FP8-dynamic"

# Load image and processor
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

# Build multimodal prompt; add_generation_prompt=True appends the assistant
# turn prefix, so no empty assistant message is needed.
chat = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    },
]
prompt = processor.apply_chat_template(chat, add_generation_prompt=True)

# Initialize model
llm = LLM(model=model_name, trust_remote_code=True)

# Run inference
inputs = {"prompt": prompt, "multi_modal_data": {"image": [image]}}
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))

# Display result
print("RESPONSE:", outputs[0].outputs[0].text)
```
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
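For example, a server can be started with `vllm serve` and queried with the standard OpenAI Python client; the port, API key, and image URL below are placeholders:

```python
# Start the server first, e.g.:
#   vllm serve RedHatAI/gemma-3-4b-it-FP8-dynamic
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="RedHatAI/gemma-3-4b-it-FP8-dynamic",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```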
## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.

### Model Creation Code
```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Load model.
model_id = "google/gemma-3-4b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Recipe: dynamic FP8 quantization of all Linear layers, keeping the
# lm_head, vision tower, and multimodal projector in higher precision.
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        sequential_targets=["Gemma3DecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-FP8-Dynamic"

# Perform oneshot
oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
    output_dir=SAVE_DIR,
)
```
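After the one-shot pass, saving the processor alongside the quantized weights makes `SAVE_DIR` a self-contained checkpoint, and a short generation can confirm the model still responds sensibly. This is a minimal sanity check, not part of the original recipe; the prompt text is illustrative:

```python
# Save the processor so SAVE_DIR can be loaded without the original repo.
processor.save_pretrained(SAVE_DIR)

# Quick text-only sanity check on the quantized model.
sample = processor(text="What is FP8 quantization?", return_tensors="pt").to(model.device)
output = model.generate(**sample, max_new_tokens=32)
print(processor.decode(output[0], skip_special_tokens=True))
```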
## Evaluation

The model was evaluated using lm-evaluation-harness on the OpenLLM v1 text benchmark; vision benchmark results (MMMU, ChartQA) are reported in the Accuracy table below. The text evaluations were conducted using the following commands:

### Evaluation Commands

#### OpenLLM v1
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True,enforce_eager=True \
  --tasks openllm \
  --batch_size auto
```
### Accuracy

| Category | Metric | google/gemma-3-4b-it | RedHatAI/gemma-3-4b-it-FP8-Dynamic | Recovery (%) |
|---|---|---|---|---|
| OpenLLM V1 | ARC Challenge | 56.57% | 57.08% | 100.90% |
| | GSM8K | 76.12% | 75.51% | 99.20% |
| | Hellaswag | 74.96% | 74.92% | 99.95% |
| | MMLU | 58.38% | 57.98% | 99.32% |
| | TruthfulQA (mc2) | 51.87% | 51.62% | 99.52% |
| | Winogrande | 70.32% | 71.03% | 101.01% |
| | **Average Score** | **64.70%** | **64.69%** | **99.98%** |
| Vision Evals | MMMU (val) | 39.89% | 38.33% | 96.09% |
| | ChartQA | 50.76% | 51.60% | 101.65% |
| | **Average Score** | **45.33%** | **44.97%** | **98.87%** |
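For reference, the Recovery column is the quantized score expressed as a percentage of the baseline score; a one-line check using the ARC Challenge row from the table:

```python
baseline, quantized = 56.57, 57.08  # ARC Challenge scores from the table
recovery = 100 * quantized / baseline
print(f"{recovery:.2f}%")  # 100.90%, matching the Recovery column
```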