# Sanity Check Model

This model is a LoRA fine-tune of Qwen/Qwen3-0.6B on the sanity check dataset for multiple-choice question answering (MCQA).

## Model Details

- Base model: Qwen/Qwen3-0.6B
- Fine-tuning method: LoRA (see the adapter-loading sketch below)
- Task: Multiple Choice Question Answering (MCQA)
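Because the repository ships a LoRA adapter (note the PEFT version listed under Framework versions), the adapter can also be attached to the base model explicitly. A minimal sketch, assuming the repo hosts adapter weights rather than merged weights:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
model = PeftModel.from_pretrained(base, "RikoteMaster/sanity_check_model")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Optionally fold the adapter into the base weights for adapter-free inference
model = model.merge_and_unload()
```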

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading the adapter repo directly requires `peft` to be installed
model = AutoModelForCausalLM.from_pretrained("RikoteMaster/sanity_check_model")
tokenizer = AutoTokenizer.from_pretrained("RikoteMaster/sanity_check_model")

# Example usage: present the question with lettered answer choices
question = "What is 2+2?"
choices = ["3", "4", "5", "6"]

messages = [{
    "role": "user",
    "content": question + "\n" + "\n".join(f"{chr(65 + i)}. {choice}" for i, choice in enumerate(choices))
}]

# Build the chat-formatted prompt, tokenize, and generate a short answer
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)

# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
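The model is expected to reply with one of the choice letters. A small sketch for pulling that letter out of the decoded text; the helper name and regex are illustrative assumptions, since the exact answer format depends on the fine-tuning data:

```python
import re

def extract_choice(generated: str, num_choices: int = 4) -> str | None:
    # Hypothetical helper: returns the first standalone choice letter, if any
    last = chr(64 + num_choices)  # "D" when there are 4 choices
    match = re.search(rf"\b([A-{last}])\b", generated)
    return match.group(1) if match else None

print(extract_choice("B. 4"))  # -> "B"
```
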
### Framework versions

- PEFT 0.15.2