Disentangling Reasoning and Knowledge in Medical Large Language Models
We provide our reasoning-vs-knowledge question classifier, which can be loaded and run as shown below:
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the classifier and its tokenizer from the Hugging Face Hub
model_name = "zou-lab/BioMedBERT-Knowledge-vs-Reasoning"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "What is the full form of RBC?"
threshold = 0.75

# Tokenize the question, truncating to the model's 512-token limit
inputs = tokenizer(question, return_tensors="pt", truncation=True, max_length=512)

# Run inference without gradient tracking
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Convert logits to class probabilities and apply the decision threshold
probs = torch.nn.functional.softmax(logits, dim=1).cpu().numpy()
positive_prob = probs[:, 1]
prediction = (positive_prob >= threshold).astype(int)
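The same model can also score many questions at once. The snippet below is a minimal batched-inference sketch under two assumptions carried over from the example above: that index 1 of the softmax output is the positive class, and that the 0.75 threshold applies unchanged. The example questions are ours; the actual label mapping should be confirmed via model.config.id2label.

# Minimal batched-inference sketch; the label mapping (index 1 = positive class)
# is an assumption carried over from the single-question example above.
questions = [
    "What is the full form of RBC?",
    "A 45-year-old man presents with chest pain radiating to the left arm. What is the most likely diagnosis?",
]
# Pad to a common length so the batch can be processed in one forward pass
inputs = tokenizer(questions, return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.nn.functional.softmax(model(**inputs).logits, dim=1)
positive_probs = probs[:, 1].cpu().numpy()
predictions = (positive_probs >= threshold).astype(int)
for q, p, pred in zip(questions, positive_probs, predictions):
    print(f"{pred}  ({p:.2f})  {q}")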
Citation
@article{thapa2025disentangling,
  title={Disentangling Reasoning and Knowledge in Medical Large Language Models},
  author={Thapa, Rahul and Wu, Qingyang and Wu, Kevin and Zhang, Harrison and Zhang, Angela and Wu, Eric and Ye, Haotian and Bedi, Suhana and Aresh, Nevin and Boen, Joseph and Reddy, Shriya and Athiwaratkun, Ben and Song, Shuaiwen Leon and Zou, James},
  journal={arXiv preprint arXiv:2505.11462},
  year={2025},
  url={https://arxiv.org/abs/2505.11462}
}