---
license: mit
language:
  - en
base_model:
  - microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
pipeline_tag: text-classification
tags:
  - medical
---

# Disentangling Reasoning and Knowledge in Medical Large Language Models

We provide our reasoning-vs.-knowledge question classifier, fine-tuned from BiomedBERT. It can be loaded and run as shown below:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "zou-lab/BioMedBERT-Knowledge-vs-Reasoning"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "What is the full form of RBC?"
threshold = 0.75  # decision threshold on the positive-class probability

# Tokenize the question, truncating to BERT's 512-token context window.
inputs = tokenizer(question, return_tensors="pt", truncation=True, max_length=512)

model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to class probabilities and threshold the positive class.
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).cpu().numpy()
positive_prob = probs[:, 1]
prediction = (positive_prob >= threshold).astype(int)
```
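
To turn the predicted class index into a human-readable label, you can consult the model config's `id2label` mapping. Below is a minimal sketch, assuming the checkpoint ships with a populated label map; inspect `model.config.id2label` first, since the exact label names are not documented here:

```python
# Minimal sketch: map the predicted class index back to a label name.
# Assumes the checkpoint's config carries a meaningful id2label mapping;
# print model.config.id2label to verify the label names before relying on them.
label_map = model.config.id2label
for pred, p in zip(prediction, positive_prob):
    print(f"{question!r} -> {label_map[int(pred)]} (p={float(p):.3f})")
```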

## 📖 Citation

```bibtex
@article{thapa2025disentangling,
  title={Disentangling Reasoning and Knowledge in Medical Large Language Models},
  author={Thapa, Rahul and Wu, Qingyang and Wu, Kevin and Zhang, Harrison and Zhang, Angela and Wu, Eric and Ye, Haotian and Bedi, Suhana and Aresh, Nevin and Boen, Joseph and Reddy, Shriya and Athiwaratkun, Ben and Song, Shuaiwen Leon and Zou, James},
  journal={arXiv preprint arXiv:2505.11462},
  year={2025},
  url={https://arxiv.org/abs/2505.11462}
}
```