# FLAN-T5 Base Fine-Tuned on CNN/DailyMail
This model is a fine-tuned version of google/flan-t5-base
on the CNN/DailyMail dataset using the Hugging Face Transformers library.
## Task
Abstractive Summarization: Given a news article, generate a concise summary.
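As with other T5-family checkpoints, the article text is prefixed with a task instruction before tokenization; this model uses the `summarize:` prefix shown in the usage example. A minimal helper to build the prompt (the function name is illustrative, not part of the model's API):

```python
def build_prompt(article: str) -> str:
    """Prepend the T5-style task prefix expected by the summarizer."""
    return "summarize: " + article.strip()

print(build_prompt("The US president met with the Senate to discuss..."))
# → summarize: The US president met with the Senate to discuss...
```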
## Evaluation Results
The model was fine-tuned on 20,000 training samples and validated/tested on 2,000 samples. Evaluation was performed using ROUGE metrics:
| Metric | Score |
|---|---|
| ROUGE-1 | 25.33 |
| ROUGE-2 | 11.96 |
| ROUGE-L | 20.68 |
| ROUGE-Lsum | 23.81 |
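ROUGE-1 measures unigram overlap between generated and reference summaries. A minimal, self-contained sketch of the ROUGE-1 F1 computation (the scores above were produced with standard ROUGE tooling, which additionally applies stemming and score aggregation):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated and a reference summary.
    Illustrative only; real evaluations use the rouge_score package."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    # Clipped unigram matches: each reference token counts at most once.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"), 4))
# → 0.8333
```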
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("AbdullahAlnemr1/flan-t5-summarizer")
tokenizer = T5Tokenizer.from_pretrained("AbdullahAlnemr1/flan-t5-summarizer")

# T5 expects the "summarize: " task prefix before the article text.
input_text = "summarize: The US president met with the Senate to discuss..."
inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True)

# Beam search with early stopping; summaries are capped at 128 tokens.
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=128,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```