Model Card for FastChat-T5 3B Q8

This model is a quantized version of lmsys/fastchat-t5-3b-v1.0, using int8 quantization.

Model Details

Model Description

The model was quantized using CTranslate2 with the following command:

ct2-transformers-converter --model lmsys/fastchat-t5-3b-v1.0 --output_dir lmsys/fastchat-t5-3b-ct2 --copy_files generation_config.json added_tokens.json tokenizer_config.json special_tokens_map.json spiece.model --quantization int8 --force --low_cpu_mem_usage

If you want to perform the quantization yourself, install the following dependencies first (a Python alternative to the CLI command is sketched below):

pip install -qU ctranslate2 transformers[torch] sentencepiece accelerate

  • Shared by: Lim Chee Kin
  • License: Apache 2.0
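
If you prefer Python over the CLI, CTranslate2 also exposes the converter programmatically. The sketch below is a minimal equivalent of the command above, assuming fastchat-t5-3b-ct2 as an arbitrary local output directory:

import ctranslate2.converters

# Programmatic equivalent of the ct2-transformers-converter command above:
# load lmsys/fastchat-t5-3b-v1.0 and write an int8 CTranslate2 model.
converter = ctranslate2.converters.TransformersConverter(
    "lmsys/fastchat-t5-3b-v1.0",
    copy_files=[
        "generation_config.json",
        "added_tokens.json",
        "tokenizer_config.json",
        "special_tokens_map.json",
        "spiece.model",
    ],
    low_cpu_mem_usage=True,
)
converter.convert("fastchat-t5-3b-ct2", quantization="int8", force=True)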

How to Get Started with the Model

Use the code below to get started with the model.

import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# CTranslate2 loads models from a local directory, so fetch the converted
# weights from the Hub first.
model_path = snapshot_download(repo_id="limcheekin/fastchat-t5-3b-ct2")

translator = ctranslate2.Translator(model_path)
tokenizer = transformers.AutoTokenizer.from_pretrained("limcheekin/fastchat-t5-3b-ct2")

# CTranslate2 operates on token strings, so encode the text to ids and
# map the ids to tokens.
input_text = "translate English to German: The house is wonderful."
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))

# Run the encoder-decoder model on a batch of one input.
results = translator.translate_batch([input_tokens])

# Take the best hypothesis and map its tokens back to text.
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))

print(output_text)

The code above is adapted from https://opennmt.net/CTranslate2/guides/transformers.html#t5.

The key method in the code above is translate_batch; its supported parameters are documented in the CTranslate2 Python reference at https://opennmt.net/CTranslate2/python/ctranslate2.Translator.html.
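
For example, the snippet below shows a few commonly used decoding options, reusing translator and input_tokens from the example above; the values are arbitrary illustrations, not tuned recommendations:

# Illustrative decoding options; the values are arbitrary examples.
results = translator.translate_batch(
    [input_tokens],
    beam_size=1,               # 1 enables greedy/sampling decoding; >1 enables beam search
    sampling_topk=40,          # sample from the 40 most likely tokens at each step
    sampling_temperature=0.7,  # <1 sharpens, >1 flattens the sampling distribution
    max_decoding_length=256,   # cap on the number of generated tokens
    repetition_penalty=1.2,    # penalize tokens that were already generated
)
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])))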
