# TinyLlama-ECommerce-Chatbot
This is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 for e-commerce customer service chatbot applications.
## Model Details
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Task: E-commerce customer service conversation
- Training Data: ChatML-formatted e-commerce conversations (see the prompt-format sketch after this list)
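The prompts wrap each conversational turn in role markers terminated by `<|im_end|>`, as in the Usage example below. A minimal helper for assembling such prompts, for illustration only (`format_chat` is not part of this repo):

```python
def format_chat(system_message: str, user_message: str) -> str:
    """Assemble a prompt in the turn format used by the Usage example below.

    Illustrative helper only; it is not shipped with the model.
    """
    return (
        f"<|system|>\n{system_message}<|im_end|>\n"
        f"<|user|>\n{user_message}<|im_end|>\n"
        f"<|assistant|>\n"
    )

prompt = format_chat("You are a helpful e-commerce assistant.", "What's your return policy?")
```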
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and the fine-tuned tokenizer
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer = AutoTokenizer.from_pretrained("ShenghaoYummy/TinyLlama-ECommerce-Chatbot")

# Attach the fine-tuned LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "ShenghaoYummy/TinyLlama-ECommerce-Chatbot")

# Generate a response for a formatted prompt
def generate_response(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=150,
        do_sample=True,  # sampling must be enabled for temperature to take effect
        temperature=0.8,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
prompt = "<|system|>\nYou are a helpful e-commerce assistant.<|im_end|>\n<|user|>\nWhat's your return policy?<|im_end|>\n<|assistant|>\n"
response = generate_response(prompt)
print(response)
```
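For deployment, the LoRA adapter can optionally be folded into the base weights so the model runs without the `peft` wrapper. A minimal sketch using PEFT's `merge_and_unload` (the save path is illustrative):

```python
# Merge the LoRA weights into the base model for standalone inference
merged_model = model.merge_and_unload()

# Save the merged model and tokenizer (path is an example, not part of this repo)
merged_model.save_pretrained("tinyllama-ecommerce-merged")
tokenizer.save_pretrained("tinyllama-ecommerce-merged")
```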
## Training Details
This model was fine-tuned using:
- LoRA with rank 16
- Learning-rate tuning via Optuna (see the sketch after this list)
- MLflow experiment tracking
- DVC pipeline management
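The full training configuration is not published with this card. The sketch below shows how a rank-16 LoRA setup and an Optuna learning-rate search could be wired together; the alpha/dropout values, target modules, trial count, and the `train_and_eval` helper are assumptions for illustration, not the actual pipeline:

```python
import optuna
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Rank-16 LoRA as stated above; alpha, dropout, and target modules are assumed
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

def objective(trial: optuna.Trial) -> float:
    # Optuna samples a learning rate on a log scale; train_and_eval is a
    # hypothetical helper that fine-tunes the adapter and returns eval loss.
    lr = trial.suggest_float("learning_rate", 1e-5, 5e-4, log=True)
    base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    model = get_peft_model(base, lora_config)
    return train_and_eval(model, learning_rate=lr)  # hypothetical helper

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)  # trial count is illustrative
```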
## Limitations
- Optimized for e-commerce customer service scenarios; may not perform well on general conversation topics
- Responses reflect patterns in the training data, so any policies it states (e.g., return windows) are learned text, not authoritative answers
## Citation
If you use this model, please cite:

```bibtex
@misc{ecommerce-chatbot-tinyllama,
  title={Fine-tuned TinyLlama for E-commerce Customer Service},
  author={Your Name},
  year={2024},
  url={https://huggingface.co/ShenghaoYummy/TinyLlama-ECommerce-Chatbot}
}
```