# dora_model-adapter

Adapter-only repository for a DoRA-finetuned Llama-400M model.
## Adapter Details

This is the DoRA (Weight-Decomposed Low-Rank Adaptation) adapter for lxaw/dora_model. The repository contains only the adapter weights; they must be loaded on top of the base model YongganFu/Llama-400M-12L.
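For context, DoRA decomposes each adapted weight matrix into a magnitude component and a direction component, and applies a LoRA-style low-rank update to the direction. Below is a minimal sketch of how an adapter like this one can be created with PEFT (version 0.9.0 or later, where `use_dora=True` was introduced); the rank, alpha, and target modules are illustrative assumptions, not the settings actually used to train this adapter.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("YongganFu/Llama-400M-12L")

# DoRA reuses the LoRA configuration with use_dora=True.
# NOTE: r, lora_alpha, and target_modules below are illustrative
# placeholders, not this adapter's actual training settings.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    use_dora=True,  # decompose weights into magnitude and direction
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```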
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first
base_model = AutoModelForCausalLM.from_pretrained("YongganFu/Llama-400M-12L")

# Load the DoRA adapter on top of it
model = PeftModel.from_pretrained(base_model, "lxaw/dora_model-adapter")

# The tokenizer comes from the base model
tokenizer = AutoTokenizer.from_pretrained("YongganFu/Llama-400M-12L")

# Example usage
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
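If the PEFT wrapper is not wanted at inference time, the adapter can be merged into the base weights with PEFT's `merge_and_unload()`. A minimal sketch, continuing from the snippet above; the output directory name is an arbitrary example:

```python
# Fold the DoRA adapter into the base weights. merge_and_unload() returns
# a plain transformers model, so the peft library is no longer needed to
# run inference with the saved result.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("dora_model-merged")  # hypothetical output path
tokenizer.save_pretrained("dora_model-merged")
```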