---
base_model: google/gemma-7b-it
tags:
- adapter
- lora
- gemma
- peft
- causal-lm
---
# Code Alpaca Adapter for Gemma-7B-IT
This is a LoRA adapter fine-tuned on the **code_alpaca** dataset, intended for use with the base model `google/gemma-7b-it`.
## Usage
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter weights on top of it
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "RealSilvia/code_alpaca-adapter")
```
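Once the adapter is attached, the model can be used like any other causal LM. Below is a minimal inference sketch; the prompt text and generation settings are illustrative, and it assumes the base model's tokenizer and chat template are used.

```python
from transformers import AutoTokenizer

# The adapter reuses the base model's tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

# Example instruction (illustrative); gemma-7b-it expects chat-formatted input
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a completion with the adapted model
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For deployment without the PEFT wrapper, the adapter can also be folded into the base weights with `model.merge_and_unload()`.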