# Code Alpaca Adapter for Gemma-7B-IT

This repository contains a LoRA adapter fine-tuned on the code_alpaca dataset, intended to be loaded on top of google/gemma-7b-it.

## Usage

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base instruction-tuned model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "RealSilvia/code_alpaca-adapter")
```
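
Continuing from the snippet above, a minimal generation sketch. The chat-template call uses the standard Gemma-IT template, and the prompt and generation settings are illustrative assumptions rather than part of the original card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

# Format the request with Gemma's instruction-tuned chat template
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate and print only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```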
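
If you prefer a single standalone checkpoint without a runtime peft dependency, the LoRA weights can be merged into the base model. A minimal sketch, continuing from the snippets above; the output path is an illustrative placeholder:

```python
# Fold the adapter weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("gemma-7b-it-code-alpaca-merged")
tokenizer.save_pretrained("gemma-7b-it-code-alpaca-merged")
```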