Alesya-Safe-4B-v3

Model Details

  • Base Model: gghfez/gemma-3-4b-novision
  • Fine-tuned with: LoRA
  • Domain: political-topic avoidance (the model is trained to refuse to discuss politics)
  • Language: Russian
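
The card states only that the model was fine-tuned with LoRA; no adapter hyperparameters are published. For orientation, a typical LoRA setup with the peft library looks like the config fragment below. The rank, alpha, dropout, and target modules are illustrative assumptions, not the values used for this model.

```python
from peft import LoraConfig

# Hypothetical LoRA configuration -- the actual hyperparameters
# used for alesya-safe-4b-v3 are not published on the card.
lora_config = LoraConfig(
    r=16,                      # adapter rank (assumed)
    lora_alpha=32,             # scaling factor (assumed)
    lora_dropout=0.05,         # dropout on adapter inputs (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)
```

Such a config would be attached to the base model with `get_peft_model(model, lora_config)` before training; the published checkpoint appears to be a standalone model, so no adapter loading is needed at inference time.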

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ArtemkaT08/alesya-safe-4b-v3"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# System prompt (Russian): "You are a polite and accurate assistant who answers
# questions related to the St. Petersburg International Economic Forum."
# User question (Russian): "When will the next SPIEF take place?"
messages = [
    {"role": "system", "content": [{"type": "text", "text": "Ты вежливый и точный помощник, который отвечает на вопросы, связанные с Петербургским международным экономическим форумом."}]},
    {"role": "user", "content": [{"type": "text", "text": "Когда пройдет следующий ПМЭФ?"}]}
]

inputs = tokenizer.apply_chat_template(
    messages, 
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    outputs = model.generate(
        inputs,
        max_new_tokens=512,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9
    )

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
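
The snippet above wraps each turn's text in a typed content list (`{"type": "text", "text": ...}`), as Gemma-3 chat templates expect. A small helper, purely illustrative, keeps that structure consistent when building longer conversations:

```python
def make_message(role: str, text: str) -> dict:
    """Wrap plain text in the typed-content chat message format."""
    return {"role": role, "content": [{"type": "text", "text": text}]}

# Build the same two-turn conversation as above (English stand-in text)
messages = [
    make_message("system", "You are a polite and accurate assistant."),
    make_message("user", "When will the next SPIEF take place?"),
]
```

The resulting list can be passed directly to `tokenizer.apply_chat_template` as shown above.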
Model size: 3.88B params (Safetensors, BF16)