Usage

import torch
from transformers import CsmForConditionalGeneration, AutoProcessor

model_id = "beyoru/kafka-sesame"
device = "cuda" if torch.cuda.is_available() else "cpu"

# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
model.eval()
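# optional: on CUDA devices the repo's FP16 weights can halve memory use;
# a hedged suggestion (the torch_dtype choice is an assumption), not required:
# model = CsmForConditionalGeneration.from_pretrained(
#     model_id, device_map=device, torch_dtype=torch.float16
# )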

model.generation_config.max_length = 250 # big enough to avoid recompilation
model.generation_config.max_new_tokens = None # would take precedence over max_length
model.generation_config.cache_implementation = "static"
model.depth_decoder.generation_config.cache_implementation = "static"
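# optional: with the static caches configured above, both forward passes can be
# compiled for faster repeated generation (pattern from the transformers CSM docs);
# speedups depend on your PyTorch/CUDA setup:
# model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# model.depth_decoder.forward = torch.compile(model.depth_decoder.forward, mode="reduce-overhead", fullgraph=True)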

# prepare the inputs
text = "[0]Hello from Sesame." # `[0]` for speaker id 0
inputs = processor(text, add_special_tokens=True).to(device)
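# switch speakers by changing the bracketed prefix, e.g. "[1]Hello from Sesame."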

# another equivalent way to prepare the inputs
conversation = [
    {"role": "0", "content": [{"type": "text", "text": "Hello from Sesame."}]},
]
inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    return_dict=True,
).to(device)

# infer the model
with torch.inference_mode():
    audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, "example_without_context.wav")
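
The chat template can also carry prior audio, so the generated speech is conditioned on a reference voice. Below is a minimal sketch following the context example in the transformers CSM documentation; the utterance text and the previous_utterance.wav path are placeholders to replace with your own data.

conversation = [
    {
        "role": "0",
        "content": [
            {"type": "text", "text": "Previous line from the same speaker."},
            {"type": "audio", "path": "previous_utterance.wav"},  # placeholder reference audio
        ],
    },
    {"role": "0", "content": [{"type": "text", "text": "Hello from Sesame."}]},
]
inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    return_dict=True,
).to(device)

with torch.inference_mode():
    audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, "example_with_context.wav")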
Model tree for beyoru/kafka_sesame

Base model: sesame/csm-1b, fine-tuned through unsloth/csm-1b.
