Support our open-source dataset and model releases!

Esper 3: Qwen3-4B, Qwen3-8B, Qwen3-14B

Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3.

Prompting Guide

Esper 3 uses the Qwen 3 prompt format.

Esper 3 is a reasoning finetune; we recommend enable_thinking=True for all chats.
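
To see exactly what prompt the chat template produces, you can render it without tokenizing. A minimal sketch (the single-turn message here is illustrative); the rendered string follows the ChatML-style layout used by Qwen 3:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Qwen3-14B-Esper3")
print(tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # recommended for Esper 3
))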

Example inference script to get started:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ValiantLabs/Qwen3-14B-Esper3"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Write a Terraform configuration that uses the `aws_ami` data source to find the latest Amazon Linux 2 AMI. Then, provision an EC2 instance using this dynamically determined AMI ID."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# generate the model's response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# split the output into thinking content and the final answer
try:
    # find the last occurrence of token id 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)

Esper 3 is created by Valiant Labs.

Check out our HuggingFace page to see all of our models!

We care about open source. For everyone to use.
