---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- math
- math-reasoning
- math-instruct
- reasoning
- problem-solving
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
base_model: Qwen/Qwen3-14B
datasets:
- zwhe99/DeepMath-103K
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---


**[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**


Cobalt 2 is a math and general reasoning specialist built on Qwen 3.
- Finetuned on high-difficulty problems from [the math-reasoning DeepMath dataset](https://huggingface.co/datasets/zwhe99/DeepMath-103K), with reasoning traces generated by DeepSeek R1!
- Improved [general and creative reasoning](https://huggingface.co/datasets/sequelbox/Raiden-DeepSeek-R1) to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference! (See the serving sketch after this list.)

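For server inference, here is a minimal sketch using [vLLM](https://github.com/vllm-project/vllm); the sampling values below are illustrative assumptions, not an official recommendation for Cobalt 2.

```python
# Minimal vLLM serving sketch (assumes `pip install vllm`); the sampling
# values here are illustrative, not an official recommendation.
from vllm import LLM, SamplingParams

llm = LLM(model="ValiantLabs/Qwen3-14B-Cobalt2")
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)

# chat() applies the model's chat template before generating
outputs = llm.chat(
    [{"role": "user", "content": "Prove that the product of two odd integers is odd."}],
    params,
)
print(outputs[0].outputs[0].text)
```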


Try Esper 3, our full-stack code, architecture, and DevOps assistant: [Qwen3-4B](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3), [Qwen3-8B](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3), [Qwen3-14B](https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3)


## Prompting Guide
Cobalt 2 uses the [Qwen 3](https://huggingface.co/Qwen/Qwen3-14B) prompt format.

Cobalt 2 is a reasoning finetune; **we recommend `enable_thinking=True` for all chats.**

Example inference script to get started:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ValiantLabs/Qwen3-14B-Cobalt2"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input (raw string so the LaTeX backslashes are not
# interpreted as Python escape sequences)
prompt = r"Evaluate the limit using the Central Limit Theorem: \[ \lim_{n\to\infty}p^{n}\sum_{k \geqslant{n(p^{-1}-1)}}^{\infty}\binom{n+k-1}{n-1}(1-p)^{k}. \]"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
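
The hard-coded `151668` above is the `</think>` token id in the Qwen 3 tokenizer. If you'd rather not rely on a magic number, the id can be looked up at runtime; a small sketch, assuming `</think>` is a single special token as it is for Qwen 3:

```python
# Look up the </think> token id instead of hard-coding it
# (assumes "</think>" is a single token in this tokenizer, as in Qwen 3)
think_end_id = tokenizer.convert_tokens_to_ids("</think>")
index = len(output_ids) - output_ids[::-1].index(think_end_id)
```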


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)


Cobalt 2 is created by [Valiant Labs](http://valiantlabs.ca/).

[Check out our HuggingFace page to see Esper 3 and all of our models!](https://huggingface.co/ValiantLabs)

We care about open source. For everyone to use.