Corrected typos in README
README.md
CHANGED
````diff
@@ -109,7 +109,7 @@ conversation_history = [
         "content": prompt
     }
 ]
-inputs = self.tokenizer.apply_chat_template(
+inputs = tokenizer.apply_chat_template(
     conversation=conversation_history,
     add_generation_prompt=True,
     tokenize=True,
@@ -118,7 +118,7 @@ inputs = self.tokenizer.apply_chat_template(
 ).to(model.device)
 
 outputs = model.generate(**inputs, max_new_tokens=2048)
-print(tokenizer.decode(outputs, skip_special_tokens=True))
+print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
 ```
 
 ## Training Details
@@ -145,7 +145,7 @@ In the original VeriFastScore pipeline, evidence is aggregated at the sentence l
 
 - **Training regime:** bf16 mixed precision
 - **Optimizer**: AdamW
-- **Scheduler**: Cosine decay
+- **Scheduler**: Cosine decay
 - **Batch size**: 8 (effective)
 - **Epochs**: 10 (5+5)
 
````
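The fix in the second hunk is worth spelling out: `generate` returns the prompt tokens followed by the newly generated ones, so decoding `outputs[0]` whole would echo the prompt back. A minimal sketch of the slicing idea with plain Python lists (the token ids below are made up for illustration):

```python
# generate() returns one sequence: prompt tokens + newly generated tokens,
# so the corrected README line slices off the prompt before decoding.
prompt_ids = [101, 2054, 2003]               # hypothetical prompt token ids
output_ids = prompt_ids + [2026, 3437, 102]  # hypothetical generate() output

# Equivalent of outputs[0][len(inputs[0]):] in the corrected line:
new_ids = output_ids[len(prompt_ids):]
print(new_ids)  # [2026, 3437, 102]
```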
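The training bullets pair AdamW with a cosine-decay scheduler; a minimal sketch of the cosine-decay schedule itself (the peak learning rate `lr_max` here is an assumption, not taken from the README):

```python
import math

def cosine_decay(step, total_steps, lr_max=2e-5, lr_min=0.0):
    """Learning rate decays from lr_max to lr_min along a half cosine."""
    progress = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

print(cosine_decay(0, 100))    # lr_max at the start of training
print(cosine_decay(100, 100))  # decays to lr_min at the end
```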