joinus = """
## Join us:
🌟TeamTonic🌟 is always making cool demos! Join our active builders' 🛠️ community 👻 on [Discord](https://discord.gg/qdfnvSPcqP). Find us on 🤗 Hugging Face at [MultiTransformer](https://huggingface.co/MultiTransformer) and on 🌐 GitHub at [Tonic-AI](https://github.com/tonic-ai), and contribute via 🌟 [Build Tonic](https://git.tonic-ai.com/contribute). 🤗 Big thanks to Yuvi Sharma and all the folks at Hugging Face for the community grant!
"""
title = """# 🙋🏻‍♂️ Welcome to Tonic's 🤖 OpenReasoning-Nemotron-14B Demo 🚀"""
description = """nvidia/🤖OpenReasoning-Nemotron-14B is a reasoning model post-trained for math, code, and science solution generation. It demonstrates exceptional performance across challenging reasoning benchmarks.
"""
presentation1 = """Try this model on [Hugging Face](https://huggingface.co/nvidia/OpenReasoning-Nemotron-14B).
OpenReasoning-Nemotron-14B is a large language model (LLM) derived from Qwen2.5-14B-Instruct. It is a reasoning model post-trained for math, code, and science solution generation, and it has been evaluated with up to 64K output tokens. OpenReasoning models are available in four sizes: 1.5B, 7B, 14B, and 32B.
The models demonstrate exceptional performance across a suite of challenging reasoning benchmarks. The 14B model consistently sets new state-of-the-art records for its size class, achieving:
- **AIME24**: 87.8% pass@1
- **AIME25**: 82.0% pass@1
- **HMMT Feb 25**: 71.2% pass@1
- **LiveCodeBench v6**: 67.9% pass@1
- **GPQA**: 71.6% pass@1
- **MMLU-PRO**: 77.5% pass@1
### License
Creative Commons Attribution 4.0 International (CC-BY-4.0); the underlying Qwen2.5 base model is distributed under the Apache 2.0 License."""
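# A minimal, hedged sketch of loading the model with the standard
# transformers API. The repo id comes from the model card text above;
# the dtype and device_map choices are illustrative assumptions, not
# necessarily this demo's actual configuration.
def load_model_sketch():
    """Hypothetical helper: load OpenReasoning-Nemotron-14B and its tokenizer."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nvidia/OpenReasoning-Nemotron-14B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption: bf16 to fit a 14B model on one GPU
        device_map="auto",
    )
    return tokenizer, model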
presentation2 = """
### Model Architecture
🤖OpenReasoning-Nemotron-14B uses a dense decoder-only Transformer architecture based on Qwen2.5-14B-Instruct. It has 14 billion parameters and supports up to 64,000 output tokens for extended reasoning chains.
**Architecture Type:** Dense decoder-only Transformer model
**Network Architecture:** Qwen2.5-14B-Instruct
**Model Size:** 14B parameters
**Max Output Tokens:** 64,000 """
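# Hedged sketch of generation with a long output budget, since the model
# card above notes support for up to 64,000 output tokens. The chat-template
# call is the standard transformers API; the helper name and default budget
# are illustrative assumptions rather than this demo's actual settings.
def generate_sketch(tokenizer, model, prompt, max_new_tokens=64000):
    """Hypothetical helper: run one user turn and decode the new tokens."""
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )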
customtool = """{
"name": "custom_tool",
"description": "A custom tool defined by the user",
"parameters": {
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "First parameter of the custom tool"
},
"param2": {
"type": "string",
"description": "Second parameter of the custom tool"
}
},
"required": ["param1"]
}
}"""
example = """{{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {{
"type": "object",
"properties": {{
"location": {{
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}},
"unit": {{
"type": "string",
"enum": ["celsius", "fahrenheit"]
}}
}},
"required": ["location"]
}}
}}""" |