AngusHuang committed · verified
Commit 4a280b1 · 1 Parent(s): 99ef21a

Rename model

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -12,11 +12,11 @@ tags:
 ---
 
 
-# ToolACE-2-8B
+# ToolACE-2-Llama-3.1-8B
 
-ToolACE-2-8B is a finetuned model of LLaMA-3.1-8B-Instruct with our dataset [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE) tailored for tool usage.
+ToolACE-2-Llama-3.1-8B is a fine-tuned model of LLaMA-3.1-8B-Instruct with our dataset [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE) tailored for tool usage.
 Compared with [ToolACE-8B](https://huggingface.co/Team-ACE/ToolACE-8B), ToolACE-2-8B enhances the tool-usage ability by self-refinment tuning and task decomposition.
-ToolACE-2-8B achieves a state-of-the-art performance on the [Berkeley Function-Calling Leaderboard(BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard), rivaling the latest GPT-4 models.
+ToolACE-2-Llama-3.1-8B achieves a state-of-the-art performance on the [Berkeley Function-Calling Leaderboard(BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard), rivaling the latest GPT-4 models.
 
 
 ToolACE is an automatic agentic pipeline designed to generate **A**ccurate, **C**omplex, and div**E**rse tool-learning data.
@@ -42,7 +42,7 @@ Here we provide a code snippet with `apply_chat_template` to show you how to loa
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "Team-ACE/ToolACE-2-8B"
+model_name = "Team-ACE/ToolACE-2-Llama-3.1-8B"
 
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(
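The README snippet shown in the diff is truncated at `from_pretrained(`. A minimal sketch of how such a `apply_chat_template` tool-calling flow typically continues under the renamed repo id, assuming the standard `transformers` chat-template API; the `get_weather` tool schema and the user message here are hypothetical illustrations, not part of the commit:

```python
# Sketch of prompting the renamed model with a tool schema.
# The tool definition and user message below are hypothetical examples.
model_name = "Team-ACE/ToolACE-2-Llama-3.1-8B"  # new repo id after the rename

# A hypothetical function schema in the OpenAI-style format that
# `tokenizer.apply_chat_template(..., tools=...)` accepts.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]

if __name__ == "__main__":
    # Deferred import and load: pulling an 8B checkpoint is heavy, so the
    # data structures above can be inspected without it.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Render the chat plus tool schema with the model's chat template.
    inputs = tokenizer.apply_chat_template(
        messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens (the model's tool call / answer).
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```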