neural-coder committed
Commit a51db12 · verified · 1 Parent(s): 00a3411

Update README.md

Files changed (1)
  1. README.md +0 -38
README.md CHANGED
@@ -12,29 +12,6 @@ tags:
 ---
 
 
- # ToolACE-2-Llama-3.1-8B
-
- ToolACE-2-Llama-3.1-8B is a fine-tuned version of LLaMA-3.1-8B-Instruct trained on our dataset [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE), tailored for tool usage.
- Compared with [ToolACE-8B](https://huggingface.co/Team-ACE/ToolACE-8B), ToolACE-2-8B further improves tool-usage ability through self-refinement tuning and task decomposition.
- ToolACE-2-Llama-3.1-8B achieves state-of-the-art performance on the [Berkeley Function-Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard), rivaling the latest GPT-4 models.
-
-
- ToolACE is an automatic agentic pipeline designed to generate **A**ccurate, **C**omplex, and div**E**rse tool-learning data.
- ToolACE leverages a novel self-evolution synthesis process to curate a comprehensive pool of 26,507 diverse APIs.
- Dialogs are further generated through the interplay among multiple agents, guided by a formalized thinking process.
- To ensure data accuracy, we implement a dual-layer verification system combining rule-based and model-based checks.
- More details can be found in our arXiv paper: [*ToolACE: Winning the Points of LLM Function Calling*](https://arxiv.org/abs/2409.00920).
-
-
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66bf01f45bdd611f9a602087/WmyWOYtg_dbTgwQmvlqcz.jpeg)
-
-
- Here are the scores of ToolACE-2-8B on [BFCL-v3](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard), where it achieves the highest scores among 8B-scale models.
-
-
-
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/646735a98334813a7ae29500/smLEpFoS7W6OIkeE92_5O.jpeg)
-
 
 ### Usage
 Here we provide a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate function calls with the given functions.
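(The snippet itself sits in the unchanged middle of the README and is not reproduced in this diff. For orientation, a minimal sketch of such usage follows, assuming the standard `transformers` chat-template API: the model id `Team-ACE/ToolACE-2-Llama-3.1-8B` is taken from the card, the `get_weather` tool schema is a hypothetical example, and `tools=` support in `apply_chat_template` depends on the installed `transformers` version and the model's chat template.)

```python
# Minimal sketch, not the README's exact snippet: load the model and ask it
# to emit a function call via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Team-ACE/ToolACE-2-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# One tool described in JSON-schema style; `get_weather` is a hypothetical
# example, not one of the APIs in the ToolACE pool.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"],
        },
    }
]

messages = [{"role": "user", "content": "What's the weather like in Berlin?"}]

# apply_chat_template renders the tool schemas and the user turn into the
# model's expected prompt format and returns input ids as a tensor.
input_ids = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

With a prompt like this, a tool-tuned model is expected to respond with a structured call naming `get_weather` and its arguments rather than free-form text.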
@@ -140,18 +117,3 @@ Then you should be able to see the following function calls:
 
 
 
- ### Citation
-
- If you think ToolACE is useful in your work, please cite our paper:
-
- ```
- @inproceedings{liu2025toolace,
-   title={Tool{ACE}: Winning the Points of {LLM} Function Calling},
-   author={Weiwen Liu and Xu Huang and Xingshan Zeng and Xinlong Hao and Shuai Yu and Dexun Li and Shuai Wang and Weinan Gan and Zhengying Liu and Yuanqing Yu and Zezhong Wang and Yuxian Wang and Wu Ning and Yutai Hou and Bin Wang and Chuhan Wu and Xinzhi Wang and Yong Liu and Yasheng Wang and Duyu Tang and Dandan Tu and Lifeng Shang and Xin Jiang and Ruiming Tang and Defu Lian and Qun Liu and Enhong Chen},
-   booktitle={The Thirteenth International Conference on Learning Representations},
-   year={2025},
-   url={https://arxiv.org/abs/2409.00920}
- }
- ```
-
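(For context on the removed description above: the "dual-layer verification system combining rule-based and model-based checks" can be pictured as below. This is only an illustrative sketch of the concept with hypothetical names, not the paper's implementation; see the linked arXiv paper for the actual pipeline.)

```python
# Illustrative sketch of a dual-layer check on a generated function call:
# layer 1 applies cheap structural rules, layer 2 (stubbed) would ask a judge
# model whether the call semantically answers the request.

def rule_check(call: dict, schema: dict) -> bool:
    """Layer 1: the call names a known function, invents no argument names,
    and supplies every required argument."""
    if call.get("name") != schema["name"]:
        return False
    allowed = schema["parameters"]["properties"]
    args = call.get("arguments", {})
    if any(k not in allowed for k in args):
        return False  # hallucinated argument name
    return all(k in args for k in schema["parameters"].get("required", []))

def model_check(request: str, call: dict) -> bool:
    """Layer 2: a judge model would verify the call actually answers the
    user's request. Stubbed here as a placeholder."""
    return True  # replace with a query to a judge model

schema = {
    "name": "get_weather",  # hypothetical API, for illustration only
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}
call = {"name": "get_weather", "arguments": {"location": "Berlin"}}

keep = rule_check(call, schema) and model_check("What's the weather in Berlin?", call)
print("keep sample:", keep)  # -> keep sample: True
```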
 