# Tool Finetuning Dataset

## Dataset Description

### Dataset Summary

This dataset is designed for fine-tuning language models to use tools (function calling) appropriately based on user queries. It consists of structured conversations in which the model must decide which of two available tools to invoke: `search_documents` or `check_and_connect`.

The dataset combines:

1. Adapted natural questions that should trigger the `search_documents` tool
2. System status queries that should trigger the `check_and_connect` tool

### Dataset Structure

The dataset is provided in JSONL format, with each entry containing a complete conversation of system, user, and assistant messages.
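Because each line of the file is an independent JSON object, it can be read and written with the Python standard library alone. A minimal sketch (the records here are abbreviated placeholders; real entries contain full system, user, and assistant turns):

```python
import json
import tempfile

# Abbreviated placeholder records; real entries carry the complete
# system / user / assistant conversation shown below.
records = [
    {"conversations": [{"role": "user", "content": "Is the system online?"}]},
    {"conversations": [{"role": "user", "content": "Search information about tides"}]},
]

# Write JSONL: one JSON object per line.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
    path = f.name

# Read it back: one record per non-empty line.
with open(path) as f:
    loaded = [json.loads(line) for line in f if line.strip()]
```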
#### Data Instances

Each data instance follows this structure:

```json
{
  "conversations": [
    {
      "role": "system",
      "content": "Environment: ipython\nCutting Knowledge Date: December 2023\nToday Date: 18 May 2025"
    },
    {
      "role": "user",
      "content": "Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt. Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}. Do not use variables.\n{ \"type\": \"function\", \"function\": { \"name\": \"check_and_connect\", \"description\": \"check_and_connect\", \"parameters\": { \"properties\": {}, \"type\": \"object\" } } }\n{ \"type\": \"function\", \"function\": { \"name\": \"search_documents\", \"description\": \"\\n Searches for documents based on a user's query string. Use this to find information on a specific topic.\\n\\n \", \"parameters\": { \"properties\": { \"query\": { \"description\": \"The actual search phrase or question. For example, 'What are the causes of climate change?' or 'population of Madre de Dios'.\", \"type\": \"string\" } }, \"required\": [ \"query\" ], \"type\": \"object\" } } }\nSearch information about where will you find earth's magnetic north pole"
    },
    {
      "role": "assistant",
      "content": " {\"name\": \"search_documents\", \"parameters\": {\"query\": \"where will you find earth's magnetic north pole\"}}"
    }
  ]
}
```

For `search_documents` examples:

- User queries are prefixed with "Search information about"
- Assistant responses call the `search_documents` function with the original query

For `check_and_connect` examples:

- User queries are variants of system status checks
- Assistant responses call the `check_and_connect` function with empty parameters
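The assistant turn is itself a serialized JSON function call (with a leading space in this dataset), so consumers typically strip and parse it before use. A small sketch, with an illustrative helper name:

```python
import json

def parse_tool_call(assistant_content: str) -> dict:
    """Recover the structured call from an assistant message;
    strip() handles the leading space present in this dataset."""
    return json.loads(assistant_content.strip())

call = parse_tool_call(
    ' {"name": "search_documents", '
    '"parameters": {"query": "where will you find earth\'s magnetic north pole"}}'
)
```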
#### Data Fields

- `conversations`: Array of message objects
  - `role`: String identifying the speaker (`system`, `user`, or `assistant`)
  - `content`: String containing the message content

### Dataset Creation

#### Source Data

The dataset is generated from:

1. The `maximedb/natural_questions` dataset from Hugging Face (for `search_documents` examples)
2. A predefined list of system status queries (for `check_and_connect` examples)

#### Processing

For `search_documents`:

- 1,000 questions are selected from Natural Questions
- Each question is prefixed with "Search information about"
- The JSON response includes the original question without the prefix

For `check_and_connect`:

- 50 samples are generated from a set of 15 predefined system status queries
- The JSON response has empty parameters: `{"name": "check_and_connect", "parameters": {}}`

The final dataset is shuffled to randomize the order of examples.
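The processing steps above can be sketched as follows (function names and sample data are illustrative, not taken from the generation script; the system turn and the tool-schema preamble in the user prompt are omitted for brevity):

```python
import json
import random

SEARCH_PREFIX = "Search information about "

def make_search_example(question: str) -> dict:
    """Build a search_documents example: the user prompt gets the prefix,
    while the assistant call keeps the original question as the query."""
    call = {"name": "search_documents", "parameters": {"query": question}}
    return {
        "conversations": [
            {"role": "user", "content": SEARCH_PREFIX + question},
            {"role": "assistant", "content": json.dumps(call)},
        ]
    }

def make_status_example(query: str) -> dict:
    """Build a check_and_connect example with empty parameters."""
    call = {"name": "check_and_connect", "parameters": {}}
    return {
        "conversations": [
            {"role": "user", "content": query},
            {"role": "assistant", "content": json.dumps(call)},
        ]
    }

# Illustrative stand-ins for the 1,000 questions and 15 status queries.
questions = ["where will you find earth's magnetic north pole"]
status_queries = ["Is the system online?", "Check the connection status."]

dataset = [make_search_example(q) for q in questions]
dataset += [make_status_example(random.choice(status_queries)) for _ in range(3)]
random.shuffle(dataset)  # randomize the order of examples, as in the final dataset
```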
### Considerations for Using the Data

#### Discussion of Biases

The dataset may reflect biases inherent in:

- The Natural Questions dataset
- The manual selection of system status queries

#### Dataset Metadata

- **Size**: 1,050 examples (1,000 `search_documents` + 50 `check_and_connect`)
- **Format**: JSONL
- **Creation Date**: Generated on May 18, 2025
- **License**: Inherits from the Natural Questions dataset
- **Tools**:
  - `search_documents`: For information retrieval queries
  - `check_and_connect`: For system status checks

## Additional Information

### Citation

If you use this dataset, please cite both this work and the original Natural Questions dataset:

```
@misc{tool_finetuning_dataset,
  title = {Tool Finetuning Dataset},
  year  = {2025}
}

@article{47761,
  title   = {Natural Questions: a Benchmark for Question Answering Research},
  author  = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year    = {2019},
  journal = {Transactions of the Association for Computational Linguistics}
}
```