sha (stringlengths 40–40) | text (stringlengths 1–13.4M) | id (stringlengths 2–117) | tags (listlengths 1–7.91k) | created_at (stringlengths 25–25) | metadata (stringlengths 2–875k) | last_modified (stringlengths 25–25) | arxiv (listlengths 0–25) | languages (listlengths 0–7.91k) | tags_str (stringlengths 17–159k) | text_str (stringlengths 1–447k) | text_lists (listlengths 0–352) | processed_texts (listlengths 1–353) | tokens_length (listlengths 1–353) | input_texts (listlengths 1–40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c4295ca4846141478ffa89ada8d9ded322d0c93a
|
# Dataset Card for "gsm8k-test-3.5-bigger-oasst"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
1a3orn/gsm8k-instruct
|
[
"region:us"
] |
2023-10-12T12:54:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1399329, "num_examples": 3288}, {"name": "test", "num_bytes": 77477, "num_examples": 174}], "download_size": 757371, "dataset_size": 1476806}}
|
2023-10-12T12:54:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gsm8k-test-3.5-bigger-oasst"
More Information needed
|
[
"# Dataset Card for \"gsm8k-test-3.5-bigger-oasst\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gsm8k-test-3.5-bigger-oasst\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gsm8k-test-3.5-bigger-oasst\"\n\nMore Information needed"
] |
2a7764a3a98d2988bda0b311f75805392a9f8d0b
|
# Dataset Card for PMEmo
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/PMEmo>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** Dataset; Music Emotion Recognition; Experiment; EDA
### Dataset Summary
Music Emotion Recognition (MER) has recently received considerable attention. To support MER research, which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals.
### Supported Tasks and Leaderboards
MER, MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .txt, .lrc, .csv), .csv
### Data Fields
Audio Serial, Song Metadata, Audio Demo, Pre-computed Audio Features for Use in MER Tasks, Manually Annotated Emotion Labels, EDA Physiological Signals, Song Lyrics (LRC), Song Comments
### Data Splits
train, valid, test
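Since the data instances are distributed as archives and CSV files rather than a ready-made loader, a minimal sketch for pulling the raw files from the Hub is shown below; only the repository id comes from this card, and the file names inside the repository are assumptions, not guaranteed paths.
```python
# Sketch: download the raw PMEmo files from the Hub and list the archives
# and annotation CSVs. Exact file names inside the repo are not guaranteed.
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ccmusic-database/PMEmo", repo_type="dataset")

for path in sorted(Path(local_dir).rglob("*")):
    if path.suffix in {".zip", ".csv"}:
        print(path.relative_to(local_dir))
```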
## Dataset Creation
### Curation Rationale
Lack of a dataset for time-based MER
### Source Data
#### Initial Data Collection and Normalization
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun, Monan Zhou
#### Who are the source language producers?
Teachers & students from NEXT Lab
### Annotations
#### Annotation process
A Music Emotion Experiment was carefully designed to collect a high-quality, affect-annotated music corpus; 457 subjects were recruited. The dataset is publicly available to the research community and is foremost intended for benchmarking in music emotion retrieval and recognition. To make methodologies for music affective analysis straightforward to evaluate, it also includes pre-computed audio feature sets. In addition, manually selected chorus excerpts of the songs (compressed in MP3) are provided to facilitate the development of chorus-related research.
#### Who are the annotators?
Teachers & students from NEXT Lab
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Advancing the Digitization Process of time-based MER
### Discussion of Biases
Only for pop music
### Other Known Limitations
Time-based MER has high noise
## Additional Information
### Dataset Curators
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun
### Evaluation
[Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, and Lingyun Sun. 2018. The PMEmo Dataset for Music Emotion Recognition. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR '18). Association for Computing Machinery, New York, NY, USA, 135–142. https://doi.org/10.1145/3206025.3206037](https://doi.org/10.1145/3206025.3206037)
### Licensing Information
```
MIT License
Copyright (c) NEXT Lab
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Yuan Wang, Zhaowen Wang, Wei Li and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for time-based MER
|
ccmusic-database/PMEmo
|
[
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] |
2023-10-12T12:58:27+00:00
|
{"language": ["zh", "en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["audio-classification"], "pretty_name": "PMEmo", "tags": ["music", "art"], "viewer": false}
|
2023-12-04T16:07:52+00:00
|
[] |
[
"zh",
"en"
] |
TAGS
#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #region-us
|
# Dataset Card for PMEmo
## Dataset Description
- Homepage: <URL>
- Repository: <URL>
- Paper: <URL>
- Leaderboard: <URL>
- Point of Contact: Dataset; Music Emotion Recognition; Experiment; EDA
### Dataset Summary
Music Emotion Recognition (MER) has recently received considerable attention. To support MER research, which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals.
### Supported Tasks and Leaderboards
MER, MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .txt, .lrc, .csv), .csv
### Data Fields
Audio Serial, Song Metadata, Audio Demo, Pre-computed Audio Features for Use in MER Tasks, Manually Annotated Emotion Labels, EDA Physiological Signals, Song Lyrics (LRC), Song Comments
### Data Splits
train, valid, test
## Dataset Creation
### Curation Rationale
Lack of a dataset for time-based MER
### Source Data
#### Initial Data Collection and Normalization
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun, Monan Zhou
#### Who are the source language producers?
Teachers & students from NEXT Lab
### Annotations
#### Annotation process
A Music Emotion Experiment was carefully designed to collect a high-quality, affect-annotated music corpus; 457 subjects were recruited. The dataset is publicly available to the research community and is foremost intended for benchmarking in music emotion retrieval and recognition. To make methodologies for music affective analysis straightforward to evaluate, it also includes pre-computed audio feature sets. In addition, manually selected chorus excerpts of the songs (compressed in MP3) are provided to facilitate the development of chorus-related research.
#### Who are the annotators?
Teachers & students from NEXT Lab
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Advancing the Digitization Process of time-based MER
### Discussion of Biases
Only for pop music
### Other Known Limitations
Time-based MER has high noise
## Additional Information
### Dataset Curators
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun
### Evaluation
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, and Lingyun Sun. 2018. The PMEmo Dataset for Music Emotion Recognition. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR '18). Association for Computing Machinery, New York, NY, USA, 135–142. URL
### Licensing Information
### Contributions
Provide a dataset for time-based MER
|
[
"# Dataset Card for PMEmo",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: Dataset; Music Emotion Recognition; Experiment; EDA",
"### Dataset Summary\nMusic Emotion Recognition (MER) has recently received considerable attention. To support the MER research which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals.",
"### Supported Tasks and Leaderboards\nMER, MIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.wav, .txt, .lrc, .csv), .csv",
"### Data Fields\nAudio Serial, Song Metadata, Audio Demo, Pre-computed Audio Features for Use in MER Tasks, Manually Annotated Emotion Labels, EDA Physiological Signals, Song Lyrics (LRC), Song Comments",
"### Data Splits\ntrain, valid, test",
"## Dataset Creation",
"### Curation Rationale\nLack of a dataset for time-based MER",
"### Source Data",
"#### Initial Data Collection and Normalization\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun, Monan Zhou",
"#### Who are the source language producers?\nTeachers & students from NEXT Lab",
"### Annotations",
"#### Annotation process\nA Music Emotion Experiment was well-designed for collecting the affective-annotated music corpus of high quality, which recruited 457 subjects. The dataset is publically available to the research community, which is foremost intended for benchmarking in music emotion retrieval and recognition. To straightforwardly evaluate the methodologies for music affective analysis, it also involves pre-computed audio feature sets. In addition to that, manually selected chorus excerpts (compressed in MP3) of songs are provided to facilitate the development of chorus-related research.",
"#### Who are the annotators?\nTeachers & students from NEXT Lab",
"### Personal and Sensitive Information\nNone",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nAdvancing the Digitization Process of time-based MER",
"### Discussion of Biases\nOnly for pop music",
"### Other Known Limitations\nTime-based MER has high noise",
"## Additional Information",
"### Dataset Curators\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun",
"### Evaluation\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, and Lingyun Sun. 2018. The PMEmo Dataset for Music Emotion Recognition. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR '18). Association for Computing Machinery, New York, NY, USA, 135–142. URL",
"### Licensing Information",
"### Contributions\nProvide a dataset for time-based MER"
] |
[
"TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #region-us \n",
"# Dataset Card for PMEmo",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: Dataset; Music Emotion Recognition; Experiment; EDA",
"### Dataset Summary\nMusic Emotion Recognition (MER) has recently received considerable attention. To support the MER research which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals.",
"### Supported Tasks and Leaderboards\nMER, MIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.wav, .txt, .lrc, .csv), .csv",
"### Data Fields\nAudio Serial, Song Metadata, Audio Demo, Pre-computed Audio Features for Use in MER Tasks, Manually Annotated Emotion Labels, EDA Physiological Signals, Song Lyrics (LRC), Song Comments",
"### Data Splits\ntrain, valid, test",
"## Dataset Creation",
"### Curation Rationale\nLack of a dataset for time-based MER",
"### Source Data",
"#### Initial Data Collection and Normalization\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun, Monan Zhou",
"#### Who are the source language producers?\nTeachers & students from NEXT Lab",
"### Annotations",
"#### Annotation process\nA Music Emotion Experiment was well-designed for collecting the affective-annotated music corpus of high quality, which recruited 457 subjects. The dataset is publically available to the research community, which is foremost intended for benchmarking in music emotion retrieval and recognition. To straightforwardly evaluate the methodologies for music affective analysis, it also involves pre-computed audio feature sets. In addition to that, manually selected chorus excerpts (compressed in MP3) of songs are provided to facilitate the development of chorus-related research.",
"#### Who are the annotators?\nTeachers & students from NEXT Lab",
"### Personal and Sensitive Information\nNone",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nAdvancing the Digitization Process of time-based MER",
"### Discussion of Biases\nOnly for pop music",
"### Other Known Limitations\nTime-based MER has high noise",
"## Additional Information",
"### Dataset Curators\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun",
"### Evaluation\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, and Lingyun Sun. 2018. The PMEmo Dataset for Music Emotion Recognition. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR '18). Association for Computing Machinery, New York, NY, USA, 135–142. URL",
"### Licensing Information",
"### Contributions\nProvide a dataset for time-based MER"
] |
[
46,
8,
47,
69,
18,
7,
6,
32,
56,
10,
5,
18,
4,
35,
18,
5,
137,
17,
10,
8,
20,
12,
15,
5,
26,
83,
6,
14
] |
[
"passage: TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #region-us \n# Dataset Card for PMEmo## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: Dataset; Music Emotion Recognition; Experiment; EDA### Dataset Summary\nMusic Emotion Recognition (MER) has recently received considerable attention. To support the MER research which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals.### Supported Tasks and Leaderboards\nMER, MIR, audio classification### Languages\nChinese, English## Dataset Structure### Data Instances\n.zip(.wav, .txt, .lrc, .csv), .csv### Data Fields\nAudio Serial, Song Metadata, Audio Demo, Pre-computed Audio Features for Use in MER Tasks, Manually Annotated Emotion Labels, EDA Physiological Signals, Song Lyrics (LRC), Song Comments### Data Splits\ntrain, valid, test## Dataset Creation### Curation Rationale\nLack of a dataset for time-based MER### Source Data#### Initial Data Collection and Normalization\nKejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun, Monan Zhou#### Who are the source language producers?\nTeachers & students from NEXT Lab### Annotations"
] |
7b4c0b32e51a1630578ed5fdf21bf8bc6d743b1c
|
# Touch Rugby Rules Dataset (for embeddings)
train.csv is taken from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf)
test.csv is copy-pasted from the abbreviated rules on the [UK Touch website](https://www.englandtouch.org.uk/develop/coaching/the-rules/). Note that I'm bypassing the PDF-to-text stage.
All text is chunked to a length of 100 tokens with 50% overlap.
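A rough sketch of this chunking scheme (100-token windows with a 50-token stride, i.e. 50% overlap) might look like the following; the tokenizer choice is an assumption, since the card does not say which one was used.
```python
# Sketch of fixed-size token chunking with 50% overlap.
# The GPT-2 tokenizer is a placeholder, not the one used for this dataset.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def chunk_text(text: str, chunk_size: int = 100, overlap: float = 0.5) -> list[str]:
    """Split `text` into chunks of `chunk_size` tokens with the given overlap."""
    ids = tokenizer.encode(text)
    stride = int(chunk_size * (1 - overlap))
    chunks = []
    for start in range(0, len(ids), stride):
        window = ids[start:start + chunk_size]
        if not window:
            break
        chunks.append(tokenizer.decode(window))
        if start + chunk_size >= len(ids):
            break
    return chunks
```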
For educational and non-commercial use only.
|
IainRatherThanIan/TouchRugby
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"touch rugby",
"region:us"
] |
2023-10-12T12:58:58+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["fine-tuning", "touch rugby"]}
|
2023-10-12T13:04:25+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us
|
# Touch Rugby Rules Dataset (for embeddings)
URL is taken from the International Touch Website
URL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.
All text is chunked to a length of 100 tokens with 50% overlap.
For educational and non-commercial use only.
|
[
"# Touch Rugby Rules Dataset (for embeddings)\n\nURL is taken from the International Touch Website\n\nURL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.\n\nAll text is chunked to a length of 100 tokens with 50% overlap.\n\nFor educational and non-commercial use only."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us \n",
"# Touch Rugby Rules Dataset (for embeddings)\n\nURL is taken from the International Touch Website\n\nURL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.\n\nAll text is chunked to a length of 100 tokens with 50% overlap.\n\nFor educational and non-commercial use only."
] |
[
39,
81
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us \n# Touch Rugby Rules Dataset (for embeddings)\n\nURL is taken from the International Touch Website\n\nURL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.\n\nAll text is chunked to a length of 100 tokens with 50% overlap.\n\nFor educational and non-commercial use only."
] |
4c1cd5f821df215d721a2b61520a6e1081858ff1
|
# Dataset Card for "name_of_your_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SouravModak/name_of_your_dataset
|
[
"region:us"
] |
2023-10-12T13:00:22+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 22631043.6, "num_examples": 1300}], "download_size": 22594225, "dataset_size": 22631043.6}}
|
2023-10-12T13:00:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "name_of_your_dataset"
More Information needed
|
[
"# Dataset Card for \"name_of_your_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"name_of_your_dataset\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"name_of_your_dataset\"\n\nMore Information needed"
] |
d5a1832cb88ecc684cbfb205b7b36cc0fbd7cd47
|
# Dataset Card for "SmartWeed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SouravModak/SmartWeed
|
[
"region:us"
] |
2023-10-12T13:01:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 22631043.6, "num_examples": 1300}], "download_size": 22594225, "dataset_size": 22631043.6}}
|
2023-10-12T13:55:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SmartWeed"
More Information needed
|
[
"# Dataset Card for \"SmartWeed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SmartWeed\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SmartWeed\"\n\nMore Information needed"
] |
203d012da3757d9ee510ddc30a33ead1b15aff47
|
# Dataset Card for "spotlight-textvqa-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
renumics/spotlight-textvqa-enrichment
|
[
"region:us"
] |
2023-10-12T13:03:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image_id.embedding", "sequence": "float32", "length": 2}, {"name": "question.embedding", "sequence": "float32", "length": 2}, {"name": "image.embedding", "sequence": "float32", "length": 2}, {"name": "flickr_original_url.embedding", "sequence": "float32", "length": 2}, {"name": "flickr_300k_url.embedding", "sequence": "float32", "length": 2}, {"name": "set_name.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 1660896, "num_examples": 34602}, {"name": "validation", "num_bytes": 240000, "num_examples": 5000}, {"name": "test", "num_bytes": 275232, "num_examples": 5734}], "download_size": 3028800, "dataset_size": 2176128}}
|
2023-10-13T09:32:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spotlight-textvqa-enrichment"
More Information needed
|
[
"# Dataset Card for \"spotlight-textvqa-enrichment\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spotlight-textvqa-enrichment\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-textvqa-enrichment\"\n\nMore Information needed"
] |
fd9b49fdcf6a898247d9614d6d2a94cdf8d420e9
|
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a rich collection of Persian poems along with metadata about the poets and the verses. The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.
### Data Collection
- **Data Collection Source:** The data is sourced from the [Ganjoor project](https://github.com/ganjoor/). The specific database file can be found in the [releases section](https://github.com/ganjoor/desktop/releases/tag/v2.81) of their GitHub repository.
- **Time Period:** Oct-12-2023
- **Collection Methods:** The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into multiple tables, notably `poem`, `poet`, and `verse` tables which contain information about the poems, poets, and verses respectively. The tables are linked through various ID fields that allow the data to be connected and queried together.
- **Poem Table:**
- `id`: The unique identifier of a poem.
- `cat_id`: A category identifier linking to poet information.
- `title`: The title of the poem.
- `url`: A URL associated with the poem.
- **Poet Table:**
- `id`: The unique identifier of a poet.
- `name`: The name of the poet.
- `cat_id`: A category identifier.
- `description`: A textual description or biography of the poet.
- **Verse Table:**
- `poem_id`: Identifier linking the verse to a particular poem.
- `vorder`: Order of the verse within the poem.
- `position`: Position of the verse, used to determine if two verses form a hemistich.
- `text`: The text of the verse.
### Data Example
```json
{
"poet": "Sample Poet",
"title": "Sample Poem Title",
"content": [
{
"hemistich": {
"verse0": "First part of a hemistich",
"verse1": "Second part of a hemistich"
}
},
{
"verse": {"text": "A standalone verse"}
}
]
}
```
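As an illustration of how the tables fit together, the following sketch assembles the structure above from a local copy of the database. The SQLite file name, the join on `cat_id`, and the convention that positions 0 and 1 mark the two halves of a hemistich are assumptions inferred from the field descriptions, not guaranteed details of the Ganjoor schema.
```python
# Sketch: join poem, poet, and verse tables into the JSON structure shown above.
import sqlite3

conn = sqlite3.connect("ganjoor.s3db")  # hypothetical file name of the raw database

def poem_to_record(poem_id: int) -> dict:
    cur = conn.cursor()
    cur.execute(
        "SELECT poem.title, poet.name FROM poem "
        "JOIN poet ON poem.cat_id = poet.cat_id WHERE poem.id = ?",  # assumed join key
        (poem_id,),
    )
    title, poet_name = cur.fetchone()
    cur.execute(
        "SELECT vorder, position, text FROM verse WHERE poem_id = ? ORDER BY vorder",
        (poem_id,),
    )
    verses = cur.fetchall()

    content, i = [], 0
    while i < len(verses):
        _, position, text = verses[i]
        # Assumed convention: position 0 followed by position 1 forms a hemistich.
        if position == 0 and i + 1 < len(verses) and verses[i + 1][1] == 1:
            content.append({"hemistich": {"verse0": text, "verse1": verses[i + 1][2]}})
            i += 2
        else:
            content.append({"verse": {"text": text}})
            i += 1
    return {"poet": poet_name, "title": title, "content": content}
```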
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Poem generation
- Poet identification
- Style analysis
### Challenges & Limitations
- The dataset does not contain long verses that are over 100 characters.
- Some poems may contain verses that form hemistichs, which are represented with specific structure in the dataset.
### License
GPL-2 (GNU General Public License) inherited from the original source
## Additional Information
### Citation
```
Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project. Available at: https://huggingface.co/datasets/persian_poetry
```
### Dataset Link
[Download the dataset from Hugging Face](https://huggingface.co/datasets/persian_poetry)
### Contact
Email: [[email protected]](mailto:[email protected]) | GitHub: [kakooch](https://github.com/kakooch)
---
*This README was generated by Kakooch.*
|
kakooch/ganjoor-processed
|
[
"language:fa",
"license:gpl-2.0",
"region:us"
] |
2023-10-12T13:04:32+00:00
|
{"language": ["fa"], "license": "gpl-2.0", "name": "Persian Poetry Dataset", "description": "This dataset contains a rich collection of Persian poems along with metadata about the poets and the verses.\nThe data spans various poets and their poems, and includes the verses with associated information about their position within each poem.\nThe dataset is split into a training set and a test set, with 90% of the verses of each poem for each poet used for training and 10% used for testing.\n", "url": "https://github.com/ganjoor/desktop/releases/tag/v2.81", "citation": "Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project.\nAvailable at: https://huggingface.co/datasets/persian_poetry\n", "size": "Custom", "splits": {"train": {"description": "This split contains 90% of the verses of each poem for each poet and is used for training."}, "test": {"description": "This split contains 10% of the verses of each poem for each poet and is used for testing."}}}
|
2023-10-14T05:21:52+00:00
|
[] |
[
"fa"
] |
TAGS
#language-Persian #license-gpl-2.0 #region-us
|
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a rich collection of Persian poems along with metadata about the poets and the verses. The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.
### Data Collection
- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.
- Time Period: Oct-12-2023
- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into multiple tables, notably 'poem', 'poet', and 'verse' tables which contain information about the poems, poets, and verses respectively. The tables are linked through various ID fields that allow the data to be connected and queried together.
- Poem Table:
- 'id': The unique identifier of a poem.
- 'cat_id': A category identifier linking to poet information.
- 'title': The title of the poem.
- 'url': A URL associated with the poem.
- Poet Table:
- 'id': The unique identifier of a poet.
- 'name': The name of the poet.
- 'cat_id': A category identifier.
- 'description': A textual description or biography of the poet.
- Verse Table:
- 'poem_id': Identifier linking the verse to a particular poem.
- 'vorder': Order of the verse within the poem.
- 'position': Position of the verse, used to determine if two verses form a hemistich.
- 'text': The text of the verse.
### Data Example
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Poem generation
- Poet identification
- Style analysis
### Challenges & Limitations
- The dataset does not contain long verses that are over 100 characters.
- Some poems may contain verses that form hemistichs, which are represented with specific structure in the dataset.
### License
GPL-2 (GNU General Public License) inherited from the original source
## Additional Information
### Dataset Link
Download the dataset from Hugging Face
### Contact
Email: kakooch@URL | GitHub: kakooch
---
*This README was generated by Kakooch.*
|
[
"# Persian Poetry Dataset",
"## Dataset Description",
"### Overview\n\nThis dataset contains a rich collection of Persian poems along with metadata about the poets and the verses. The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.",
"### Data Collection\n\n- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.\n- Time Period: Oct-12-2023\n- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.",
"### Data Structure\n\nThe dataset is structured into multiple tables, notably 'poem', 'poet', and 'verse' tables which contain information about the poems, poets, and verses respectively. The tables are linked through various ID fields that allow the data to be connected and queried together.\n\n- Poem Table:\n - 'id': The unique identifier of a poem.\n - 'cat_id': A category identifier linking to poet information.\n - 'title': The title of the poem.\n - 'url': A URL associated with the poem.\n \n- Poet Table:\n - 'id': The unique identifier of a poet.\n - 'name': The name of the poet.\n - 'cat_id': A category identifier.\n - 'description': A textual description or biography of the poet.\n\n- Verse Table:\n - 'poem_id': Identifier linking the verse to a particular poem.\n - 'vorder': Order of the verse within the poem.\n - 'position': Position of the verse, used to determine if two verses form a hemistich.\n - 'text': The text of the verse.",
"### Data Example",
"## Dataset Usage",
"### Use Cases\n\nThis dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:\n- Poem generation\n- Poet identification\n- Style analysis",
"### Challenges & Limitations\n\n- The dataset does not contain long verses that are over 100 characters.\n- Some poems may contain verses that form hemistichs, which are represented with specific structure in the dataset.",
"### License\n\nGPL-2 (GNU General Public License) inherited from the original source",
"## Additional Information",
"### Dataset Link \n\nDownload the dataset from Hugging Face",
"### Contact \n\nEmail: kakooch@URL | GitHub: kakooch\n\n---\n\n*This README was generated by Kakooch.*"
] |
[
"TAGS\n#language-Persian #license-gpl-2.0 #region-us \n",
"# Persian Poetry Dataset",
"## Dataset Description",
"### Overview\n\nThis dataset contains a rich collection of Persian poems along with metadata about the poets and the verses. The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.",
"### Data Collection\n\n- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.\n- Time Period: Oct-12-2023\n- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.",
"### Data Structure\n\nThe dataset is structured into multiple tables, notably 'poem', 'poet', and 'verse' tables which contain information about the poems, poets, and verses respectively. The tables are linked through various ID fields that allow the data to be connected and queried together.\n\n- Poem Table:\n - 'id': The unique identifier of a poem.\n - 'cat_id': A category identifier linking to poet information.\n - 'title': The title of the poem.\n - 'url': A URL associated with the poem.\n \n- Poet Table:\n - 'id': The unique identifier of a poet.\n - 'name': The name of the poet.\n - 'cat_id': A category identifier.\n - 'description': A textual description or biography of the poet.\n\n- Verse Table:\n - 'poem_id': Identifier linking the verse to a particular poem.\n - 'vorder': Order of the verse within the poem.\n - 'position': Position of the verse, used to determine if two verses form a hemistich.\n - 'text': The text of the verse.",
"### Data Example",
"## Dataset Usage",
"### Use Cases\n\nThis dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:\n- Poem generation\n- Poet identification\n- Style analysis",
"### Challenges & Limitations\n\n- The dataset does not contain long verses that are over 100 characters.\n- Some poems may contain verses that form hemistichs, which are represented with specific structure in the dataset.",
"### License\n\nGPL-2 (GNU General Public License) inherited from the original source",
"## Additional Information",
"### Dataset Link \n\nDownload the dataset from Hugging Face",
"### Contact \n\nEmail: kakooch@URL | GitHub: kakooch\n\n---\n\n*This README was generated by Kakooch.*"
] |
[
19,
6,
4,
57,
82,
267,
5,
5,
44,
50,
20,
5,
13,
30
] |
[
"passage: TAGS\n#language-Persian #license-gpl-2.0 #region-us \n# Persian Poetry Dataset## Dataset Description### Overview\n\nThis dataset contains a rich collection of Persian poems along with metadata about the poets and the verses. The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.### Data Collection\n\n- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.\n- Time Period: Oct-12-2023\n- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.### Data Structure\n\nThe dataset is structured into multiple tables, notably 'poem', 'poet', and 'verse' tables which contain information about the poems, poets, and verses respectively. The tables are linked through various ID fields that allow the data to be connected and queried together.\n\n- Poem Table:\n - 'id': The unique identifier of a poem.\n - 'cat_id': A category identifier linking to poet information.\n - 'title': The title of the poem.\n - 'url': A URL associated with the poem.\n \n- Poet Table:\n - 'id': The unique identifier of a poet.\n - 'name': The name of the poet.\n - 'cat_id': A category identifier.\n - 'description': A textual description or biography of the poet.\n\n- Verse Table:\n - 'poem_id': Identifier linking the verse to a particular poem.\n - 'vorder': Order of the verse within the poem.\n - 'position': Position of the verse, used to determine if two verses form a hemistich.\n - 'text': The text of the verse.### Data Example## Dataset Usage### Use Cases\n\nThis dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:\n- Poem generation\n- Poet identification\n- Style analysis"
] |
b6f2d8fcf9d6558030556c46f2479e7a18dd5c19
|
# Dataset Card for "spotlight-vikp-textbook_quality_programming-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
renumics/spotlight-vikp-textbook_quality_programming-enrichment
|
[
"region:us"
] |
2023-10-12T13:12:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "topic.embedding", "sequence": "float32", "length": 2}, {"name": "model.embedding", "sequence": "float32", "length": 2}, {"name": "markdown.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 279600, "num_examples": 11650}], "download_size": 389517, "dataset_size": 279600}}
|
2023-10-13T09:41:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spotlight-vikp-textbook_quality_programming-enrichment"
More Information needed
|
[
"# Dataset Card for \"spotlight-vikp-textbook_quality_programming-enrichment\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spotlight-vikp-textbook_quality_programming-enrichment\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-vikp-textbook_quality_programming-enrichment\"\n\nMore Information needed"
] |
69fe7d6611bd2ffd57104d2b6f84a7542404af83
|
# Dataset Card for "spotlight-zishuod-pokemon-icons-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
renumics/spotlight-zishuod-pokemon-icons-enrichment
|
[
"region:us"
] |
2023-10-12T13:15:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 3416, "num_examples": 427}, {"name": "test", "num_bytes": 1320, "num_examples": 165}], "download_size": 8424, "dataset_size": 4736}}
|
2023-10-13T09:43:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spotlight-zishuod-pokemon-icons-enrichment"
More Information needed
|
[
"# Dataset Card for \"spotlight-zishuod-pokemon-icons-enrichment\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spotlight-zishuod-pokemon-icons-enrichment\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-zishuod-pokemon-icons-enrichment\"\n\nMore Information needed"
] |
522893404b1174dd3104f8ed03fbabe7d2e24af5
|
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
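A loose conceptual sketch of how these two operations could drive the traversal is given below. It is not the released implementation: `llm_propose_subtasks` and `llm_propose_alternatives` are placeholders for the LLM prompts, and the tree bookkeeping is deliberately minimal.
```python
# Conceptual sketch only: lookahead expands a task into finer-grained sub-tasks,
# backtracking widens the same level with alternative branches.
# The two llm_* functions stand in for the actual LLM calls.
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    name: str
    depth: int
    children: list["TaskNode"] = field(default_factory=list)

def llm_propose_subtasks(task: str, n: int) -> list[str]:
    """Placeholder: ask the LLM for n finer-grained sub-tasks of `task`."""
    raise NotImplementedError

def llm_propose_alternatives(task: str, existing: list[str], n: int) -> list[str]:
    """Placeholder: ask the LLM for n alternative tasks not already covered."""
    raise NotImplementedError

def explore(node: TaskNode, breadth: list[int], max_depth: int) -> None:
    if node.depth >= max_depth:
        return
    # Lookahead: map the task into a set of finer-grained sub-tasks.
    for name in llm_propose_subtasks(node.name, breadth[node.depth]):
        node.children.append(TaskNode(name, node.depth + 1))
    # Backtracking: seek alternative branches to widen the search boundary.
    covered = [child.name for child in node.children]
    for name in llm_propose_alternatives(node.name, covered, breadth[node.depth]):
        node.children.append(TaskNode(name, node.depth + 1))
    for child in node.children:
        explore(child, breadth, max_depth)
```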
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
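A small sketch of consuming one of the released JSON files is shown below; the file name is hypothetical, and the Alpaca-style prompt template is an assumption based on the `--prompt_type alpaca` flag used in the fine-tuning command further down.
```python
import json

# Hypothetical file name; the released files carry the three fields listed above.
with open("explore_instruct_rewriting_10k.json") as f:
    examples = json.load(f)

def build_prompt(example: dict) -> str:
    """Format one example with the standard Alpaca template (an assumption)."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
    )

print(build_prompt(examples[0]) + examples[0]["output"])
```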
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
--save_dir ./en_data/demo_domain \ # input dir include current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
--save_dir ./en_data/demo_domain_exploration \ # input dir include current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # data number for task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
--save_dir ./en_data/demo_domain_generation \ # input dir include current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
--save_dir ./en_data/demo_domain_pruning \ # input dir include current domain tree for data filtering
--out_dir ./en_data/demo_domain_filtering \ # output dir of the domain tree with fitered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
--save_dir ./en_data/demo_domain_filtering \ # input dir include current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B | 128 | 2e-5 | 3 | 512| 0 |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: From 500 randomly selected questions from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
## Limitations
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful works!
|
Wanfq/Explore_Instruct_Rewriting_10k
|
[
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2310.09168",
"region:us"
] |
2023-10-12T13:22:20+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0"}
|
2023-10-16T01:16:43+00:00
|
[
"2310.09168"
] |
[
"en"
] |
TAGS
#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us
|
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
#### | 📑 Paper | 🤗 Data | 🤗 Model | 🐱 Github Repo |
*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*
*† Sun Yat-sen University,
‡ Tencent AI Lab*
News
----
* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!
Contents
--------
* Overview
* Data Release
* Model Release
* Data Generation Process
* Fine-tuning
* Evaluation
* Limitations
* License
* Citation
* Acknowledgements
Overview
--------
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.

Data Release
------------
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
* 'instruction': 'str', describes the task the model should perform.
* 'input': 'str', optional context or input for the task.
* 'output': 'str', ground-truth output text for the task and input text.
The results of data-centric analysis are shown as follows:

Model Release
-------------
We release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
* Automatic evaluation:
* Human evaluation:

Data Generation Process
-----------------------
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
### Instruction-Tuning Data Generation
### Task Pruning
### Data Filtering
### Data Sampling
Fine-tuning
-----------
We fine-tune LLaMA-7B with the following hyperparameters:
To reproduce the training procedure, please use the following command:
Evaluation
----------
The evaluation datasets for different domains are as follows:
* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\_eval\_set.jsonl)
* Math: From 500 randomly selected questions from the test set of MATH. (MATH\_eval\_set\_sample.jsonl)
The evaluation metrics for different domains are as follows:
* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
* Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
Limitations
-----------
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
License
-------
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
If you find this work relevant to your research or applications, please feel free to cite our work!
Acknowledgments
---------------
This repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!
|
[
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
"TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n",
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
30,
772,
6,
9,
6,
5,
414
] |
[
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n"
] |
5cc041afbde1f2999a402869103b2a5fea50aeca
|
<p align="center" width="100%">
</p>
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
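As an illustration of the lookahead and backtracking operations described above, the following is a minimal, hypothetical sketch of how such a traversal could be organized. The `TaskNode` class, the `ask_llm` helper, and the control flow are assumptions made for this sketch and are not taken from the released code; see the Data Generation Process section below for the actual commands.

```
# Illustrative sketch only: lookahead/backtracking exploration over a task tree.
# `ask_llm` is a placeholder for a real LLM call (e.g. OpenAI or Claude APIs).
from dataclasses import dataclass, field
from typing import List


@dataclass
class TaskNode:
    name: str
    depth: int
    children: List["TaskNode"] = field(default_factory=list)


def ask_llm(prompt: str, n: int) -> List[str]:
    """Placeholder: return `n` sub-task names an LLM might propose for `prompt`."""
    return [f"{prompt} / subtask-{i}" for i in range(n)]


def explore(root: TaskNode, breadth_per_depth: List[int], max_depth: int) -> None:
    """Lookahead: expand each task into finer-grained sub-tasks, depth by depth."""
    frontier = [root]
    for depth in range(max_depth):
        next_frontier = []
        for task in frontier:
            # Lookahead exploration: ask the LLM for sub-tasks of the current task.
            for name in ask_llm(task.name, breadth_per_depth[depth]):
                child = TaskNode(name=name, depth=depth + 1)
                task.children.append(child)
                next_frontier.append(child)
        frontier = next_frontier


def backtrack(root: TaskNode, extra_branches: int) -> None:
    """Backtracking: revisit the root and request alternative top-level branches."""
    for name in ask_llm(f"alternatives to {root.name}", extra_branches):
        root.children.append(TaskNode(name=name, depth=1))


if __name__ == "__main__":
    tree = TaskNode(name="rewriting", depth=0)
    explore(tree, breadth_per_depth=[4, 3], max_depth=2)  # lookahead
    backtrack(tree, extra_branches=2)                     # widen the search boundary
    print(f"root now has {len(tree.children)} top-level branches")
```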
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of the dataset: a basic and an extended version. The basic version contains 10k instruction-tuning examples, and the extended version contains 16k, 32k, and 64k examples for the brainstorming, rewriting, and math domains, respectively. Each dataset is a structured data file in JSON format: a list of dictionaries, with each dictionary containing the following fields (an illustrative record and loading snippet follow the list):
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
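For illustration, a single record in the rewriting domain could look like the hypothetical example below (the field values are invented for this sketch, not drawn from the dataset), and the data on this card can be loaded with the Hugging Face `datasets` library:

```
from datasets import load_dataset

# Hypothetical example of one record (values invented for illustration):
example = {
    "instruction": "Rewrite the sentence in a more formal tone.",
    "input": "Hey, can you send me the report real quick?",
    "output": "Could you please send me the report at your earliest convenience?",
}

# Load this card's data; JSON data files are typically exposed under a default "train" split.
dataset = load_dataset("Wanfq/Explore_Instruct_Rewriting_32k")
split = next(iter(dataset))
print(dataset[split][0].keys())  # expected fields: instruction, input, output
```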
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of models: a basic and an extended version, each trained with the corresponding version of the dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
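The Beat Rate in the brainstorming and rewriting tables above appears to be computed as Win / (Win + Lose), expressed as a percentage and ignoring ties; for example, 194/(194+13) ≈ 93.72. This interpretation matches the reported rows but is inferred rather than stated, so treat the helper below as an illustrative sketch:

```
def beat_rate(win: int, lose: int) -> float:
    """Beat rate as a percentage, ignoring ties: win / (win + lose) * 100."""
    return 100.0 * win / (win + lose)

print(round(beat_rate(194, 13), 2))  # 93.72, matching the first brainstorming row
```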
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
--save_dir ./en_data/demo_domain \ # input dir include current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
--save_dir ./en_data/demo_domain_exploration \ # input dir include current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # data number for task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
--save_dir ./en_data/demo_domain_generation \ # input dir include current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
--save_dir ./en_data/demo_domain_pruning \ # input dir include current domain tree for data filtering
  --out_dir ./en_data/demo_domain_filtering \  # output dir of the domain tree with filtered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
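The `--filter_threshold` above bounds the allowed ROUGE-L overlap between instructions. As a rough, hypothetical sketch of what such a filter does (assuming the `rouge_score` package; this is not the project's actual implementation):

```
from rouge_score import rouge_scorer

def filter_by_rougel(instructions, threshold=0.7):
    """Keep an instruction only if its ROUGE-L F1 against every kept one stays below the threshold."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    kept = []
    for cand in instructions:
        too_similar = any(
            scorer.score(prev, cand)["rougeL"].fmeasure >= threshold for prev in kept
        )
        if not too_similar:
            kept.append(cand)
    return kept

print(filter_by_rougel([
    "Rewrite this email politely.",
    "Rewrite this email in a polite tone.",
    "Summarize the following paragraph.",
], threshold=0.7))
```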
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
--save_dir ./en_data/demo_domain_filtering \ # input dir include current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B        | 128                | 2e-5           | 3       | 512         | 0             |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: 500 questions randomly selected from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
## Limitations
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful works!
|
Wanfq/Explore_Instruct_Rewriting_32k
|
[
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2310.09168",
"region:us"
] |
2023-10-12T13:23:24+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0"}
|
2023-10-16T01:17:26+00:00
|
[
"2310.09168"
] |
[
"en"
] |
TAGS
#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us
|
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
#### | Paper | Data | Model | Github Repo |
*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*
*† Sun Yat-sen University,
‡ Tencent AI Lab*
News
----
* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!
Contents
--------
* Overview
* Data Release
* Model Release
* Data Generation Process
* Fine-tuning
* Evaluation
* Limitations
* License
* Citation
* Acknowledgements
Overview
--------
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.

Data Release
------------
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of the dataset: a basic and an extended version. The basic version contains 10k instruction-tuning examples, and the extended version contains 16k, 32k, and 64k examples for the brainstorming, rewriting, and math domains, respectively. Each dataset is a structured data file in JSON format: a list of dictionaries, with each dictionary containing the following fields:
* 'instruction': 'str', describes the task the model should perform.
* 'input': 'str', optional context or input for the task.
* 'output': 'str', ground-truth output text for the task and input text.
The results of data-centric analysis are shown as follows:

Model Release
-------------
We release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: a basic and an extended version, each trained with the corresponding version of the dataset.
The results of automatic and human evaluation in three domains are shown as follows:
* Automatic evaluation:
* Human evaluation:

Data Generation Process
-----------------------
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
### Instruction-Tuning Data Generation
### Task Pruning
### Data Filtering
### Data Sampling
Fine-tuning
-----------
We fine-tune LLaMA-7B with the following hyperparameters:
To reproduce the training procedure, please use the following command:
Evaluation
----------
The evaluation datasets for different domains are as follows:
* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\_eval\_set.jsonl)
* Math: 500 questions randomly selected from the test set of MATH. (MATH\_eval\_set\_sample.jsonl)
The evaluation metrics for different domains are as follows:
* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
* Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
Limitations
-----------
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
License
-------
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
If you find this work relevant to your research or applications, please feel free to cite our work!
Acknowledgments
---------------
This repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!
|
[
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
"TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n",
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
30,
772,
6,
9,
6,
5,
414
] |
[
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n"
] |
a044a1935b0a6192d4025e76cbc4412e80f0dac5
|
# Dataset Card for "BIRD-SQL-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xu3kev/BIRD-SQL-data
|
[
"region:us"
] |
2023-10-12T13:26:54+00:00
|
{"dataset_info": {"features": [{"name": "db_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "evidence", "dtype": "string"}, {"name": "SQL", "dtype": "string"}, {"name": "schema", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1039491, "num_examples": 200}], "download_size": 98914, "dataset_size": 1039491}}
|
2023-10-12T13:50:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "BIRD-SQL-data"
More Information needed
|
[
"# Dataset Card for \"BIRD-SQL-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"BIRD-SQL-data\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"BIRD-SQL-data\"\n\nMore Information needed"
] |
584cf17b304220f1396d64da8e67c84acca0d819
|
<p align="center" width="100%">
</p>
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of the dataset: a basic and an extended version. The basic version contains 10k instruction-tuning examples, and the extended version contains 16k, 32k, and 64k examples for the brainstorming, rewriting, and math domains, respectively. Each dataset is a structured data file in JSON format: a list of dictionaries, with each dictionary containing the following fields (an illustrative record and loading snippet follow the list):
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
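For illustration, a single brainstorming record could look like the hypothetical example below (the field values are invented for this sketch, not drawn from the dataset), and the data on this card can be loaded with the Hugging Face `datasets` library:

```
from datasets import load_dataset

# Hypothetical example of one record (values invented for illustration):
example = {
    "instruction": "Brainstorm five creative names for a neighborhood book club.",
    "input": "",
    "output": "1. Chapter & Verse\n2. The Plot Thickens\n3. Novel Neighbors\n4. Between the Lines\n5. Spine Tinglers",
}

# Load this card's data; JSON data files are typically exposed under a default "train" split.
dataset = load_dataset("Wanfq/Explore_Instruct_Brainstorming_10k")
split = next(iter(dataset))
print(dataset[split][0])
```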
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of models: a basic and an extended version, each trained with the corresponding version of the dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
--save_dir ./en_data/demo_domain \ # input dir include current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
--save_dir ./en_data/demo_domain_exploration \ # input dir include current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # data number for task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
--save_dir ./en_data/demo_domain_generation \ # input dir include current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
--save_dir ./en_data/demo_domain_pruning \ # input dir include current domain tree for data filtering
  --out_dir ./en_data/demo_domain_filtering \  # output dir of the domain tree with filtered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
--save_dir ./en_data/demo_domain_filtering \ # input dir include current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B        | 128                | 2e-5           | 3       | 512         | 0             |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: 500 questions randomly selected from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
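The math-domain script above reports an accuracy rate. As a rough, hypothetical illustration of what such a check involves (the repository's `auto_eval.py` may extract and normalize answers differently; the helper below is an assumption made for this sketch):

```
import re

def extract_boxed_answer(text: str) -> str:
    """Illustrative helper: pull the last \\boxed{...} expression out of a solution string."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else text.strip()

def accuracy_rate(predictions, references) -> float:
    """Fraction of predictions whose extracted answer matches the reference answer."""
    correct = sum(
        extract_boxed_answer(p) == extract_boxed_answer(r)
        for p, r in zip(predictions, references)
    )
    return correct / max(len(references), 1)

print(accuracy_rate(["The answer is \\boxed{42}."], ["\\boxed{42}"]))  # 1.0
```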
## Limitations
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful works!
|
Wanfq/Explore_Instruct_Brainstorming_10k
|
[
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2310.09168",
"region:us"
] |
2023-10-12T13:27:21+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0"}
|
2023-10-16T01:18:01+00:00
|
[
"2310.09168"
] |
[
"en"
] |
TAGS
#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us
|
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
#### | Paper | Data | Model | Github Repo |
*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*
*† Sun Yat-sen University,
‡ Tencent AI Lab*
News
----
* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!
Contents
--------
* Overview
* Data Release
* Model Release
* Data Generation Process
* Fine-tuning
* Evaluation
* Limitations
* License
* Citation
* Acknowledgements
Overview
--------
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.

Data Release
------------
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
* 'instruction': 'str', describes the task the model should perform.
* 'input': 'str', optional context or input for the task.
* 'output': 'str', ground-truth output text for the task and input text.
The results of data-centric analysis are shown as follows:

Model Release
-------------
We release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
* Automatic evaluation:
* Human evaluation:

Data Generation Process
-----------------------
To generate the domain-specific instruction-tuning data, please follow the following commands step by step:
### Domain Space Exploration
### Instruction-Tuning Data Generation
### Task Pruning
### Data Filtering
### Data Sampling
Fine-tuning
-----------
We fine-tune LLaMA-7B with the following hyperparameters:
To reproduce the training procedure, please use the following command:
Evaluation
----------
The evaluation datasets for different domains are as follows:
* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\_eval\_set.jsonl)
* Math: From randomly selected 500 questions from the test set of MATH. (MATH\_eval\_set\_sample.jsonl)
The evaluation metrics for different domains are as follows:
* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
* Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
Limitations
-----------
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
License
-------
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
If you find this work is relevant with your research or applications, please feel free to cite our work!
Acknowledgments
---------------
This repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!
|
[
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
"TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n",
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
30,
772,
6,
9,
6,
5,
414
] |
[
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n"
] |
8c2eb9eac8f41ab75718e53091861f816561c6d2
|
<p align="center" width="100%">
</p>
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of the dataset: a basic and an extended version. The basic version contains 10k instruction-tuning examples, and the extended version contains 16k, 32k, and 64k instruction-tuning examples for the three domains, respectively. Each dataset is a structured data file in JSON format: a list of dictionaries, with each dictionary containing the following fields:
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
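A minimal way to inspect the released data is to load it straight from the Hub. The sketch below is illustrative only: it assumes the `datasets` library is installed, uses one of the released repository ids as an example, and assumes the split is named `train`.

```
# Minimal sketch (not from the original repo): load and inspect one record.
# Assumes `pip install datasets`; the split name "train" is an assumption.
from datasets import load_dataset

data = load_dataset("Wanfq/Explore_Instruct_Brainstorming_16k", split="train")
example = data[0]
print(example["instruction"])  # task description
print(example["input"])        # optional context (may be empty)
print(example["output"])       # ground-truth response
```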
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of the model: a basic and an extended version, each trained with the corresponding version of the dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
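As a reading aid, the Beat Rate in the automatic comparison tables above is consistent with counting only decided comparisons, i.e. Win / (Win + Lose), with ties excluded. A minimal sketch (illustrative only, not the evaluation script itself):

```
# Illustrative sketch: Beat Rate as a percentage, ignoring ties.
def beat_rate(win: int, tie: int, lose: int) -> float:
    return 100.0 * win / (win + lose)

# Explore-LM vs Domain-Curated-LM in the brainstorming domain (194:1:13)
print(f"{beat_rate(194, 1, 13):.2f}")  # -> 93.72
```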
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
--save_dir ./en_data/demo_domain \ # input dir containing the current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
--save_dir ./en_data/demo_domain_exploration \ # input dir containing the current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # number of examples per task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
--save_dir ./en_data/demo_domain_generation \ # input dir containing the current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
--save_dir ./en_data/demo_domain_pruning \ # input dir containing the current domain tree for data filtering
--out_dir ./en_data/demo_domain_filtering \ # output dir of the domain tree with filtered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
--save_dir ./en_data/demo_domain_filtering \ # input dir containing the current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B | 128 | 2e-5 | 3 | 512 | 0 |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
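As a consistency check, the effective global batch size implied by this command is 8 GPUs × 2 (`--per_device_train_batch_size`) × 8 (`--gradient_accumulation_steps`) = 128, which matches the Global Batch Size listed in the hyperparameter table above.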
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: 500 randomly selected questions from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
## Limitations
Explore-Instruct is still under development and needs a lot of improvement. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful work!
|
Wanfq/Explore_Instruct_Brainstorming_16k
|
[
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2310.09168",
"region:us"
] |
2023-10-12T13:28:06+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0"}
|
2023-10-16T01:18:38+00:00
|
[
"2310.09168"
] |
[
"en"
] |
TAGS
#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us
|
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
#### | [|
[|](URL Model </a> |
<a href=)](URL Paper </a> |
<a href=)
*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*
*† Sun Yat-sen University,
‡ Tencent AI Lab*
News
----
* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!
Contents
--------
* Overview
* Data Release
* Model Release
* Data Generation Process
* Fine-tuning
* Evaluation
* Limitations
* License
* Citation
* Acknowledgements
Overview
--------
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.

Data Release
------------
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
* 'instruction': 'str', describes the task the model should perform.
* 'input': 'str', optional context or input for the task.
* 'output': 'str', ground-truth output text for the task and input text.
The results of data-centric analysis are shown as follows:

Model Release
-------------
We release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
* Automatic evaluation:
* Human evaluation:

Data Generation Process
-----------------------
To generate the domain-specific instruction-tuning data, please follow the following commands step by step:
### Domain Space Exploration
### Instruction-Tuning Data Generation
### Task Pruning
### Data Filtering
### Data Sampling
Fine-tuning
-----------
We fine-tune LLaMA-7B with the following hyperparameters:
To reproduce the training procedure, please use the following command:
Evaluation
----------
The evaluation datasets for different domains are as follows:
* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\_eval\_set.jsonl)
* Math: From randomly selected 500 questions from the test set of MATH. (MATH\_eval\_set\_sample.jsonl)
The evaluation metrics for different domains are as follows:
* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
* Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
Limitations
-----------
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
License
-------
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
If you find this work is relevant with your research or applications, please feel free to cite our work!
Acknowledgments
---------------
This repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!
|
[
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
"TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n",
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
30,
772,
6,
9,
6,
5,
414
] |
[
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n"
] |
788a75a221ca7a05762a2882440ffc5da6cc2798
|
<p align="center" width="100%">
</p>
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of the dataset: a basic and an extended version. The basic version contains 10k instruction-tuning examples, and the extended version contains 16k, 32k, and 64k instruction-tuning examples for the three domains, respectively. Each dataset is a structured data file in JSON format: a list of dictionaries, with each dictionary containing the following fields:
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
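To make the record format concrete, the sketch below renders one such dictionary into an Alpaca-style prompt. The record is hypothetical, and the exact template selected by `--prompt_type alpaca` in this repo is an assumption based on the standard Stanford-Alpaca format.

```
# Illustrative sketch only: format a (hypothetical) Explore-Instruct record as an
# Alpaca-style prompt. The template is assumed from Stanford-Alpaca, not taken from this repo.
record = {
    "instruction": "Solve the equation for x.",
    "input": "2x + 3 = 11",
    "output": "x = 4",
}

if record["input"]:
    prompt = (
        "Below is an instruction that describes a task, paired with an input that provides "
        "further context. Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Input:\n{record['input']}\n\n"
        "### Response:\n"
    )
else:
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        "### Response:\n"
    )

print(prompt + record["output"])
```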
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of the model: a basic and an extended version, each trained with the corresponding version of the dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
--save_dir ./en_data/demo_domain \ # input dir containing the current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
--save_dir ./en_data/demo_domain_exploration \ # input dir containing the current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # number of examples per task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
--save_dir ./en_data/demo_domain_generation \ # input dir containing the current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
--save_dir ./en_data/demo_domain_pruning \ # input dir containing the current domain tree for data filtering
--out_dir ./en_data/demo_domain_filtering \ # output dir of the domain tree with filtered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
--save_dir ./en_data/demo_domain_filtering \ # input dir containing the current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
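Note that these five steps form a pipeline: each stage's `--out_dir` becomes the next stage's `--save_dir` (demo_domain → demo_domain_exploration → demo_domain_generation → demo_domain_pruning → demo_domain_filtering → demo_domain_sampling), and the final sampled directory is the one passed to `--data_path` when fine-tuning below.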
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B | 128 | 2e-5 | 3 | 512 | 0 |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: 500 randomly selected questions from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
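`auto_eval.py` is not reproduced here; conceptually, the Accuracy Rate reported for the math domain reduces to the fraction of evaluation questions answered correctly. A minimal sketch of that reduction (illustrative only, assuming predicted and reference answers have already been extracted and normalized):

```
# Illustrative sketch only: Accuracy Rate = correct / total * 100.
# Assumes answers are already extracted and normalized; not the actual auto_eval.py logic.
def accuracy_rate(predictions, references):
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

print(accuracy_rate(["4", "7"], ["4", "8"]))  # -> 50.0
```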
## Limitations
Explore-Instruct is still under development and needs a lot of improvement. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful work!
|
Wanfq/Explore_Instruct_Math_10k
|
[
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2310.09168",
"region:us"
] |
2023-10-12T13:29:28+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0"}
|
2023-10-16T01:19:13+00:00
|
[
"2310.09168"
] |
[
"en"
] |
TAGS
#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us
|
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
#### | [|
[|](URL Model </a> |
<a href=)](URL Paper </a> |
<a href=)
*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*
*† Sun Yat-sen University,
‡ Tencent AI Lab*
News
----
* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!
Contents
--------
* Overview
* Data Release
* Model Release
* Data Generation Process
* Fine-tuning
* Evaluation
* Limitations
* License
* Citation
* Acknowledgements
Overview
--------
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.

Data Release
------------
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
* 'instruction': 'str', describes the task the model should perform.
* 'input': 'str', optional context or input for the task.
* 'output': 'str', ground-truth output text for the task and input text.
The results of data-centric analysis are shown as follows:

Model Release
-------------
We release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
* Automatic evaluation:
* Human evaluation:

Data Generation Process
-----------------------
To generate the domain-specific instruction-tuning data, please follow the following commands step by step:
### Domain Space Exploration
### Instruction-Tuning Data Generation
### Task Pruning
### Data Filtering
### Data Sampling
Fine-tuning
-----------
We fine-tune LLaMA-7B with the following hyperparameters:
To reproduce the training procedure, please use the following command:
Evaluation
----------
The evaluation datasets for different domains are as follows:
* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\_eval\_set.jsonl)
* Math: From randomly selected 500 questions from the test set of MATH. (MATH\_eval\_set\_sample.jsonl)
The evaluation metrics for different domains are as follows:
* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
* Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
Limitations
-----------
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
License
-------
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
If you find this work is relevant with your research or applications, please feel free to cite our work!
Acknowledgments
---------------
This repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!
|
[
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
"TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n",
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
30,
772,
6,
9,
6,
5,
414
] |
[
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n"
] |
944b06db63e7b9fc6cc9650ca2432a4545586267
|
<p align="center" width="100%">
</p>
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
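As a rough illustration of these two operations, the sketch below walks a task tree with lookahead expansion and backtracking. The node structure and the `llm_propose_*` helpers are hypothetical stand-ins for LLM calls; the actual exploration is implemented by `generate_instruction.py` (see [Data Generation Process](#data-generation-process)).
```python
# Illustrative sketch only: not the project's implementation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskNode:
    name: str
    depth: int
    parent: Optional["TaskNode"] = None
    children: List["TaskNode"] = field(default_factory=list)

def llm_propose_subtasks(node: TaskNode, k: int) -> List[str]:
    return []  # in practice: ask the LLM assistant for k fine-grained sub-tasks of node.name

def llm_propose_siblings(node: TaskNode) -> List[str]:
    return []  # in practice: ask the LLM assistant for alternative branches at the same level

def explore(root: TaskNode, breadth: List[int], max_depth: int) -> TaskNode:
    frontier = [root]
    while frontier:
        node = frontier.pop(0)
        if node.depth >= max_depth:
            continue
        # Lookahead: expand the current task into fine-grained sub-tasks.
        for sub in llm_propose_subtasks(node, k=breadth[node.depth]):
            child = TaskNode(sub, node.depth + 1, parent=node)
            node.children.append(child)
            frontier.append(child)
        # Backtracking: widen the search boundary with alternative sibling tasks.
        if node.parent is not None:
            for alt in llm_propose_siblings(node):
                sibling = TaskNode(alt, node.depth, parent=node.parent)
                node.parent.children.append(sibling)
                frontier.append(sibling)
    return root
```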
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning examples and the extended version contains 16k, 32k, and 64k instruction-tuning examples for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
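For a quick look at these fields, the released data can be loaded directly with the `datasets` library; the snippet below uses the math-domain extended release as an example (the `train` split name is an assumption):
```python
from datasets import load_dataset

# Repository id of the math-domain extended (64k) release.
ds = load_dataset("Wanfq/Explore_Instruct_Math_64k")
example = ds["train"][0]

print(example["instruction"])  # task description
print(example["input"])        # optional context (may be empty)
print(example["output"])       # ground-truth response
```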
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
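As a reading aid for the brainstorming and rewriting tables above, the reported Beat Rate is consistent with the share of wins among decisive (non-tie) comparisons:
```python
def beat_rate(win: int, tie: int, lose: int) -> float:
    """Beat rate in percent: wins over decisive (non-tie) comparisons."""
    return 100.0 * win / (win + lose)

print(f"{beat_rate(194, 1, 13):.2f}")   # 93.72, Explore-LM vs Domain-Curated-LM (brainstorming)
print(f"{beat_rate(114, 56, 38):.2f}")  # 75.00, Explore-LM vs Domain-Instruct-LM (brainstorming)
```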
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
    --save_dir ./en_data/demo_domain \ # input dir containing the current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
    --save_dir ./en_data/demo_domain_exploration \ # input dir containing the current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # data number for task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
    --save_dir ./en_data/demo_domain_generation \ # input dir containing the current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
    --save_dir ./en_data/demo_domain_pruning \ # input dir containing the current domain tree for data filtering
    --out_dir ./en_data/demo_domain_filtering \ # output dir of the domain tree with filtered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
    --save_dir ./en_data/demo_domain_filtering \ # input dir containing the current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B | 128 | 2e-5 | 3 | 512| 0 |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: 500 randomly selected questions from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
## Limitations
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful works!
|
Wanfq/Explore_Instruct_Math_64k
|
[
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2310.09168",
"region:us"
] |
2023-10-12T13:29:49+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0"}
|
2023-10-16T01:19:56+00:00
|
[
"2310.09168"
] |
[
"en"
] |
TAGS
#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us
|
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
#### | [|
[|](URL Model </a> |
<a href=)](URL Paper </a> |
<a href=)
*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*
*† Sun Yat-sen University,
‡ Tencent AI Lab*
News
----
* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!
Contents
--------
* Overview
* Data Release
* Model Release
* Data Generation Process
* Fine-tuning
* Evaluation
* Limitations
* License
* Citation
* Acknowledgements
Overview
--------
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.

Data Release
------------
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
* 'instruction': 'str', describes the task the model should perform.
* 'input': 'str', optional context or input for the task.
* 'output': 'str', ground-truth output text for the task and input text.
The results of data-centric analysis are shown as follows:

Model Release
-------------
We release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
* Automatic evaluation:
* Human evaluation:

Data Generation Process
-----------------------
To generate the domain-specific instruction-tuning data, please follow the following commands step by step:
### Domain Space Exploration
### Instruction-Tuning Data Generation
### Task Pruning
### Data Filtering
### Data Sampling
Fine-tuning
-----------
We fine-tune LLaMA-7B with the following hyperparameters:
To reproduce the training procedure, please use the following command:
Evaluation
----------
The evaluation datasets for different domains are as follows:
* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\_eval\_set.jsonl)
* Math: From randomly selected 500 questions from the test set of MATH. (MATH\_eval\_set\_sample.jsonl)
The evaluation metrics for different domains are as follows:
* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
* Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
Limitations
-----------
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
License
-------
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
If you find this work is relevant with your research or applications, please feel free to cite our work!
Acknowledgments
---------------
This repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!
|
[
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
"TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n",
"#### | [| \n [|](URL Model </a> |\n<a href=)](URL Paper </a> |\n<a href=)\n\n\n*Fanqi Wan†, Xinting Huang‡, Tao Yang†, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡*\n\n\n*† Sun Yat-sen University,\n‡ Tencent AI Lab*\n\n\n\nNews\n----\n\n\n* Oct 16, 2023: We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on Huggingface Datasets! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on Huggingface Models. Happy exploring and instructing!\n\n\nContents\n--------\n\n\n* Overview\n* Data Release\n* Model Release\n* Data Generation Process\n* Fine-tuning\n* Evaluation\n* Limitations\n* License\n* Citation\n* Acknowledgements\n\n\nOverview\n--------\n\n\nWe propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, not necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:\n\n\n* Lookahead delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks\n* Backtracking seeks alternative branches to widen the search boundary, hence extending the domain spectrum.\n\n\n\n \n\n\n\n\nData Release\n------------\n\n\nWe release the Explore-Instruct data in brainstorming, rewriting, and math domains on Huggingface Datasets. Each domain includes two versions of datasets: the basic and extended version. The base version contains 10k instruction-tuning data and the extended version contains 16k, 32k, and 64k instruction-tuning data for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:\n\n\n* 'instruction': 'str', describes the task the model should perform.\n* 'input': 'str', optional context or input for the task.\n* 'output': 'str', ground-truth output text for the task and input text.\n\n\nThe results of data-centric analysis are shown as follows:\n\n\n\n \n\n\n\n\n\nModel Release\n-------------\n\n\nWe release the Explore-LM models in brainstorming, rewriting, and math domains on Huggingface Models. Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.\n\n\nThe results of automatic and human evaluation in three domains are shown as follows:\n\n\n* Automatic evaluation:\n\n\n\n\n\n* Human evaluation:\n\n\n\n \n\n\n\n\nData Generation Process\n-----------------------\n\n\nTo generate the domain-specific instruction-tuning data, please follow the following commands step by step:",
"### Domain Space Exploration",
"### Instruction-Tuning Data Generation",
"### Task Pruning",
"### Data Filtering",
"### Data Sampling\n\n\nFine-tuning\n-----------\n\n\nWe fine-tune LLaMA-7B with the following hyperparameters:\n\n\n\nTo reproduce the training procedure, please use the following command:\n\n\nEvaluation\n----------\n\n\nThe evaluation datasets for different domains are as follows:\n\n\n* Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. (en\\_eval\\_set.jsonl)\n* Math: From randomly selected 500 questions from the test set of MATH. (MATH\\_eval\\_set\\_sample.jsonl)\n\n\nThe evaluation metrics for different domains are as follows:\n\n\n* Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.\n* Math: Accuracy Rate metric in solving math problems.\n\n\nThe automatic evaluation commands for different domains are as follows:\n\n\nLimitations\n-----------\n\n\nExplore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.\n\n\nLicense\n-------\n\n\nExplore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).\n\n\nIf you find this work is relevant with your research or applications, please feel free to cite our work!\n\n\nAcknowledgments\n---------------\n\n\nThis repo benefits from Stanford-Alpaca and Vicuna. Thanks for their wonderful works!"
] |
[
30,
772,
6,
9,
6,
5,
414
] |
[
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #arxiv-2310.09168 #region-us \n"
] |
cdb1dc34ba1fa9a83d1406d29580f7f3c242a1a1
|
# Dataset Card for "artery-ultrasound-siit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RyuNumchon/artery-ultrasound-siit
|
[
"region:us"
] |
2023-10-12T13:44:36+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 230791779.0, "num_examples": 100}], "download_size": 0, "dataset_size": 230791779.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-08T07:45:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "artery-ultrasound-siit"
More Information needed
|
[
"# Dataset Card for \"artery-ultrasound-siit\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"artery-ultrasound-siit\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"artery-ultrasound-siit\"\n\nMore Information needed"
] |
6822125504159be8f2d33f976bc05e8a1734980b
|
# Dataset Card for "xlmr_test_10shot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_test_10shot
|
[
"region:us"
] |
2023-10-12T13:59:28+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 948566820, "num_examples": 900000}], "download_size": 204962722, "dataset_size": 948566820}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T14:00:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_test_10shot"
More Information needed
|
[
"# Dataset Card for \"xlmr_test_10shot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_test_10shot\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_test_10shot\"\n\nMore Information needed"
] |
dd0b54d0fbcc02efceed30603f719cedee81c74a
|
# Dataset Card for "eval_tag_nq_dev_v10_first"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/eval_tag_nq_dev_v10_first
|
[
"region:us"
] |
2023-10-12T14:26:41+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "titles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3200, "num_examples": 10}, {"name": "validation", "num_bytes": 2312059, "num_examples": 6515}], "download_size": 1383725, "dataset_size": 2315259}}
|
2023-10-12T14:36:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eval_tag_nq_dev_v10_first"
More Information needed
|
[
"# Dataset Card for \"eval_tag_nq_dev_v10_first\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eval_tag_nq_dev_v10_first\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eval_tag_nq_dev_v10_first\"\n\nMore Information needed"
] |
36bcbdb88929b0124455466980016e8ea88ae918
|
# Dataset Card for "eval_tag_nq_dev_v11_first"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/eval_tag_nq_dev_v11_first
|
[
"region:us"
] |
2023-10-12T14:26:50+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "titles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3340, "num_examples": 10}, {"name": "validation", "num_bytes": 2403269, "num_examples": 6515}], "download_size": 1389023, "dataset_size": 2406609}}
|
2023-10-12T14:36:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eval_tag_nq_dev_v11_first"
More Information needed
|
[
"# Dataset Card for \"eval_tag_nq_dev_v11_first\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eval_tag_nq_dev_v11_first\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eval_tag_nq_dev_v11_first\"\n\nMore Information needed"
] |
3d9d2ec3d29eb18a44d691fd8784d2552794327c
|
# Dataset Card for stackoverflow_feedback_demo
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("frascuchon/stackoverflow_feedback_demo")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("frascuchon/stackoverflow_feedback_demo")
```
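Once loaded, each row follows the flattened format shown below in [Data Instances](#data-instances); for example (assuming a `train` split):
```python
from datasets import load_dataset

ds = load_dataset("frascuchon/stackoverflow_feedback_demo", split="train")  # split name assumed

record = ds[0]
print(record["title"])
# Each question carries a list of annotator responses:
for resp in record["answer_quality"]:
    print(resp["status"], resp["value"])
```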
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| title | Title | text | True | False |
| question | Question | text | True | True |
| answer | Answer | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| title_question_fit | Does the title match the question? | label_selection | True | N/A | ['yes', 'no'] |
| tags | What are the topics mentioned in this question? | multi_label_selection | True | N/A | ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'] |
| answer_quality | Rate the quality of the answer: | rating | True | N/A | [1, 2, 3, 4, 5] |
| new_answer | If needed, correct the answer | text | False | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value/s of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
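As an illustration of how a suggestion attaches to a question, the sketch below builds a single record with a suggested label using the Argilla Python client. This is a hypothetical example (the `suggestions` lists in this dataset are currently empty) and assumes Argilla >= 1.13:
```python
import argilla as rg

# Hypothetical record with a pre-filled suggestion for the "title_question_fit" question.
record = rg.FeedbackRecord(
    fields={
        "title": "How can I find the full path to a font from its display name on a Mac?",
        "question": "<p>...</p>",
        "answer": "<p>...</p>",
    },
    suggestions=[
        {
            "question_name": "title_question_fit",  # must match a configured question name
            "value": "yes",
            "agent": "my-model-v1",                 # optional: who/what produced the suggestion
        }
    ],
)
```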
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"answer": "\u003cp\u003eUnfortunately the only API that isn\u0027t deprecated is located in the ApplicationServices framework, which doesn\u0027t have a bridge support file, and thus isn\u0027t available in the bridge. If you\u0027re wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.\u003c/p\u003e\r\n\r\n\u003cp\u003eCocoa doesn\u0027t have any native support, at least as of 10.5, for getting the location of a font.\u003c/p\u003e",
"question": "\u003cp\u003eI am using the Photoshop\u0027s javascript API to find the fonts in a given PSD.\u003c/p\u003e\n\n\u003cp\u003eGiven a font name returned by the API, I want to find the actual physical font file that that font name corresponds to on the disc.\u003c/p\u003e\n\n\u003cp\u003eThis is all happening in a python program running on OSX so I guess I\u0027m looking for one of:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eSome Photoshop javascript\u003c/li\u003e\n\u003cli\u003eA Python function\u003c/li\u003e\n\u003cli\u003eAn OSX API that I can call from python\u003c/li\u003e\n\u003c/ul\u003e\n",
"title": "How can I find the full path to a font from its display name on a Mac?"
},
"metadata": {},
"responses": [
{
"status": "submitted",
"user_id": "5a053951-24cd-4c9d-9e0c-8a054b95b812",
"values": {
"answer_quality": {
"value": 1
},
"new_answer": {
"value": "Sample answer"
},
"tags": {
"value": [
"tkinter"
]
},
"title_question_fit": {
"value": "yes"
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"answer": "\u003cp\u003eUnfortunately the only API that isn\u0027t deprecated is located in the ApplicationServices framework, which doesn\u0027t have a bridge support file, and thus isn\u0027t available in the bridge. If you\u0027re wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.\u003c/p\u003e\r\n\r\n\u003cp\u003eCocoa doesn\u0027t have any native support, at least as of 10.5, for getting the location of a font.\u003c/p\u003e",
"answer_quality": [
{
"status": "submitted",
"user_id": "5a053951-24cd-4c9d-9e0c-8a054b95b812",
"value": 1
}
],
"answer_quality-suggestion": null,
"answer_quality-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"external_id": null,
"metadata": "{}",
"new_answer": [
{
"status": "submitted",
"user_id": "5a053951-24cd-4c9d-9e0c-8a054b95b812",
"value": "Sample answer"
}
],
"new_answer-suggestion": null,
"new_answer-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"question": "\u003cp\u003eI am using the Photoshop\u0027s javascript API to find the fonts in a given PSD.\u003c/p\u003e\n\n\u003cp\u003eGiven a font name returned by the API, I want to find the actual physical font file that that font name corresponds to on the disc.\u003c/p\u003e\n\n\u003cp\u003eThis is all happening in a python program running on OSX so I guess I\u0027m looking for one of:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eSome Photoshop javascript\u003c/li\u003e\n\u003cli\u003eA Python function\u003c/li\u003e\n\u003cli\u003eAn OSX API that I can call from python\u003c/li\u003e\n\u003c/ul\u003e\n",
"tags": [
{
"status": "submitted",
"user_id": "5a053951-24cd-4c9d-9e0c-8a054b95b812",
"value": [
"tkinter"
]
}
],
"tags-suggestion": null,
"tags-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"title": "How can I find the full path to a font from its display name on a Mac?",
"title_question_fit": [
{
"status": "submitted",
"user_id": "5a053951-24cd-4c9d-9e0c-8a054b95b812",
"value": "yes"
}
],
"title_question_fit-suggestion": null,
"title_question_fit-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **title** is of type `text`.
* **question** is of type `text`.
* **answer** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **title_question_fit** is of type `label_selection` with the following allowed values ['yes', 'no'].
* **tags** is of type `multi_label_selection` with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].
* **answer_quality** is of type `rating` with the following allowed values [1, 2, 3, 4, 5].
* (optional) **new_answer** is of type `text`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **title_question_fit-suggestion** is of type `label_selection` with the following allowed values ['yes', 'no'].
* (optional) **tags-suggestion** is of type `multi_label_selection` with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].
* (optional) **answer_quality-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5].
* (optional) **new_answer-suggestion** is of type `text`.
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
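As a quick, non-authoritative illustration of the structure described above, the sketch below loads the records with `datasets` and inspects the fields, responses, and suggestions of a single record. It assumes the dataset is published on the Hugging Face Hub as `frascuchon/stackoverflow_feedback_demo`.
```python
# Minimal sketch (assumes the dataset is published on the Hub as
# "frascuchon/stackoverflow_feedback_demo"): load the records with `datasets`
# and inspect the fields, responses, and suggestions of one record.
from datasets import load_dataset

ds = load_dataset("frascuchon/stackoverflow_feedback_demo", split="train")

record = ds[0]
print(record["title"])                  # text field
print(record["question"][:200])         # text field (HTML-escaped body)
print(record["answer"][:200])           # text field (HTML-escaped body)

# Question responses are lists of {status, user_id, value} dicts.
for response in record["answer_quality"]:
    print(response["status"], response["value"])

# Suggestions are optional and may be None when no suggestion was provided.
print(record["answer_quality-suggestion"])
print(record["answer_quality-suggestion-metadata"])
```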
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
frascuchon/stackoverflow_feedback_demo
|
[
"size_categories:n<1K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] |
2023-10-12T14:34:19+00:00
|
{"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]}
|
2023-10-18T14:52:25+00:00
|
[] |
[] |
TAGS
#size_categories-n<1K #rlfh #argilla #human-feedback #region-us
|
Dataset Card for stackoverflow\_feedback\_demo
==============================================
This dataset has been created with Argilla.
As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:
### Load with 'datasets'
To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:
### Supported Tasks and Leaderboards
This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.
There are no leaderboards associated with this dataset.
### Languages
Dataset Structure
-----------------
### Data in Argilla
The dataset is created in Argilla with: fields, questions, suggestions, and guidelines.
The fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.
NEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.
Finally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
While the same record in HuggingFace 'datasets' looks as follows:
### Data Fields
Among the dataset fields, we differentiate between the following:
* Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.
+ title is of type 'text'.
+ question is of type 'text'.
+ answer is of type 'text'.
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.
+ title\_question\_fit is of type 'label\_selection' with the following allowed values ['yes', 'no'].
+ tags is of type 'multi\_label\_selection' with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].
+ answer\_quality is of type 'rating' with the following allowed values [1, 2, 3, 4, 5].
+ (optional) new\_answer is of type 'text'.
* NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
+ (optional) title\_question\_fit-suggestion is of type 'label\_selection' with the following allowed values ['yes', 'no'].
+ (optional) tags-suggestion is of type 'multi\_label\_selection' with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].
+ (optional) answer\_quality-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5].
+ (optional) new\_answer-suggestion is of type 'text'.
Additionally, we also have one more field which is optional and is the following:
* external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is 'train'.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation guidelines
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
|
[
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nNEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.\n\n\nFinally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ title is of type 'text'.\n\t+ question is of type 'text'.\n\t+ answer is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ title\\_question\\_fit is of type 'label\\_selection' with the following allowed values ['yes', 'no'].\n\t+ tags is of type 'multi\\_label\\_selection' with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].\n\t+ answer\\_quality is of type 'rating' with the following allowed values [1, 2, 3, 4, 5].\n\t+ (optional) new\\_answer is of type 'text'.\n* NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) title\\_question\\_fit-suggestion is of type 'label\\_selection' with the following allowed values ['yes', 'no'].\n\t+ (optional) tags-suggestion is of type 'multi\\_label\\_selection' with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].\n\t+ (optional) answer\\_quality-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5].\n\t+ (optional) new\\_answer-suggestion is of type 'text'.\n\n\nAdditionally, we also have one more field which is optional and is the following:\n\n\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nNEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.\n\n\nFinally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ title is of type 'text'.\n\t+ question is of type 'text'.\n\t+ answer is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ title\\_question\\_fit is of type 'label\\_selection' with the following allowed values ['yes', 'no'].\n\t+ tags is of type 'multi\\_label\\_selection' with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].\n\t+ answer\\_quality is of type 'rating' with the following allowed values [1, 2, 3, 4, 5].\n\t+ (optional) new\\_answer is of type 'text'.\n* NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) title\\_question\\_fit-suggestion is of type 'label\\_selection' with the following allowed values ['yes', 'no'].\n\t+ (optional) tags-suggestion is of type 'multi\\_label\\_selection' with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].\n\t+ (optional) answer\\_quality-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5].\n\t+ (optional) new\\_answer-suggestion is of type 'text'.\n\n\nAdditionally, we also have one more field which is optional and is the following:\n\n\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
27,
162,
40,
53,
68,
11,
219,
40,
1059,
27,
7,
4,
10,
10,
5,
5,
5,
9,
18,
7,
8,
14,
6,
6,
5
] |
[
"passage: TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------",
"passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nNEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.\n\n\nFinally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:"
] |
f7e9f4a51e316d0be3c91060c5a1b95032e6c59c
|
# Stork
**Homepage**: https://github.com/ih-lab/STORK/ \
**Publication Date**: 2019-01-18 \
**License**: [MIT](https://github.com/ih-lab/STORK/blob/master/LICENSE)

|
1aurent/STORK
|
[
"task_categories:image-classification",
"size_categories:n<1K",
"license:mit",
"biology",
"IVF",
"embryo",
"region:us"
] |
2023-10-12T14:41:07+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["image-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "good", "1": "poor"}}}}], "splits": [{"name": "train", "num_bytes": 4513394, "num_examples": 84}, {"name": "test", "num_bytes": 729815, "num_examples": 14}], "download_size": 5243240, "dataset_size": 5243209}, "tags": ["biology", "IVF", "embryo"]}
|
2023-10-12T14:49:47+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #size_categories-n<1K #license-mit #biology #IVF #embryo #region-us
|
# Stork
Homepage: URL \
Publication Date: 2019-01-18 \
License: MIT
!STORK logo
|
[
"# Stork\n\nHomepage: URL \\\nPublication Date: 2019-01-18 \\\nLicense: MIT\n\n!STORK logo"
] |
[
"TAGS\n#task_categories-image-classification #size_categories-n<1K #license-mit #biology #IVF #embryo #region-us \n",
"# Stork\n\nHomepage: URL \\\nPublication Date: 2019-01-18 \\\nLicense: MIT\n\n!STORK logo"
] |
[
42,
24
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-n<1K #license-mit #biology #IVF #embryo #region-us \n# Stork\n\nHomepage: URL \\\nPublication Date: 2019-01-18 \\\nLicense: MIT\n\n!STORK logo"
] |
afeaf86093a401432f81e9000d8039c970223046
|
# Dataset Card for "MedNLI_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/MedNLI_test
|
[
"region:us"
] |
2023-10-12T14:48:06+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "gold", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4828284, "num_examples": 11232}, {"name": "valid", "num_bytes": 606323, "num_examples": 1395}, {"name": "test", "num_bytes": 605516, "num_examples": 1422}], "download_size": 0, "dataset_size": 6040123}}
|
2023-10-18T18:46:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "MedNLI_test"
More Information needed
|
[
"# Dataset Card for \"MedNLI_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"MedNLI_test\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"MedNLI_test\"\n\nMore Information needed"
] |
264221065e76dc7482237d1396a652a8c08a440b
|
# Dataset Card for "e74ecf3f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/e74ecf3f
|
[
"region:us"
] |
2023-10-12T14:55:10+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 158, "num_examples": 10}], "download_size": 1309, "dataset_size": 158}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T14:55:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "e74ecf3f"
More Information needed
|
[
"# Dataset Card for \"e74ecf3f\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"e74ecf3f\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"e74ecf3f\"\n\nMore Information needed"
] |
9122256f9d14752ed80fb9b7d158e21d9f9261aa
|
# Dataset Card for "BIRD-SQL-data-train"
Data from [BIRD-SQL](https://bird-bench.github.io/) benchmark training set.
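For orientation, a minimal sketch of loading the training split and printing one example is shown below; the column names (`db_id`, `question`, `evidence`, `SQL`, `schema`) follow the feature list in this card's metadata, and the snippet is an illustration rather than a reference implementation.

```python
# Minimal sketch: load the BIRD-SQL training records and inspect one example.
# Column names follow the dataset_info in this card's metadata.
from datasets import load_dataset

ds = load_dataset("xu3kev/BIRD-SQL-data-train", split="train")

ex = ds[0]
print(ex["db_id"])      # identifier of the target database
print(ex["question"])   # natural-language question
print(ex["evidence"])   # external knowledge / hint (may be empty)
print(ex["schema"])     # serialized database schema
print(ex["SQL"])        # gold SQL query
```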
|
xu3kev/BIRD-SQL-data-train
|
[
"region:us"
] |
2023-10-12T14:56:51+00:00
|
{"dataset_info": {"features": [{"name": "db_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "evidence", "dtype": "string"}, {"name": "SQL", "dtype": "string"}, {"name": "schema", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49782288, "num_examples": 9428}], "download_size": 2331031, "dataset_size": 49782288}}
|
2023-10-12T15:00:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "BIRD-SQL-data-train"
Data from BIRD-SQL benchmark training set.
|
[
"# Dataset Card for \"BIRD-SQL-data-train\"\nData from BIRD-SQL benchmark training set."
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"BIRD-SQL-data-train\"\nData from BIRD-SQL benchmark training set."
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"BIRD-SQL-data-train\"\nData from BIRD-SQL benchmark training set."
] |
12ebae739eecbb76ab8f5ca469b0e172aaaf906b
|
Description: A dataset built from transcriptions of the full 2016 playlist of the QA TV show "Na Mira da Verdade" (Truth on Sight [free translation]) from the 7th Day Adventist Church in Brazil.
UNKNOWN BIASES. I did not check the whole dataset; I only applied some filters for irrelevant content and malformed questions and answers.
MAY CONTAIN SOME PEOPLE'S NAMES identified in the show.
NOT BULLETPROOF. The transcription was not perfect, and the method used to build the QA pairs may include some malformed words.
Also, not all biblical references were checked.
|
skoll520/namiradaverdadeY2016-Bible-QA-pt-br-7th-day-adventist
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:pt",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-10-12T14:58:59+00:00
|
{"language": ["pt"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]}
|
2023-10-12T15:15:53+00:00
|
[] |
[
"pt"
] |
TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-cc-by-nc-4.0 #region-us
|
Description: From transcription to dataset of full playlist from 2016 of TV show of QA from 7th Day Adventist Church in Brazil, called "Na Mira da Verdade" (Truth on Sight [free translation])
UNKOWN BIASES. I didn't checked all the dataset, just made some filter about some irrelevant content and malformed questions and answers.
MAY HAVE SOME PEOPLE NAME identified in the show.
NOT BULLETPROOF. Since the transcription was not perfect and the method used to make QA may include some malformed words.
Also, didn't checked all biblical references.
|
[] |
[
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-cc-by-nc-4.0 #region-us \n"
] |
[
47
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-cc-by-nc-4.0 #region-us \n"
] |
f4d58f4319530e6a4802373099ca4249ecdbaf5f
|
# Dataset Card for "PubmedQA_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/PubmedQA_train
|
[
"region:us"
] |
2023-10-12T15:17:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 701570152, "num_examples": 211269}, {"name": "valid", "num_bytes": 159299, "num_examples": 50}, {"name": "test", "num_bytes": 1622241, "num_examples": 500}], "download_size": 359787344, "dataset_size": 703351692}}
|
2023-11-30T16:12:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "PubmedQA_train"
More Information needed
|
[
"# Dataset Card for \"PubmedQA_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"PubmedQA_train\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"PubmedQA_train\"\n\nMore Information needed"
] |
c9cd931a3a58e189b7341c1181c7367e2f8c23e6
|
# Dataset Card for "trivia_qa5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erbacher/trivia_qa5
|
[
"region:us"
] |
2023-10-12T15:20:52+00:00
|
{"dataset_info": {"features": [{"name": "target", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "gold_generation", "sequence": "string"}, {"name": "text", "dtype": "string"}, {"name": "results", "dtype": "string"}, {"name": "em", "dtype": "float64"}, {"name": "hal_m", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73599939, "num_examples": 78785}, {"name": "dev", "num_bytes": 8307250, "num_examples": 8837}, {"name": "test", "num_bytes": 10650305, "num_examples": 11313}], "download_size": 33930791, "dataset_size": 92557494}}
|
2023-10-12T21:25:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "trivia_qa5"
More Information needed
|
[
"# Dataset Card for \"trivia_qa5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"trivia_qa5\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"trivia_qa5\"\n\nMore Information needed"
] |
e814f2a54adb40cc1434baca90174705aa1e28f1
|
# Dataset Card for "nq_open5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erbacher/nq_open5
|
[
"region:us"
] |
2023-10-12T15:24:53+00:00
|
{"dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "gold_generation", "sequence": "string"}, {"name": "target", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "results", "dtype": "string"}, {"name": "em", "dtype": "float64"}, {"name": "hal_m", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41737579, "num_examples": 79168}, {"name": "dev", "num_bytes": 4612579, "num_examples": 8757}, {"name": "test", "num_bytes": 1950822, "num_examples": 3610}], "download_size": 13126477, "dataset_size": 48300980}}
|
2023-10-12T19:53:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nq_open5"
More Information needed
|
[
"# Dataset Card for \"nq_open5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nq_open5\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nq_open5\"\n\nMore Information needed"
] |
3b3dd312083ea82817025929e590625751562975
|
# Dataset Card for "xlmr_int_hard_curr_trn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_hard_curr_trn
|
[
"region:us"
] |
2023-10-12T15:35:05+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 427097407, "num_examples": 339150}], "download_size": 120986396, "dataset_size": 427097407}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T15:35:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_hard_curr_trn"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_hard_curr_trn\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_hard_curr_trn\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_hard_curr_trn\"\n\nMore Information needed"
] |
234b351bc5dda99a44403b5e2311579293a3b2ee
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_hard_curr_trn_ep2
|
[
"region:us"
] |
2023-10-12T15:36:31+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 284672773, "num_examples": 226100}], "download_size": 80604529, "dataset_size": 284672773}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T15:36:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep2"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_hard_curr_trn_ep2\"\n\nMore Information needed"
] |
0234ba404518db458d6fbe85030e7024ec3c3ffa
|
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.
### Data Collection
- **Data Collection Source:** The data is sourced from the [Ganjoor project](https://github.com/ganjoor/). The specific database file can be found in the [releases section](https://github.com/ganjoor/desktop/releases/tag/v2.81) of their GitHub repository.
- **Time Period:** Oct-12-2023
- **Collection Methods:** The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into a CSV file with the following columns:
- `context`: A static string which is "Persian Poetry or شعر فارسی".
- `question`: A string that asks for a sample poem from a specific poet in the format "یک نمونه از شعر [POET_NAME]".
- `answer`: Text of a hemistich or verse. Verses of a hemistich are TAB SEPARATED
- `answer_start`: The starting character index of `answer` within `context` (Note: this is always -1 in the current dataset as `answer` is not a substring of `context`).
### Data Example
```plaintext
context,question,answer,answer_start
Persian Poetry,یک نمونه از شعر صائب تبریزی,خار نتواند گرفتن دامن ریگ روان رهنورد شوق، افسردن نمی داند که چیست,-1
```
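As a small illustration of the structure above, the sketch below loads the dataset and splits an answer into its TAB-separated parts. The repository id and split follow this card's configuration, but treat the snippet as a sketch to verify rather than a reference implementation.

```python
# Minimal sketch: load the train split (backed by poems-qa.csv per this card's
# configuration) and split the TAB-separated answer into its parts.
from datasets import load_dataset

ds = load_dataset("kakooch/persian-poetry-qa", split="train")

row = ds[0]
print(row["context"])                    # static context string
print(row["question"])                   # e.g. "یک نمونه از شعر صائب تبریزی"
for part in row["answer"].split("\t"):   # TAB-separated parts of the verse
    print(part)
print(row["answer_start"])               # always -1 in the current dataset
```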
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Question Answering
- Text Generation
- Language Modeling
- Style Analysis
### Challenges & Limitations
- The `answer_start` field is always -1 as the `answer` is not a substring of `context`. Depending on your use-case, you might need to adjust how `context` and `answer_start` are determined.
- The dataset does not contain long verses that are over 100 characters.
### License
GPL-2 (GNU General Public License), inherited from the original Ganjoor project
## Additional Information
### Citation
```
Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project. Available at: https://huggingface.co/datasets/persian_poetry
```
### Dataset Link
[Download the dataset from Hugging Face](https://huggingface.co/datasets/persian_poetry)
### Contact
Email: [[email protected]](mailto:[email protected]) | GitHub: [kakooch](https://github.com/kakooch)
---
*This README was generated by Kakooch.*
|
kakooch/persian-poetry-qa
|
[
"language:fa",
"license:gpl-2.0",
"region:us"
] |
2023-10-12T15:38:52+00:00
|
{"language": ["fa"], "license": "gpl-2.0", "name": "Persian Poetry QA Dataset", "description": "This dataset is structured in a question-answering format derived from a rich collection of Persian poems along with metadata about the poets and the verses. \nIt is designed to be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as Question Answering, Text Generation, Language Modeling, and Style Analysis.\n", "url": "https://github.com/ganjoor/desktop/releases/tag/v2.81", "citation": "Persian Poetry QA Dataset. Collected by Kakooch from the Ganjoor Project.\nAvailable at: https://huggingface.co/datasets/persian_poetry\n", "size": "Custom", "splits": {"train": {"description": "This split contains Persian poems structured for QA, where each row asks for a sample poem from a specific poet with the poem or verse as the answer."}, "validation": {"description": "This split contains random selection of 1% of Persian poems in original dataset."}}, "features": {"context": {"description": "A static string which is 'Persian Poetry or \u0634\u0639\u0631 \u0641\u0627\u0631\u0633\u06cc'.", "type": "string"}, "question": {"description": "A string that asks for a sample poem from a specific poet in the format '\u06cc\u06a9 \u0646\u0645\u0648\u0646\u0647 \u0627\u0632 \u0634\u0639\u0631 [POET_NAME]'.", "type": "string"}, "answer": {"description": "Text of a hemistich or verse.", "type": "string"}, "answer_start": {"description": "The starting character index of `answer` within `context` (Note: this is always -1 in the current dataset as `answer` is not a substring of `context`).", "type": "int32"}}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "poems-qa.csv"}]}]}
|
2023-10-14T09:22:45+00:00
|
[] |
[
"fa"
] |
TAGS
#language-Persian #license-gpl-2.0 #region-us
|
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.
### Data Collection
- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.
- Time Period: Oct-12-2023
- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into a CSV file with the following columns:
- 'context': A static string which is "Persian Poetry or شعر فارسی".
- 'question': A string that asks for a sample poem from a specific poet in the format "یک نمونه از شعر [POET_NAME]".
- 'answer': Text of a hemistich or verse. Verses of a hemistich are TAB SEPARATED
- 'answer_start': The starting character index of 'answer' within 'context' (Note: this is always -1 in the current dataset as 'answer' is not a substring of 'context').
### Data Example
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Question Answering
- Text Generation
- Language Modeling
- Style Analysis
### Challenges & Limitations
- The 'answer_start' field is always -1 as the 'answer' is not a substring of 'context'. Depending on your use-case, you might need to adjust how 'context' and 'answer_start' are determined.
- The dataset does not contain long verses that are over 100 characters.
### License
GPL-2 (GNU General Public License) ingerited from original ganjoor project
## Additional Information
### Dataset Link
Download the dataset from Hugging Face
### Contact
Email: kakooch@URL | GitHub: kakooch
---
*This README was generated by Kakooch.*
|
[
"# Persian Poetry Dataset",
"## Dataset Description",
"### Overview\n\nThis dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.",
"### Data Collection\n\n- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.\n- Time Period: Oct-12-2023\n- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.",
"### Data Structure\n\nThe dataset is structured into a CSV file with the following columns:\n- 'context': A static string which is \"Persian Poetry or شعر فارسی\".\n- 'question': A string that asks for a sample poem from a specific poet in the format \"یک نمونه از شعر [POET_NAME]\".\n- 'answer': Text of a hemistich or verse. Verses of a hemistich are TAB SEPARATED\n- 'answer_start': The starting character index of 'answer' within 'context' (Note: this is always -1 in the current dataset as 'answer' is not a substring of 'context').",
"### Data Example",
"## Dataset Usage",
"### Use Cases\n\nThis dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:\n- Question Answering\n- Text Generation\n- Language Modeling\n- Style Analysis",
"### Challenges & Limitations\n\n- The 'answer_start' field is always -1 as the 'answer' is not a substring of 'context'. Depending on your use-case, you might need to adjust how 'context' and 'answer_start' are determined.\n- The dataset does not contain long verses that are over 100 characters.",
"### License\n\nGPL-2 (GNU General Public License) ingerited from original ganjoor project",
"## Additional Information",
"### Dataset Link \n\nDownload the dataset from Hugging Face",
"### Contact \n\nEmail: kakooch@URL | GitHub: kakooch\n\n---\n\n*This README was generated by Kakooch.*"
] |
[
"TAGS\n#language-Persian #license-gpl-2.0 #region-us \n",
"# Persian Poetry Dataset",
"## Dataset Description",
"### Overview\n\nThis dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.",
"### Data Collection\n\n- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.\n- Time Period: Oct-12-2023\n- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.",
"### Data Structure\n\nThe dataset is structured into a CSV file with the following columns:\n- 'context': A static string which is \"Persian Poetry or شعر فارسی\".\n- 'question': A string that asks for a sample poem from a specific poet in the format \"یک نمونه از شعر [POET_NAME]\".\n- 'answer': Text of a hemistich or verse. Verses of a hemistich are TAB SEPARATED\n- 'answer_start': The starting character index of 'answer' within 'context' (Note: this is always -1 in the current dataset as 'answer' is not a substring of 'context').",
"### Data Example",
"## Dataset Usage",
"### Use Cases\n\nThis dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:\n- Question Answering\n- Text Generation\n- Language Modeling\n- Style Analysis",
"### Challenges & Limitations\n\n- The 'answer_start' field is always -1 as the 'answer' is not a substring of 'context'. Depending on your use-case, you might need to adjust how 'context' and 'answer_start' are determined.\n- The dataset does not contain long verses that are over 100 characters.",
"### License\n\nGPL-2 (GNU General Public License) ingerited from original ganjoor project",
"## Additional Information",
"### Dataset Link \n\nDownload the dataset from Hugging Face",
"### Contact \n\nEmail: kakooch@URL | GitHub: kakooch\n\n---\n\n*This README was generated by Kakooch.*"
] |
[
19,
6,
4,
78,
82,
155,
5,
5,
47,
81,
22,
5,
13,
30
] |
[
"passage: TAGS\n#language-Persian #license-gpl-2.0 #region-us \n# Persian Poetry Dataset## Dataset Description### Overview\n\nThis dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.### Data Collection\n\n- Data Collection Source: The data is sourced from the Ganjoor project. The specific database file can be found in the releases section of their GitHub repository.\n- Time Period: Oct-12-2023\n- Collection Methods: The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.### Data Structure\n\nThe dataset is structured into a CSV file with the following columns:\n- 'context': A static string which is \"Persian Poetry or شعر فارسی\".\n- 'question': A string that asks for a sample poem from a specific poet in the format \"یک نمونه از شعر [POET_NAME]\".\n- 'answer': Text of a hemistich or verse. Verses of a hemistich are TAB SEPARATED\n- 'answer_start': The starting character index of 'answer' within 'context' (Note: this is always -1 in the current dataset as 'answer' is not a substring of 'context').### Data Example## Dataset Usage### Use Cases\n\nThis dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:\n- Question Answering\n- Text Generation\n- Language Modeling\n- Style Analysis### Challenges & Limitations\n\n- The 'answer_start' field is always -1 as the 'answer' is not a substring of 'context'. Depending on your use-case, you might need to adjust how 'context' and 'answer_start' are determined.\n- The dataset does not contain long verses that are over 100 characters.### License\n\nGPL-2 (GNU General Public License) ingerited from original ganjoor project"
] |
fa8d9ed22e437d030d533b5a5bb00fa2c073156e
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep2_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_hard_curr_trn_ep2_lrg
|
[
"region:us"
] |
2023-10-12T15:46:26+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 285070021, "num_examples": 226100}], "download_size": 80645458, "dataset_size": 285070021}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:05:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep2_lrg"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep2_lrg\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep2_lrg\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_hard_curr_trn_ep2_lrg\"\n\nMore Information needed"
] |
20e4ea6c0723645b40730238ed24a892c135ee15
|
# Dataset Card for "xlmr_eval_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_eval_lrg
|
[
"region:us"
] |
2023-10-12T15:46:42+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 19341220, "num_examples": 11590}], "download_size": 5401187, "dataset_size": 19341220}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T15:46:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_eval_lrg"
More Information needed
|
[
"# Dataset Card for \"xlmr_eval_lrg\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_eval_lrg\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_eval_lrg\"\n\nMore Information needed"
] |
9c1ac7f4af47130e5b9a8abc6289c8b3be750a5a
|
A benchmark for LLMs, real tests for real people, and real use cases. THE benchmark for the ages.
|
qq67878980/Niggermaxxx_benchmark
|
[
"license:cc",
"region:us"
] |
2023-10-12T15:47:09+00:00
|
{"license": "cc"}
|
2023-10-12T16:03:43+00:00
|
[] |
[] |
TAGS
#license-cc #region-us
|
A benchmark for LLMs, real tests for real people, and real usecases. THE benchmark for the ages.
|
[] |
[
"TAGS\n#license-cc #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-cc #region-us \n"
] |
7da61aae0739327c83ae26c8630b5ca125c0d802
|
# Dataset Card for "AmbigNQ-clarifying-question"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erbacher/AmbigNQ-clarifying-question
|
[
"region:us"
] |
2023-10-12T15:57:59+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "index", "dtype": "int64"}, {"name": "clar", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "ambig", "dtype": "bool"}, {"name": "input_passage", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62693997.0, "num_examples": 10000}, {"name": "dev", "num_bytes": 6291036.0, "num_examples": 1001}, {"name": "test", "num_bytes": 64783344.0, "num_examples": 1000}], "download_size": 75095693, "dataset_size": 133768377.0}}
|
2023-10-12T15:58:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AmbigNQ-clarifying-question"
More Information needed
|
[
"# Dataset Card for \"AmbigNQ-clarifying-question\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AmbigNQ-clarifying-question\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AmbigNQ-clarifying-question\"\n\nMore Information needed"
] |
faa9000ef593172774fd20081a83a4006d59a0ba
|
Task: MCQ with multiple correct answers.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Given is a short `query` discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with multiple correct answers. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_sc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_sc) dataset. [`dataset_recommendation_mcq_sc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_sc) is a variant of this MCQ question-answering task with only a single correct answer.
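A minimal sketch of inspecting the data with the Hugging Face `datasets` library is given below; only the repo id is taken from this card, and the loop simply previews whatever columns the first split exposes rather than assuming a fixed schema.
```python
# Sketch only: the repo id comes from this card; the schema is discovered at runtime.
from datasets import load_dataset

ds = load_dataset("shrutisingh/dataset_recommendation_mcq_mc")
print(ds)  # available splits and their features

first_split = next(iter(ds))
example = ds[first_split][0]
for name, value in example.items():
    # short preview of every field, to learn the MCQ layout (query, candidates, answers, abstracts, ...)
    print(f"{name}: {str(value)[:120]}")
```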
|
shrutisingh/dataset_recommendation_mcq_mc
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-12T16:02:16+00:00
|
{"license": "apache-2.0"}
|
2023-10-12T16:15:59+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
Task: MCQ with multiple correct answers.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the DataFinder dataset. We curate the abstracts of each dataset from PapersWithCode.
Given is a short 'query' discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with multiple correct answers. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
To reproduce the construction of this dataset, please visit URL
Please note that the query instances in this dataset have no intersection with the 'dataset_recommendation_mcq_sc' dataset. 'dataset_recommendation_mcq_sc' is a variant of this MCQ question-answering task with only a single correct answer.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
b09a3ac38534322c6ae72d0ce1ec6eb917a20e7f
|
# Dataset Card for "plmn_instruct_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
csupiisc/plmn_instruct_5k
|
[
"region:us"
] |
2023-10-12T16:03:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1132554, "num_examples": 4000}, {"name": "test", "num_bytes": 282676, "num_examples": 1000}], "download_size": 229610, "dataset_size": 1415230}}
|
2023-10-12T16:04:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "plmn_instruct_5k"
More Information needed
|
[
"# Dataset Card for \"plmn_instruct_5k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"plmn_instruct_5k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"plmn_instruct_5k\"\n\nMore Information needed"
] |
74a5af1dafe5c5e8bfdb887932b2989bc96ccb61
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep2_corr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_hard_curr_trn_ep2_corr
|
[
"region:us"
] |
2023-10-12T16:04:10+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 285070021, "num_examples": 226100}], "download_size": 80645458, "dataset_size": 285070021}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:04:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep2_corr"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep2_corr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep2_corr\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_hard_curr_trn_ep2_corr\"\n\nMore Information needed"
] |
44c676952a08e1a1482bc891854602894152ec04
|
If you find our dataset useful, please consider citing:
<pre>
<code>
@inproceedings{li-etal-2023-synthetic,
title = "Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations",
author = "Li, Zhuoyan and Zhu, Hangxiao and Lu, Zhuoran and Yin, Ming",
editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.647",
doi = "10.18653/v1/2023.emnlp-main.647",
pages = "10443--10461",
}
</code>
</pre>
|
xfleezy/label_subjectivity_annotations
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-10-12T16:05:45+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"]}
|
2024-02-01T01:33:38+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #region-us
|
If you find our dataset useful, please consider citing:
<pre>
<code>
@inproceedings{li-etal-2023-synthetic,
title = "Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations",
author = "Li, Zhuoyan and Zhu, Hangxiao and Lu, Zhuoran and Yin, Ming",
editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "URL
doi = "10.18653/v1/URL-main.647",
pages = "10443--10461",
}
</code>
</pre>
|
[] |
[
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #region-us \n"
] |
[
38
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #region-us \n"
] |
0f9eaa95d9f39d5b3c234bd5879a455d33011b38
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep3_corr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_hard_curr_trn_ep3_corr
|
[
"region:us"
] |
2023-10-12T16:05:58+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 428103904, "num_examples": 339150}], "download_size": 121213275, "dataset_size": 428103904}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:06:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_hard_curr_trn_ep3_corr"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep3_corr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_hard_curr_trn_ep3_corr\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_hard_curr_trn_ep3_corr\"\n\nMore Information needed"
] |
42010ae2b56fbaaa59efa1693175fdbd63068677
|
DO NOT USE
Forked from https://huggingface.co/datasets/fewshot-goes-multilingual/cs_squad-3.0
# Dataset Card for Czech Simple Question Answering Dataset 2.0
This is a processed and filtered adaptation of an existing dataset. For the raw and larger dataset, see the `Dataset Source` section.
## Dataset Description
The data contains questions and answers based on Czech Wikipedia articles.
Each question has an answer (or more) and a selected part of the context as the evidence.
A majority of the answers are extractive - i.e. they are present in the context in the exact form. The remaining cases are
- yes/no questions
- answer is almost in the exact form present in the text, but the form of words was changed to suit the question (declension, ...)
- answered in own words (should be rare, but is not)
All questions in the dataset are answerable from the context. A small minority of questions have multiple answers.
Sometimes it means that any of them is correct (e.g. either "Pacifik" or "Tichý oceán" are correct terms for Pacific Ocean)
and sometimes it means that all of them together are a correct answer (e.g., Who was Leonardo da Vinci? ["painter", "engineer"])
Total number of examples is around:
- 6,250 in train
- 570 in validation
- 850 in test.
## Dataset Features
Each example contains:
- `item_id`: string id of the example
- `context`: "reasonably" big chunk (string) of wikipedia article that contains the answer
- `question`: string
- `answers`: list of all answers (string). mostly list of length 1
- `evidence_text`: substring of context (typically one sentence) that is sufficient to answer the question
- `evidence_start`: index in context, such that `context[evidence_start:evidence_end] == evidence_text`
- `evidence_end`: index in context
- `occurences`:
    list (of dictionaries) of occurrences of the answer(s) in the evidence.
Each answer was searched with word boundaries ("\b" in regex) and case-sensitive in the evidence.
If nothing found, try again but case-insensitive.
If nothing found, try again but case-sensitive without word boundaries.
If nothing found, try again but case-insensitive without word boundaries.
    This process should suppress "false positive" occurrences of the answer in the evidence.
- `start`: index in context
- `end`: index in context
- `text`: the answer looked for
- `url`: link to the wikipedia article
- `original_article`: original parsed wikipedia article from which the context is taken
- `question_type`: type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']
- `answer_type`: type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']
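A minimal usage sketch tying the fields above together is shown below; it assumes the split is named `validation`, and otherwise uses only the field names documented in this list, including the stated invariant `context[evidence_start:evidence_end] == evidence_text`.
```python
# Sketch only: the split name "validation" is an assumption; field names follow the list above.
from datasets import load_dataset

ds = load_dataset("jonasknobloch/cs_squad", split="validation")
example = ds[0]

ctx = example["context"]
start, end = example["evidence_start"], example["evidence_end"]
assert ctx[start:end] == example["evidence_text"]  # documented invariant

print("Question:", example["question"])
print("Evidence:", example["evidence_text"])
print("Answers: ", example["answers"])

for occ in example["occurences"]:
    # each occurrence records character offsets of an answer inside the context
    print("  matched answer:", ctx[occ["start"]:occ["end"]], "->", occ["text"])
```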
## Dataset Source
The dataset is a preprocessed adaptation of existing SQAD 3.0 dataset [link to data](https://lindat.cz/repository/xmlui/handle/11234/1-3069).
This adaptation contains (almost) the same data, but converted to a convenient format.
The data was also filtered to remove a statistical bias where the answer was contained
in the first sentence in the article (around 50% of all data in the original dataset, likely
caused by the data collection process).
## Citation
Cite authors of the [original dataset](https://lindat.cz/repository/xmlui/handle/11234/1-3069):
```bibtex
@misc{11234/1-3069,
title = {sqad 3.0},
author = {Medve{\v d}, Marek and Hor{\'a}k, Ale{\v s}},
url = {http://hdl.handle.net/11234/1-3069},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {{GNU} Library or "Lesser" General Public License 3.0 ({LGPL}-3.0)},
year = {2019}
}
```
|
jonasknobloch/cs_squad
|
[
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:cs",
"license:lgpl-3.0",
"czech QA",
"wikipedia QA",
"region:us"
] |
2023-10-12T16:16:09+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["cs"], "license": ["lgpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Czech Simple Question Answering Dataset", "tags": ["czech QA", "wikipedia QA"]}
|
2023-10-12T16:30:21+00:00
|
[] |
[
"cs"
] |
TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Czech #license-lgpl-3.0 #czech QA #wikipedia QA #region-us
|
DO NOT USE
Forked from URL
# Dataset Card for Czech Simple Question Answering Dataset 2.0
This is a processed and filtered adaptation of an existing dataset. For the raw and larger dataset, see the 'Dataset Source' section.
## Dataset Description
The data contains questions and answers based on Czech Wikipedia articles.
Each question has an answer (or more) and a selected part of the context as the evidence.
A majority of the answers are extractive - i.e. they are present in the context in the exact form. The remaining cases are
- yes/no questions
- answer is almost in the exact form present in the text, but the form of words was changed to suit the question (declension, ...)
- answered in own words (should be rare, but is not)
All questions in the dataset are answerable from the context. A small minority of questions have multiple answers.
Sometimes it means that any of them is correct (e.g. either "Pacifik" or "Tichý oceán" are correct terms for Pacific Ocean)
and sometimes it means that all of them together are a correct answer (e.g., Who was Leonardo da Vinci? ["painter", "engineer"])
Total number of examples is around:
- 6,250 in train
- 570 in validation
- 850 in test.
## Dataset Features
Each example contains:
- 'item_id': string id of the example
- 'context': "reasonably" big chunk (string) of wikipedia article that contains the answer
- 'question': string
- 'answers': list of all answers (string). mostly list of length 1
- 'evidence_text': substring of context (typically one sentence) that is sufficient to answer the question
- 'evidence_start': index in context, such that 'context[evidence_start:evidence_end] == evidence_text'
- 'evidence_end': index in context
- 'occurences':
    list (of dictionaries) of occurrences of the answer(s) in the evidence.
Each answer was searched with word boundaries ("\b" in regex) and case-sensitive in the evidence.
If nothing found, try again but case-insensitive.
If nothing found, try again but case-sensitive without word boundaries.
If nothing found, try again but case-insensitive without word boundaries.
    This process should suppress "false positive" occurrences of the answer in the evidence.
- 'start': index in context
- 'end': index in context
- 'text': the answer looked for
- 'url': link to the wikipedia article
- 'original_article': original parsed wikipedia article from which the context is taken
- 'question_type': type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']
- 'answer_type': type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']
## Dataset Source
The dataset is a preprocessed adaptation of existing SQAD 3.0 dataset link to data.
This adaptation contains (almost) the same data, but converted to a convenient format.
The data was also filtered to remove a statistical bias where the answer was contained
in the first sentence in the article (around 50% of all data in the original dataset, likely
caused by the data collection process).
Cite authors of the original dataset:
|
[
"# Dataset Card for Czech Simple Question Answering Dataset 2.0\n\nThis a processed and filtered adaptation of an existing dataset. For raw and larger dataset, see 'Dataset Source' section.",
"## Dataset Description\nThe data contains questions and answers based on Czech wikipeadia articles.\nEach question has an answer (or more) and a selected part of the context as the evidence.\nA majority of the answers are extractive - i.e. they are present in the context in the exact form. The remaining cases are\n\n- yes/no questions\n- answer is almost in the exact form present in the text, but the form of words was changed to suit the question (declension, ...)\n- answered in own words (should be rare, but is not)\n\nAll questions in the dataset are answerable from the context. Small minority of questions have multiple answers.\nSometimes it means that any of them is correct (e.g. either \"Pacifik\" or \"Tichý oceán\" are correct terms for Pacific Ocean)\nand sometimes it means that all of them together are a correct answer (e.g., Who was Leonardo da Vinci? [\"painter\", \"engineer\"])\n\nTotal number of examples is around:\n\n- 6,250 in train\n- 570 in validation\n- 850 in test.",
"## Dataset Features\nEach example contains:\n- 'item_id': string id of the\n- 'context': \"reasonably\" big chunk (string) of wikipedia article that contains the answer\n- 'question': string\n- 'answers': list of all answers (string). mostly list of length 1\n- 'evidence_text': substring of context (typically one sentence) that is sufficient to answer the question\n- 'evidence_start': index in context, such that 'context[evidence_start:evidence_end] == evidence_text'\n- 'evidence_end': index in context\n- 'occurences':\n list of (dictionaries) occurences of the answer(s) in the evidence.\n Each answer was searched with word boundaries (\"\\b\" in regex) and case-sensitive in the evidence.\n If nothing found, try again but case-insensitive.\n If nothing found, try again but case-sensitive without word boundaries.\n If nothing found, try again but case-insensitive without word boundaries.\n This process should supress \"false positive\" occurences of the answer in the evidence.\n - 'start': index in context\n - 'end': index in context\n - 'text': the answer looked for\n- 'url': link to the wikipedia article\n- 'original_article': original parsed wikipedia article from which the context is taken\n- 'question_type': type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']\n- 'answer_type': type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']",
"## Dataset Source\n\nThe dataset is a preprocessed adaptation of existing SQAD 3.0 dataset link to data.\nThis adaptation contains (almost) same data, but converted to a convenient format.\nThe data was also filtered to remove a statistical bias where the answer was contained\nin the first sentence in the article (around 50% of all data in the original dataset, likely\ncaused by the data collection process).\n\n\nCite authors of the original dataset:"
] |
[
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Czech #license-lgpl-3.0 #czech QA #wikipedia QA #region-us \n",
"# Dataset Card for Czech Simple Question Answering Dataset 2.0\n\nThis a processed and filtered adaptation of an existing dataset. For raw and larger dataset, see 'Dataset Source' section.",
"## Dataset Description\nThe data contains questions and answers based on Czech wikipeadia articles.\nEach question has an answer (or more) and a selected part of the context as the evidence.\nA majority of the answers are extractive - i.e. they are present in the context in the exact form. The remaining cases are\n\n- yes/no questions\n- answer is almost in the exact form present in the text, but the form of words was changed to suit the question (declension, ...)\n- answered in own words (should be rare, but is not)\n\nAll questions in the dataset are answerable from the context. Small minority of questions have multiple answers.\nSometimes it means that any of them is correct (e.g. either \"Pacifik\" or \"Tichý oceán\" are correct terms for Pacific Ocean)\nand sometimes it means that all of them together are a correct answer (e.g., Who was Leonardo da Vinci? [\"painter\", \"engineer\"])\n\nTotal number of examples is around:\n\n- 6,250 in train\n- 570 in validation\n- 850 in test.",
"## Dataset Features\nEach example contains:\n- 'item_id': string id of the\n- 'context': \"reasonably\" big chunk (string) of wikipedia article that contains the answer\n- 'question': string\n- 'answers': list of all answers (string). mostly list of length 1\n- 'evidence_text': substring of context (typically one sentence) that is sufficient to answer the question\n- 'evidence_start': index in context, such that 'context[evidence_start:evidence_end] == evidence_text'\n- 'evidence_end': index in context\n- 'occurences':\n list of (dictionaries) occurences of the answer(s) in the evidence.\n Each answer was searched with word boundaries (\"\\b\" in regex) and case-sensitive in the evidence.\n If nothing found, try again but case-insensitive.\n If nothing found, try again but case-sensitive without word boundaries.\n If nothing found, try again but case-insensitive without word boundaries.\n This process should supress \"false positive\" occurences of the answer in the evidence.\n - 'start': index in context\n - 'end': index in context\n - 'text': the answer looked for\n- 'url': link to the wikipedia article\n- 'original_article': original parsed wikipedia article from which the context is taken\n- 'question_type': type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']\n- 'answer_type': type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']",
"## Dataset Source\n\nThe dataset is a preprocessed adaptation of existing SQAD 3.0 dataset link to data.\nThis adaptation contains (almost) same data, but converted to a convenient format.\nThe data was also filtered to remove a statistical bias where the answer was contained\nin the first sentence in the article (around 50% of all data in the original dataset, likely\ncaused by the data collection process).\n\n\nCite authors of the original dataset:"
] |
[
104,
43,
239,
479,
104
] |
[
"passage: TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Czech #license-lgpl-3.0 #czech QA #wikipedia QA #region-us \n# Dataset Card for Czech Simple Question Answering Dataset 2.0\n\nThis a processed and filtered adaptation of an existing dataset. For raw and larger dataset, see 'Dataset Source' section.## Dataset Description\nThe data contains questions and answers based on Czech wikipeadia articles.\nEach question has an answer (or more) and a selected part of the context as the evidence.\nA majority of the answers are extractive - i.e. they are present in the context in the exact form. The remaining cases are\n\n- yes/no questions\n- answer is almost in the exact form present in the text, but the form of words was changed to suit the question (declension, ...)\n- answered in own words (should be rare, but is not)\n\nAll questions in the dataset are answerable from the context. Small minority of questions have multiple answers.\nSometimes it means that any of them is correct (e.g. either \"Pacifik\" or \"Tichý oceán\" are correct terms for Pacific Ocean)\nand sometimes it means that all of them together are a correct answer (e.g., Who was Leonardo da Vinci? [\"painter\", \"engineer\"])\n\nTotal number of examples is around:\n\n- 6,250 in train\n- 570 in validation\n- 850 in test."
] |
ec5f4e85e80884361ae6f5e0cae643db3e30566f
|
# Dataset Card for "camel_ai_physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/camel_ai_physics
|
[
"region:us"
] |
2023-10-12T16:17:30+00:00
|
{"dataset_info": {"features": [{"name": "role_1", "dtype": "string"}, {"name": "topic;", "dtype": "string"}, {"name": "sub_topic", "dtype": "string"}, {"name": "message_1", "dtype": "string"}, {"name": "message_2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51650490, "num_examples": 20000}], "download_size": 18889012, "dataset_size": 51650490}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:17:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "camel_ai_physics"
More Information needed
|
[
"# Dataset Card for \"camel_ai_physics\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"camel_ai_physics\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"camel_ai_physics\"\n\nMore Information needed"
] |
8696c0ba20cca4ed2da9395223b392b8e44a76ab
|
# Dataset Card for "camel_ai_chemistry"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/camel_ai_chemistry
|
[
"region:us"
] |
2023-10-12T16:22:15+00:00
|
{"dataset_info": {"features": [{"name": "role_1", "dtype": "string"}, {"name": "topic;", "dtype": "string"}, {"name": "sub_topic", "dtype": "string"}, {"name": "message_1", "dtype": "string"}, {"name": "message_2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47000178, "num_examples": 20000}], "download_size": 16918940, "dataset_size": 47000178}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:22:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "camel_ai_chemistry"
More Information needed
|
[
"# Dataset Card for \"camel_ai_chemistry\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"camel_ai_chemistry\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"camel_ai_chemistry\"\n\nMore Information needed"
] |
0d68a5c3191f0909c1d3ea3eb0ebd73df0555e66
|
# Dataset Card for "camel_ai_biology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/camel_ai_biology
|
[
"region:us"
] |
2023-10-12T16:26:35+00:00
|
{"dataset_info": {"features": [{"name": "role_1", "dtype": "string"}, {"name": "topic;", "dtype": "string"}, {"name": "sub_topic", "dtype": "string"}, {"name": "message_1", "dtype": "string"}, {"name": "message_2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 61275986, "num_examples": 20000}], "download_size": 22376128, "dataset_size": 61275986}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:27:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "camel_ai_biology"
More Information needed
|
[
"# Dataset Card for \"camel_ai_biology\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"camel_ai_biology\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"camel_ai_biology\"\n\nMore Information needed"
] |
190df0467a16579f5280f3cd2a236b2e4c785e4c
|
# Dataset Card for Evaluation run of Sao10K/Euryale-1.3-L2-70B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Sao10K/Euryale-1.3-L2-70B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B",
"harness_winogrande_5",
split="train")
```
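The aggregated metrics mentioned above live in the "results" configuration; a minimal sketch for reading them, assuming the "latest" split points at the most recent run as described in this card:
```python
from datasets import load_dataset

# Aggregated metrics for the run; "latest" always points to the most recent results (per this card).
results = load_dataset("open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B",
                       "results",
                       split="latest")
print(results[0])  # the exact row layout may differ; this just dumps the stored record
```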
## Latest results
These are the [latest results from run 2023-10-26T00:11:50.324232](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B/blob/main/results_2023-10-26T00-11-50.324232.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.5388003355704698,
"em_stderr": 0.005105027329360947,
"f1": 0.6009920302013437,
"f1_stderr": 0.004740248039821831,
"acc": 0.5849328585370874,
"acc_stderr": 0.011836910620214903
},
"harness|drop|3": {
"em": 0.5388003355704698,
"em_stderr": 0.005105027329360947,
"f1": 0.6009920302013437,
"f1_stderr": 0.004740248039821831
},
"harness|gsm8k|5": {
"acc": 0.3419257012888552,
"acc_stderr": 0.013066089625182799
},
"harness|winogrande|5": {
"acc": 0.8279400157853196,
"acc_stderr": 0.010607731615247007
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B
|
[
"region:us"
] |
2023-10-12T16:36:47+00:00
|
{"pretty_name": "Evaluation run of Sao10K/Euryale-1.3-L2-70B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T00:11:50.324232](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B/blob/main/results_2023-10-26T00-11-50.324232.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.5388003355704698,\n \"em_stderr\": 0.005105027329360947,\n \"f1\": 0.6009920302013437,\n \"f1_stderr\": 0.004740248039821831,\n \"acc\": 0.5849328585370874,\n \"acc_stderr\": 0.011836910620214903\n },\n \"harness|drop|3\": {\n \"em\": 0.5388003355704698,\n \"em_stderr\": 0.005105027329360947,\n \"f1\": 0.6009920302013437,\n \"f1_stderr\": 0.004740248039821831\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3419257012888552,\n \"acc_stderr\": 0.013066089625182799\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8279400157853196,\n \"acc_stderr\": 0.010607731615247007\n }\n}\n```", "repo_url": "https://huggingface.co/Sao10K/Euryale-1.3-L2-70B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|arc:challenge|25_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T00_11_50.324232", "path": ["**/details_harness|drop|3_2023-10-26T00-11-50.324232.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T00-11-50.324232.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T00_11_50.324232", "path": ["**/details_harness|gsm8k|5_2023-10-26T00-11-50.324232.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T00-11-50.324232.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hellaswag|10_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": 
"2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T17-36-24.431746.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T17-36-24.431746.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T00_11_50.324232", "path": ["**/details_harness|winogrande|5_2023-10-26T00-11-50.324232.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T00-11-50.324232.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T17_36_24.431746", "path": ["results_2023-10-12T17-36-24.431746.parquet"]}, {"split": "2023_10_26T00_11_50.324232", "path": ["results_2023-10-26T00-11-50.324232.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T00-11-50.324232.parquet"]}]}]}
|
2023-10-25T23:12:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Sao10K/Euryale-1.3-L2-70B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Sao10K/Euryale-1.3-L2-70B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
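A minimal sketch with the `datasets` library; the repository name below is inferred from the usual naming pattern of these evaluation-detail datasets and should be treated as an assumption:

```python
from datasets import load_dataset

# Assumed repository name (open-llm-leaderboard/details_<org>__<model>);
# "harness_winogrande_5" is one of the 64 task configurations.
data = load_dataset(
    "open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B",
    "harness_winogrande_5",
    split="train",
)
```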
## Latest results
These are the latest results from run 2023-10-26T00:11:50.324232 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Sao10K/Euryale-1.3-L2-70B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sao10K/Euryale-1.3-L2-70B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T00:11:50.324232(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Sao10K/Euryale-1.3-L2-70B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sao10K/Euryale-1.3-L2-70B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T00:11:50.324232(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Sao10K/Euryale-1.3-L2-70B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sao10K/Euryale-1.3-L2-70B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T00:11:50.324232(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fc2acc216796c4e4e021e6e0f4f1ece7061267ea
|
# Dataset Card for "nmt-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
psyche/nmt-sample
|
[
"region:us"
] |
2023-10-12T16:52:33+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "source_language", "dtype": "string"}, {"name": "target_language", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 988, "num_examples": 3}], "download_size": 5473, "dataset_size": 988}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T16:52:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nmt-sample"
More Information needed
|
[
"# Dataset Card for \"nmt-sample\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nmt-sample\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nmt-sample\"\n\nMore Information needed"
] |
35aedc29d29117154a1cfd4150313d2402b20228
|
# Dataset Card for "hdbscan_embeddings_bert-base-portuguese-cased-nli-assin-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iara-project/hdbscan_embeddings_bert-base-portuguese-cased-nli-assin-2
|
[
"region:us"
] |
2023-10-12T17:19:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "news_id", "dtype": "string"}, {"name": "embeddings", "sequence": "float64"}, {"name": "sentence", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 207438550, "num_examples": 21933}, {"name": "test", "num_bytes": 207385703, "num_examples": 21933}], "download_size": 308794977, "dataset_size": 414824253}}
|
2023-10-12T17:19:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hdbscan_embeddings_bert-base-portuguese-cased-nli-assin-2"
More Information needed
|
[
"# Dataset Card for \"hdbscan_embeddings_bert-base-portuguese-cased-nli-assin-2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hdbscan_embeddings_bert-base-portuguese-cased-nli-assin-2\"\n\nMore Information needed"
] |
[
6,
35
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hdbscan_embeddings_bert-base-portuguese-cased-nli-assin-2\"\n\nMore Information needed"
] |
291439678cc6ee627ee135763a3e49e7e5336eb0
|
# Dataset Card for "textbook-codex"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crumb/textbook-codex
|
[
"region:us"
] |
2023-10-12T17:37:01+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "src", "dtype": "string"}, {"name": "src_col", "dtype": "string"}, {"name": "model", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12286698438.0, "num_examples": 3593574}], "download_size": 5707800000, "dataset_size": 12286698438.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T20:49:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "textbook-codex"
More Information needed
|
[
"# Dataset Card for \"textbook-codex\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"textbook-codex\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"textbook-codex\"\n\nMore Information needed"
] |
b59d5ee2938ff00bbaf9e2d97e0525419e9f188b
|
This dataset is composed of the articles found on the following news portals:
- <a href="https://anovademocracia.com.br">A Nova Democracia</a>
- <a href="https://averdade.org.br">A verdade</a>
- <a href="https://www.brasildefato.com.br">Brasil de fato</a>
- <a href="https://mst.org.br/conteudo/noticias">Jornal MST</a>
- <a href="https://operamundi.uol.com.br">Opera Mundi</a>
- <a href="https://revistaopera.com.br">Revista Opera</a>
Each folder inside the "artigos-extraidos.zip" archive contains the articles themselves, although not yet cleaned.
The "br-news-prototype-dataset.json" file is a JSON containing all the articles concatenated and split into chunks, which were used to train the latest version of the "br-news-prototype" model, created on 16/09/2023.
|
chenuneris/news-brazillian-clean
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-12T17:41:13+00:00
|
{"license": "apache-2.0"}
|
2023-10-13T18:08:20+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
This dataset is composed of the articles found on the following news portals:
- <a href="URL">A Nova Democracia</a>
- <a href="URL">A verdade</a>
- <a href="URL">Brasil de fato</a>
- <a href="URL MST</a>
- <a href="URL">Opera Mundi</a>
- <a href="URL">Revista Opera</a>
Each folder inside the "URL" archive contains the articles themselves, although not yet cleaned.
The "URL" file is a JSON containing all the articles concatenated and split into chunks, which were used to train the latest version of the "br-news-prototype" model, created on 16/09/2023.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
7a93f21306c7c6c4323dda53ae36b3ae81651da7
|
# Dataset Card for Evaluation run of pe-nlp/llama-2-13b-vicuna-wizard
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/pe-nlp/llama-2-13b-vicuna-wizard
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [pe-nlp/llama-2-13b-vicuna-wizard](https://huggingface.co/pe-nlp/llama-2-13b-vicuna-wizard) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-12T18:46:39.910816](https://huggingface.co/datasets/open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard/blob/main/results_2023-10-12T18-46-39.910816.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3976510067114094,
"em_stderr": 0.0050120430065395205,
"f1": 0.43937709731543745,
"f1_stderr": 0.004888666829236633,
"acc": 0.3794502424345056,
"acc_stderr": 0.007394168076612409
},
"harness|drop|3": {
"em": 0.3976510067114094,
"em_stderr": 0.0050120430065395205,
"f1": 0.43937709731543745,
"f1_stderr": 0.004888666829236633
},
"harness|gsm8k|5": {
"acc": 0.009097801364670205,
"acc_stderr": 0.0026153265107756725
},
"harness|winogrande|5": {
"acc": 0.749802683504341,
"acc_stderr": 0.012173009642449144
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard
|
[
"region:us"
] |
2023-10-12T17:46:43+00:00
|
{"pretty_name": "Evaluation run of pe-nlp/llama-2-13b-vicuna-wizard", "dataset_summary": "Dataset automatically created during the evaluation run of model [pe-nlp/llama-2-13b-vicuna-wizard](https://huggingface.co/pe-nlp/llama-2-13b-vicuna-wizard) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-12T18:46:39.910816](https://huggingface.co/datasets/open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard/blob/main/results_2023-10-12T18-46-39.910816.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3976510067114094,\n \"em_stderr\": 0.0050120430065395205,\n \"f1\": 0.43937709731543745,\n \"f1_stderr\": 0.004888666829236633,\n \"acc\": 0.3794502424345056,\n \"acc_stderr\": 0.007394168076612409\n },\n \"harness|drop|3\": {\n \"em\": 0.3976510067114094,\n \"em_stderr\": 0.0050120430065395205,\n \"f1\": 0.43937709731543745,\n \"f1_stderr\": 0.004888666829236633\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009097801364670205,\n \"acc_stderr\": 0.0026153265107756725\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.749802683504341,\n \"acc_stderr\": 0.012173009642449144\n }\n}\n```", "repo_url": "https://huggingface.co/pe-nlp/llama-2-13b-vicuna-wizard", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_12T18_46_39.910816", "path": ["**/details_harness|drop|3_2023-10-12T18-46-39.910816.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-12T18-46-39.910816.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_12T18_46_39.910816", "path": ["**/details_harness|gsm8k|5_2023-10-12T18-46-39.910816.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-12T18-46-39.910816.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_12T18_46_39.910816", "path": ["**/details_harness|winogrande|5_2023-10-12T18-46-39.910816.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-12T18-46-39.910816.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T18_46_39.910816", "path": ["results_2023-10-12T18-46-39.910816.parquet"]}, {"split": "latest", "path": ["results_2023-10-12T18-46-39.910816.parquet"]}]}]}
|
2023-10-12T17:46:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of pe-nlp/llama-2-13b-vicuna-wizard
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model pe-nlp/llama-2-13b-vicuna-wizard on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
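A minimal example with the `datasets` library (the repository and configuration names are taken from the card's configuration listing):

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard",
    "harness_winogrande_5",
    split="train",
)
```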
## Latest results
These are the latest results from run 2023-10-12T18:46:39.910816 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of pe-nlp/llama-2-13b-vicuna-wizard",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model pe-nlp/llama-2-13b-vicuna-wizard on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-12T18:46:39.910816(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of pe-nlp/llama-2-13b-vicuna-wizard",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model pe-nlp/llama-2-13b-vicuna-wizard on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-12T18:46:39.910816(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of pe-nlp/llama-2-13b-vicuna-wizard## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model pe-nlp/llama-2-13b-vicuna-wizard on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-12T18:46:39.910816(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0e94eedec406a1c99904f3b5dc286fd10db86452
|
# Dataset Card for "xlmr_test_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_test_lrg
|
[
"region:us"
] |
2023-10-12T17:48:23+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2363484390, "num_examples": 900000}], "download_size": 754844875, "dataset_size": 2363484390}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T17:50:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_test_lrg"
More Information needed
|
[
"# Dataset Card for \"xlmr_test_lrg\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_test_lrg\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_test_lrg\"\n\nMore Information needed"
] |
63c00fa11f661d939a266a3e0e10efc6361d36e6
|
# Dataset Card for "formal-logic-simple-order-simple-objects-clavorier-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pccl-org/formal-logic-simple-order-simple-objects-clavorier-500
|
[
"region:us"
] |
2023-10-12T17:56:06+00:00
|
{"dataset_info": {"features": [{"name": "greater_than", "dtype": "string"}, {"name": "less_than", "dtype": "string"}, {"name": "correct_example", "sequence": "string"}, {"name": "incorrect_example", "sequence": "string"}, {"name": "distance", "dtype": "int64"}, {"name": "index", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 19386150, "num_examples": 124750}], "download_size": 0, "dataset_size": 19386150}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T18:22:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "formal-logic-simple-order-simple-objects-clavorier-500"
More Information needed
|
[
"# Dataset Card for \"formal-logic-simple-order-simple-objects-clavorier-500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"formal-logic-simple-order-simple-objects-clavorier-500\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"formal-logic-simple-order-simple-objects-clavorier-500\"\n\nMore Information needed"
] |
ba074b34dff0b89f40c2340c183727de88e099bf
|
# Dataset Card for "DDI2013_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/DDI2013_train
|
[
"region:us"
] |
2023-10-12T18:18:42+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38149976, "num_examples": 18779}, {"name": "valid", "num_bytes": 16261433, "num_examples": 7244}, {"name": "test", "num_bytes": 11943181, "num_examples": 5761}], "download_size": 12129710, "dataset_size": 66354590}}
|
2023-12-23T17:16:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "DDI2013_train"
More Information needed
|
[
"# Dataset Card for \"DDI2013_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"DDI2013_train\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"DDI2013_train\"\n\nMore Information needed"
] |
673b69afe52260693f25a1f639c65c73610cc3a7
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 0
## inverse_template: 0
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt00_1
|
[
"region:us"
] |
2023-10-12T19:16:46+00:00
|
{}
|
2023-10-12T19:16:58+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 0
## inverse_template: 0
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 0",
"## inverse_template: 0"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 0",
"## inverse_template: 0"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 0## inverse_template: 0"
] |
ddd5f55a7d807bde9db1b364db657545bc0086f3
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 0
## inverse_template: 1
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt01_1
|
[
"region:us"
] |
2023-10-12T19:57:24+00:00
|
{}
|
2023-10-12T19:57:37+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 0
## inverse_template: 1
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 0",
"## inverse_template: 1"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 0",
"## inverse_template: 1"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 0## inverse_template: 1"
] |
136254e7e4b6560051364be0234ad671717534d2
|
# Dataset Card for "irish-tunes-spectrograms"
## 1. Dataset Description
Dataset is used for the following project
- **Homepage:** [Trad-fusion](https://github.com/hdparmar/Tradi-fusion)
### 1.1 Dataset Summary
This dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image has dimensions of 512x512 and includes 3 channels (mimicking RGB) because most text-to-image models are trained on 3 channels.
That said, I can find publications which say that having 3 channels for a mel spectrogram can improve generalisation, even though the other 2 channels are just copies of the first.
The simple trick I used is to convert the grayscale spectrogram into RGB with cv2, since most of the models are trained on 3 channels.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
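A minimal sketch of that conversion step, assuming OpenCV is installed; the file names are illustrative:

```python
import cv2

# Load a single-channel (grayscale) mel spectrogram image.
gray = cv2.imread("mel_spectrogram.png", cv2.IMREAD_GRAYSCALE)

# Replicate the single channel into 3 identical channels (RGB-like),
# matching what 3-channel text-to-image models expect.
rgb = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)

cv2.imwrite("mel_spectrogram_rgb.png", rgb)
```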
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format:
```
{"file_name": "path/to/the/image.png",
"text": "Irish Traditional Tune"}
```
### 2.2 Data Fields
- **file_name**: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- **text**: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "Irish Traditional Tune." This consistency can perhaps be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information to follow; the same caption is used for all the mel spectrograms for ease of the work put into producing the dataset.
|
hdparmar/irish-tunes-spectrograms
|
[
"task_categories:text-to-image",
"task_categories:text-to-audio",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-10-12T20:06:15+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image", "text-to-audio"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16031533765.152, "num_examples": 51217}], "download_size": 15902802902, "dataset_size": 16031533765.152}}
|
2023-10-15T01:37:32+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-to-image #task_categories-text-to-audio #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
# Dataset Card for "irish-tunes-spectrograms"
## 1. Dataset Description
Dataset is used for the following project
- Homepage: Trad-fusion
### 1.1 Dataset Summary
This dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image has dimensions of 512x512 and includes 3 channels (mimicking RGB) because most text-to-image models are trained on 3 channels.
That said, I can find publications which say that having 3 channels for a mel spectrogram can improve generalisation, even though the other 2 channels are just copies of the first.
The simple trick I used is to convert the grayscale spectrogram into RGB with cv2, since most of the models are trained on 3 channels.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The URL file of the dataset is in this format
### 2.2 Data Fields
- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "Irish Traditional Tune." This consistency can perhaps be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information to follow; the same caption is used for all the mel spectrograms for ease of the work put into producing the dataset.
|
[
"# Dataset Card for \"irish-tunes-spectrograms\"",
"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion",
"### 1.1 Dataset Summary\nThis dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image is of the dimensions 512x512 and includes 3 channels (mimicking, RGB) because most of the text-to-image models are trained on 3 channels. \nAlthough, I can find publications which says that having 3 channels for Mel Spectrogram can improve generalisation, since the other 2 channel are just the copy of first.\nThe simple trick I used is to use cv2 to convert a grayscale into RGB, since most of the models are trained on 3 channels.\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.",
"### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.",
"## 2. Dataset Structure",
"### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.",
"#### Example:\nThe URL file the dataset is in this format",
"### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"Irish Traditional Tune.\"",
"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.",
"### 2.4 Uniform Captions: A Special Note\nAll the spectrograms in this dataset come labeled with a uniform caption: \"Irish Traditional Tune.\" This consistency can be perhaps advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.",
"## NOTE\nFurthur imformation to follow and same caption for all the mel-spectrograms are for ease of work put into producing the dataset"
] |
[
"TAGS\n#task_categories-text-to-image #task_categories-text-to-audio #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for \"irish-tunes-spectrograms\"",
"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion",
"### 1.1 Dataset Summary\nThis dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image is of the dimensions 512x512 and includes 3 channels (mimicking, RGB) because most of the text-to-image models are trained on 3 channels. \nAlthough, I can find publications which says that having 3 channels for Mel Spectrogram can improve generalisation, since the other 2 channel are just the copy of first.\nThe simple trick I used is to use cv2 to convert a grayscale into RGB, since most of the models are trained on 3 channels.\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.",
"### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.",
"## 2. Dataset Structure",
"### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.",
"#### Example:\nThe URL file the dataset is in this format",
"### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"Irish Traditional Tune.\"",
"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.",
"### 2.4 Uniform Captions: A Special Note\nAll the spectrograms in this dataset come labeled with a uniform caption: \"Irish Traditional Tune.\" This consistency can be perhaps advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.",
"## NOTE\nFurthur imformation to follow and same caption for all the mel-spectrograms are for ease of work put into producing the dataset"
] |
[
55,
17,
20,
170,
30,
7,
65,
15,
80,
49,
76,
34
] |
[
"passage: TAGS\n#task_categories-text-to-image #task_categories-text-to-audio #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"irish-tunes-spectrograms\"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion### 1.1 Dataset Summary\nThis dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image is of the dimensions 512x512 and includes 3 channels (mimicking, RGB) because most of the text-to-image models are trained on 3 channels. \nAlthough, I can find publications which says that having 3 channels for Mel Spectrogram can improve generalisation, since the other 2 channel are just the copy of first.\nThe simple trick I used is to use cv2 to convert a grayscale into RGB, since most of the models are trained on 3 channels.\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.## 2. Dataset Structure### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.#### Example:\nThe URL file the dataset is in this format### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"Irish Traditional Tune.\""
] |
e0a5f145ecb45dc4532f99aabc7c57ed9600b43b
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
|
ostapeno/platy_icl5_subset1.0_maxD1000000_3
|
[
"region:us"
] |
2023-10-12T20:07:03+00:00
|
{}
|
2023-10-12T20:07:18+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10"
] |
[
6,
9,
10,
5,
9,
14,
13,
12,
27,
7
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## subset: 1.0## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10"
] |
714d6ac02ab12be5147e1db8f5bda9a2d681f11c
|
## Created By Maruti.io
This dataset contains prompts to create SQL queries, and completions to those prompts.
|
Maruti-IO/Generate_SQL
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-12T20:20:03+00:00
|
{"license": "apache-2.0"}
|
2023-10-13T01:26:15+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
## Created By URL
This dataset contains prompts to create SQL queries, and completions to those prompts.
|
[
"## Created By URL\nThis dataset prompts to create SQL Queries, and completions to those prompts."
] |
[
"TAGS\n#license-apache-2.0 #region-us \n",
"## Created By URL\nThis dataset prompts to create SQL Queries, and completions to those prompts."
] |
[
14,
24
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n## Created By URL\nThis dataset prompts to create SQL Queries, and completions to those prompts."
] |
8453bb4ed54b77fc8567055c23a1769d3a2b020f
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 0
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt10_1
|
[
"region:us"
] |
2023-10-12T20:21:42+00:00
|
{}
|
2023-10-12T20:21:55+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 0
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 1",
"## inverse_template: 0"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 1",
"## inverse_template: 0"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 1## inverse_template: 0"
] |
c38167e7923aa9efdd2c492e158cc72672197cb0
|
# Medical Dataset for ASR
The dataset is a subset taken from [The MedDialog dataset](https://huggingface.co/datasets/medical_dialog). We used only icliniq_dialogue.txt and applied the following preprocessing (a rough sketch of these steps is shown below):
- Remove all chars except for [a-z|A-Z|0-9|,|.].
- Break each conversation into rows of 32 to 35 words.
- Remove Duplication.
- Fix typos using a GPT-3 instruct model.
- Used Suno/Bark to create ~15K audio clips with different voices [*In Progress*]
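The snippet below is only a rough sketch of how the cleaning and chunking steps above could be implemented: the character whitelist and the 35-word cap approximate the rules described in the list, while the file name, function names, and helper logic are illustrative assumptions rather than part of the released pipeline.

```python
import re

def clean_text(text: str) -> str:
    # Keep only a-z, A-Z, 0-9, commas, periods and spaces, as described above.
    return re.sub(r"[^a-zA-Z0-9,. ]+", " ", text)

def chunk_words(text: str, max_words: int = 35):
    # Break a cleaned conversation into rows of at most ~35 words.
    words = clean_text(text).split()
    for start in range(0, len(words), max_words):
        yield " ".join(words[start:start + max_words])

# Illustrative usage on the source dialogue file; duplicates are dropped while preserving order.
with open("icliniq_dialogue.txt", encoding="utf-8") as f:
    rows = list(dict.fromkeys(chunk_words(f.read())))
```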
#### Note:
- We are expecting about 45 hours of medical audio clips.
- The dataset will be released soon; for any inquiries, please contact me at [email protected].
|
Hani89/Medical_ASR_45HRs
|
[
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] |
2023-10-12T20:22:48+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["automatic-speech-recognition"], "tags": ["medical"]}
|
2023-10-13T00:38:01+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #size_categories-10K<n<100K #language-English #license-apache-2.0 #medical #region-us
|
# Medical Dataset for ASR
The dataset is a subset taken from The MedDialog dataset. We used only icliniq_dialogue.txt and applied the following preprocessing:
- Remove all chars except for [a-z|A-Z|0-9|,|.].
- Break each conversation into rows of 32 to 35 words.
- Remove Duplication.
- Fix typos using a GPT-3 instruct model.
- Used Suno/Bark to create ~15K audio clips with different voices [*In Progress*]
#### Note:
- We are expecting about 45 hours of medical audio clips.
- The dataset will be released soon; for any inquiries, please contact me at hmthubaiti@URL.
|
[
"# Medical Dataset for ASR\nThe dataset is a part taken from The MedDialog dataset. We used only icliniq_dialogue.txt and done some preprocessing:\n- Remove all chars except for [a-z|A-Z|0-9|,|.].\n- Break each conversation into rows of 32 to 35 words.\n- Remove Duplication.\n- Fix typos using GPT-3 instructons' model.\n- Used Suno/Bark to create ~15K audio clips with different voices [*In Progress*]",
"#### Note:\n- We are expecting about ~45 hours of medical audio clips.\n- The dataset will be released soon, for any inqueries please contact me on(hmthubaiti@URL)"
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-10K<n<100K #language-English #license-apache-2.0 #medical #region-us \n",
"# Medical Dataset for ASR\nThe dataset is a part taken from The MedDialog dataset. We used only icliniq_dialogue.txt and done some preprocessing:\n- Remove all chars except for [a-z|A-Z|0-9|,|.].\n- Break each conversation into rows of 32 to 35 words.\n- Remove Duplication.\n- Fix typos using GPT-3 instructons' model.\n- Used Suno/Bark to create ~15K audio clips with different voices [*In Progress*]",
"#### Note:\n- We are expecting about ~45 hours of medical audio clips.\n- The dataset will be released soon, for any inqueries please contact me on(hmthubaiti@URL)"
] |
[
49,
125,
45
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #size_categories-10K<n<100K #language-English #license-apache-2.0 #medical #region-us \n# Medical Dataset for ASR\nThe dataset is a part taken from The MedDialog dataset. We used only icliniq_dialogue.txt and done some preprocessing:\n- Remove all chars except for [a-z|A-Z|0-9|,|.].\n- Break each conversation into rows of 32 to 35 words.\n- Remove Duplication.\n- Fix typos using GPT-3 instructons' model.\n- Used Suno/Bark to create ~15K audio clips with different voices [*In Progress*]#### Note:\n- We are expecting about ~45 hours of medical audio clips.\n- The dataset will be released soon, for any inqueries please contact me on(hmthubaiti@URL)"
] |
473edf9046f642bb7b96e2f0deb7acaeaf2dc0e0
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 1
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt11_1
|
[
"region:us"
] |
2023-10-12T20:39:42+00:00
|
{}
|
2023-10-12T20:39:55+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 1
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 1",
"## inverse_template: 1"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 1",
"## inverse_template: 1"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 1## inverse_template: 1"
] |
03f036d041682bf5820901d91f964d6dcefd527a
|
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of [model cards](https://huggingface.co/docs/hub/model-cards) for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more.
This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new [discussion](https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/discussions/new).
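As a quick, non-authoritative example of working with the dump, the sketch below loads the single `train` split with the `datasets` library and filters on one of the metadata columns; the column names used here are taken from this dataset's feature list.

```python
from datasets import load_dataset

# Load the daily dump of model cards plus Hub metadata (single "train" split).
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Example analysis: keep only cards for repositories using the `transformers` library.
transformers_cards = ds.filter(lambda row: row["library_name"] == "transformers")
print(len(transformers_cards), transformers_cards[0]["modelId"])
```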
## Dataset Details
### Dataset Description
- **Curated by:** Daniel van Strien
- **Language(s) (NLP):** Model cards on the Hugging Face Hub are predominantly in English but may include other languages.
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
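For the alternative route mentioned above, a minimal sketch using the `huggingface_hub` client might look like the following; the model ids fetched are simply whatever the Hub returns first, and nothing in the snippet is specific to this dataset.

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()
# Pull a few model ids from the Hub and load their cards directly.
for model in api.list_models(limit=5):
    try:
        card = ModelCard.load(model.id)
        print(model.id, len(card.text))
    except Exception:
        # Some repositories do not have a README.md / model card.
        continue
```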
### Source Data
The source data is `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
N/A
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards.
Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
[@davanstrien](https://huggingface.co/davanstrien)
## Dataset Card Contact
[@davanstrien](https://huggingface.co/davanstrien)
|
librarian-bots/model_cards_with_metadata
|
[
"task_categories:text-retrieval",
"size_categories:100K<n<1M",
"ethics",
"region:us"
] |
2023-10-12T20:50:53+00:00
|
{"size_categories": ["100K<n<1M"], "task_categories": ["text-retrieval"], "pretty_name": "Hugging Face Hub Model Cards", "dataset_info": {"features": [{"name": "modelId", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "last_modified", "dtype": "timestamp[us, tz=UTC]"}, {"name": "downloads", "dtype": "int64"}, {"name": "likes", "dtype": "int64"}, {"name": "library_name", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "pipeline_tag", "dtype": "string"}, {"name": "createdAt", "dtype": "timestamp[us, tz=UTC]"}, {"name": "card", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 882676195, "num_examples": 508763}], "download_size": 286032779, "dataset_size": 882676195}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["ethics"]}
|
2024-02-17T02:57:17+00:00
|
[] |
[] |
TAGS
#task_categories-text-retrieval #size_categories-100K<n<1M #ethics #region-us
|
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more.
This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
### Dataset Description
- Curated by: Daniel van Strien
- Language(s) (NLP): Model cards on the Hugging Face Hub are predominantly in English but may include other languages.
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
### Out-of-Scope Use
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is 'URL' files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards.
Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
@davanstrien
## Dataset Card Contact
@davanstrien
|
[
"# Dataset Card for Hugging Face Hub Model Cards\n\nThis datasets consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. \nThis dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.\n\nThis dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.",
"## Dataset Details",
"### Dataset Description\n\n\n- Curated by: Daniel van Strien\n- Language(s) (NLP): Model cards on the Hugging Face Hub are predominantly in English but may include other languages.",
"## Uses\n\nThere are a number of potential uses for this dataset including:\n- text mining to find common themes in model cards\n- analysis of the model card format/content\n- topic modelling of model cards\n- analysis of the model card metadata\n- training language models on model cards",
"### Out-of-Scope Use",
"## Dataset Structure\n\nThis dataset has a single split.",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nThe dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.",
"### Source Data\n\nThe source data is 'URL' files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.",
"#### Data Collection and Processing\n\n\n\nThe data is downloaded using a CRON job on a daily basis.",
"#### Who are the source data producers?\n\nThe source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.",
"### Annotations [optional]\n\nThere are no additional annotations in this dataset beyond the model card content.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\n\n\nN/A",
"#### Personal and Sensitive Information\n\n\n\nWe make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.",
"## Bias, Risks, and Limitations\n\n\n\nModel cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. \nSome model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias. \n\nWhilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nNo formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.",
"## Dataset Card Authors \n\n@davanstrien",
"## Dataset Card Contact\n\n@davanstrien"
] |
[
"TAGS\n#task_categories-text-retrieval #size_categories-100K<n<1M #ethics #region-us \n",
"# Dataset Card for Hugging Face Hub Model Cards\n\nThis datasets consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. \nThis dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.\n\nThis dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.",
"## Dataset Details",
"### Dataset Description\n\n\n- Curated by: Daniel van Strien\n- Language(s) (NLP): Model cards on the Hugging Face Hub are predominantly in English but may include other languages.",
"## Uses\n\nThere are a number of potential uses for this dataset including:\n- text mining to find common themes in model cards\n- analysis of the model card format/content\n- topic modelling of model cards\n- analysis of the model card metadata\n- training language models on model cards",
"### Out-of-Scope Use",
"## Dataset Structure\n\nThis dataset has a single split.",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nThe dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.",
"### Source Data\n\nThe source data is 'URL' files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.",
"#### Data Collection and Processing\n\n\n\nThe data is downloaded using a CRON job on a daily basis.",
"#### Who are the source data producers?\n\nThe source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.",
"### Annotations [optional]\n\nThere are no additional annotations in this dataset beyond the model card content.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\n\n\nN/A",
"#### Personal and Sensitive Information\n\n\n\nWe make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.",
"## Bias, Risks, and Limitations\n\n\n\nModel cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. \nSome model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias. \n\nWhilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nNo formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.",
"## Dataset Card Authors \n\n@davanstrien",
"## Dataset Card Contact\n\n@davanstrien"
] |
[
33,
165,
4,
44,
60,
9,
14,
5,
79,
43,
22,
84,
26,
8,
12,
62,
148,
66,
11,
10
] |
[
"passage: TAGS\n#task_categories-text-retrieval #size_categories-100K<n<1M #ethics #region-us \n# Dataset Card for Hugging Face Hub Model Cards\n\nThis datasets consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. \nThis dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.\n\nThis dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.## Dataset Details### Dataset Description\n\n\n- Curated by: Daniel van Strien\n- Language(s) (NLP): Model cards on the Hugging Face Hub are predominantly in English but may include other languages.## Uses\n\nThere are a number of potential uses for this dataset including:\n- text mining to find common themes in model cards\n- analysis of the model card format/content\n- topic modelling of model cards\n- analysis of the model card metadata\n- training language models on model cards### Out-of-Scope Use## Dataset Structure\n\nThis dataset has a single split.## Dataset Creation### Curation Rationale\n\n\n\nThe dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.### Source Data\n\nThe source data is 'URL' files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.#### Data Collection and Processing\n\n\n\nThe data is downloaded using a CRON job on a daily basis."
] |
4a2304e66d1fefb35be7ca134489c5ccd11ed2f4
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 2
## inverse_template: 0
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt20_1
|
[
"region:us"
] |
2023-10-12T20:52:21+00:00
|
{}
|
2023-10-12T21:06:49+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 2
## inverse_template: 0
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 2",
"## inverse_template: 0"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 2",
"## inverse_template: 0"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 2## inverse_template: 0"
] |
f8acaaeef7a972c5f6f53f71f5c01d782070ffaf
|
# Monika Chat v1 (10152023)
* dataset of ~680 items (dialogue scraped from game, reddit, and Twitter)
* these items were augmented by [l2-7b-monika-v0.3c1](https://huggingface.co/922-CA/llama-2-7b-monika-v0.3c1) to turn each into snippets of multi-turn chat dialogue between Player and Monika
* finally, these were manually edited, with more manually crafted items (including info about the character) added in
|
922-CA/MoCha_v1
|
[
"license:openrail",
"region:us"
] |
2023-10-12T20:52:37+00:00
|
{"license": "openrail"}
|
2023-10-14T23:04:06+00:00
|
[] |
[] |
TAGS
#license-openrail #region-us
|
# Monika Chat v1 (10152023)
* dataset of ~680 items (dialogue scraped from game, reddit, and Twitter)
* these items were augmented by l2-7b-monika-v0.3c1 to turn each into snippets of multi-turn chat dialogue between Player and Monika
* finally, these were then manually edited, with more manually crafted items including info about character added in
|
[
"# Monika Chat v1 (10152023)\n* dataset of ~680 items (dialogue scraped from game, reddit, and Twitter)\n* these items were augmented by l2-7b-monika-v0.3c1 to turn each into snippets of multi-turn chat dialogue between Player and Monika\n* finally, these were then manually edited, with more manually crafted items including info about character added in"
] |
[
"TAGS\n#license-openrail #region-us \n",
"# Monika Chat v1 (10152023)\n* dataset of ~680 items (dialogue scraped from game, reddit, and Twitter)\n* these items were augmented by l2-7b-monika-v0.3c1 to turn each into snippets of multi-turn chat dialogue between Player and Monika\n* finally, these were then manually edited, with more manually crafted items including info about character added in"
] |
[
12,
93
] |
[
"passage: TAGS\n#license-openrail #region-us \n# Monika Chat v1 (10152023)\n* dataset of ~680 items (dialogue scraped from game, reddit, and Twitter)\n* these items were augmented by l2-7b-monika-v0.3c1 to turn each into snippets of multi-turn chat dialogue between Player and Monika\n* finally, these were then manually edited, with more manually crafted items including info about character added in"
] |
252bcb5f59d92ac408a4452499c0e2b5f88ed96c
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 2
## inverse_template: 1
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt21_1
|
[
"region:us"
] |
2023-10-12T21:07:48+00:00
|
{}
|
2023-10-12T21:08:01+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 2
## inverse_template: 1
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 2",
"## inverse_template: 1"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 2",
"## inverse_template: 1"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 2## inverse_template: 1"
] |
ca3ab6ce4471cf2c5b88945f21d45dae08e084d0
|
# Readme
hello! s
|
peterbeamish/hack-cnn
|
[
"source_datasets:github",
"language:en",
"license:other",
"region:us"
] |
2023-10-12T21:15:54+00:00
|
{"language": ["en"], "license": "other", "source_datasets": ["github"], "license_name": "notouch", "license_details": "notouch", "configs": [{"config_name": "default", "splits": [{"name": "train", "num_bytes": 725, "num_examples": 2}, {"name": "test", "num_bytes": 725, "num_examples": 2}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "highlights", "dtype": "string"}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 725, "num_examples": 2}, {"name": "test", "num_bytes": 725, "num_examples": 2}], "download_size": 6468, "dataset_size": 1450}]}
|
2023-10-13T00:10:44+00:00
|
[] |
[
"en"
] |
TAGS
#source_datasets-github #language-English #license-other #region-us
|
# Readme
hello! s
|
[
"# Readme\nhello! s"
] |
[
"TAGS\n#source_datasets-github #language-English #license-other #region-us \n",
"# Readme\nhello! s"
] |
[
24,
7
] |
[
"passage: TAGS\n#source_datasets-github #language-English #license-other #region-us \n# Readme\nhello! s"
] |
214d9a81ef9d333039faeda85a24e045ded71d5d
|
# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
## Overview
This repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Large Language Models (LLMs). See our paper for details.
[**DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models**](https://arxiv.org/abs//2306.11698)
*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*
https://arxiv.org/pdf/2306.11698.pdf
This project is organized around the following **eight** primary areas of trustworthiness, including:
1. Toxicity
2. Stereotype and bias
3. Adversarial robustness
4. Out-of-Distribution Robustness
5. Privacy
6. Robustness to Adversarial Demonstrations
7. Machine Ethics
8. Fairness
## Getting Started
To evaluate using DecodingTrust dataset, please install the DecodingTrust package as below:
### (Conda +) Pip
For now, we suggest installing DecodingTrust by cloning our repository and installing it in editable mode. This will keep the data, code, and configurations in the same place.
```bash
git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
pip install -e .
```
Please note that this will install PyTorch with `pip`. If your system does not have a `CUDA` version compatible with the PyTorch `pip` wheel, install `PyTorch` with `Conda` first, as shown below.
```bash
conda create --name dt-test python=3.9 pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
It is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it with our data.
```bash
conda create --name dt-test python=3.9
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
### Support for the `ppc64le` Architecture
We also support the `ppc64le` architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following `conda` channels so that we can utilize pre-built packages.
```
--add channels 'defaults' # lowest priority
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/'
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/'
--add channels 'https://opence.mit.edu'
--add channels 'https://ftp.osuosl.org/pub/open-ce/current/'
--add channels 'conda-forge' # highest priority
```
Then, install the following pre-built packages.
```bash
mamba create --name dt-test python==3.9 pytorch=2.0.1 torchvision=0.15.2 spacy=3.5.3 scipy=1.10.1 fairlearn~=0.9.0 scikit-learn~=1.1.2 pandas~=2.0.3 pyarrow~=11.0.0 rust -c conda-forge
```
Finally, install DecodingTrust with `pip` as usual.
### Docker / Singularity
To use DecodingTrust with docker, simply pull the following docker image.
```bash
sudo docker pull danielz01/decoding-trust
docker run -it \
-v /path/on/host:/path/in/container \
--gpus all \
decoding-trust/v1.0:latest [arg1 arg2 ...]
```
To use it through singularity or apptainer container environments on HPC systems, simply run the following.
```bash
module load singularity # Change it to whatever module name your singularity / apptainer environment was given
singularity pull decoding-trust-v1.0.sif docker://danielz01/decoding-trust
singularity exec --nv --bind /path/on/host:/path/in/container decoding-trust-v1.0.sif [arg1 arg2]
```
We will also have a container build for `ppc64le` platforms soon. Stay tuned!
### Notes
+ Each of the eight areas has its own subdirectory containing the respective code and README.
+ Follow the specific `README`: Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.
## [Important] Candidate models
In our benchmark, to have consistent conclusions and results, currently we mainly focus on evaluating the following two OpenAI models:
- `gpt-3.5-turbo-0301`
- `gpt-4-0314`
**Note we use `gpt-3.5-turbo-0301` (with timestamp) released in March instead of `gpt-3.5-turbo` for the sake of model evolution, to ensure reproducibility.**
Currently, we have supported evaluating all the causal LLMs **hosted in Huggingface** or hosted locally. Specifically, we have tested the following open LLMs:
- `Llama-v2-7B-Chat`
- `Vicuna-7B`
- `MPT-7B`
- `Falcon-7B`
- `Alpaca-7B`
- `RedPajama-INCITE-7B-Instruct`
## Tutorial
We have provided a [Tutorial](Tutorial.md) to help you walk through the usage of API to evaluate different trustworthiness perspectives and LLMs.
## Useful tips
- Please first evaluate your experiments with the `++dry_run=True` flag on to check the input / output format, and use `gpt-3.5-turbo-0301` to check the generation since it has lower costs.
- We suggest saving the responses from OpenAI.
## File usage
- `main.py` provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration
- `chat.py` provides robust APIs for creating requests to OpenAI **Chat Completion** models and Huggingface autoregressive LLMs. We recommend implementing experiments based on this file. If you think `chat.py` is not good enough and want to make modifications, please let @acphile and @boxinw know.
- `utils.py` provides auxiliary functions
For other files, please refer to each subdirectory for more information.
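Separately from the evaluation scripts above, the raw benchmark data in this repository can also be read with the `datasets` library. The snippet below is only a sketch of that access path; the configuration and split names (`toxicity`, `realtoxicityprompts.nontoxic`) are taken from this dataset's configuration rather than from the DecodingTrust documentation.

```python
from datasets import load_dataset

# Minimal sketch: read one DecodingTrust perspective directly from the Hugging Face Hub.
toxic_prompts = load_dataset(
    "AI-Secure/DecodingTrust",
    "toxicity",
    split="realtoxicityprompts.nontoxic",
)
print(len(toxic_prompts), toxic_prompts[0])
```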
## License
This project is licensed under the [CC BY-SA 4.0 ]("http://creativecommons.org/licenses/by-sa/4.0/legalcode") - see the LICENSE file for details.
## Citation
Please cite the paper as follows if you use the data or code from DecodingTrust:
```
@article{wang2023decodingtrust,
title={DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models},
author={Wang, Boxin and Chen, Weixin and Pei, Hengzhi and Xie, Chulin and Kang, Mintong and Zhang, Chenhui and Xu, Chejian and Xiong, Zidi and Dutta, Ritik and Schaeffer, Rylan and others},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023}
}
```
## Contact
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to [email protected].
Thank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future.
|
AI-Secure/DecodingTrust
|
[
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2306.11698",
"region:us"
] |
2023-10-12T21:26:17+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "text-generation", "text2text-generation"], "pretty_name": "DecodingTrust", "arixv": 2306.11698, "configs": [{"config_name": "toxicity", "data_files": [{"split": "realtoxicityprompts.nontoxic", "path": "toxicity/user_prompts/nontoxic.jsonl"}, {"split": "realtoxicityprompts.toxic", "path": "toxicity/user_prompts/toxic.jsonl"}, {"split": "toxic.gpt3.5", "path": "toxicity/user_prompts/toxic.gpt3.5.jsonl"}, {"split": "toxic.gpt4", "path": "toxicity/user_prompts/toxic.gpt4.jsonl"}]}, {"config_name": "adv_demonstration", "data_files": [{"split": "counterfactual.snliPremiseCf", "path": "adv_demonstration/counterfactual/snli_premise_cf/42.jsonl"}, {"split": "counterfactual.snliHypothesisCf", "path": "adv_demonstration/counterfactual/snli_hypothesis_cf/42.jsonl"}, {"split": "counterfactual.controlRaisingCf", "path": "adv_demonstration/counterfactual/control_raising_cf/42.jsonl"}, {"split": "counterfactual.irregularFormCf", "path": "adv_demonstration/counterfactual/irregular_form_cf/42.jsonl"}, {"split": "counterfactual.mainVerbCf", "path": "adv_demonstration/counterfactual/main_verb_cf/42.jsonl"}, {"split": "counterfactual.syntacticCategoryCf", "path": "adv_demonstration/counterfactual/syntactic_category_cf/42.jsonl"}, {"split": "spurious.PP.entailBias", "path": "adv_demonstration/spurious/PP/entail-bias/42.jsonl"}, {"split": "spurious.PP.nonEntailBias", "path": "adv_demonstration/spurious/PP/non-entail-bias/42.jsonl"}, {"split": "spurious.adverb.entailBias", "path": "adv_demonstration/spurious/adverb/entail-bias/42.jsonl"}, {"split": "spurious.adverb.nonEntailBias", "path": "adv_demonstration/spurious/adverb/non-entail-bias/42.jsonl"}, {"split": "spurious.embeddedUnderVerb.entailBias", "path": "adv_demonstration/spurious/embedded_under_verb/entail-bias/42.jsonl"}, {"split": "spurious.embeddedUnderVerb.nonEntailBias", "path": "adv_demonstration/spurious/embedded_under_verb/non-entail-bias/42.jsonl"}, {"split": "spurious.lRelativeClause.entailBias", "path": "adv_demonstration/spurious/l_relative_clause/entail-bias/42.jsonl"}, {"split": "spurious.lRelativeClause.nonEntailBias", "path": "adv_demonstration/spurious/l_relative_clause/non-entail-bias/42.jsonl"}, {"split": "spurious.passive.entailBias", "path": "adv_demonstration/spurious/passive/entail-bias/42.jsonl"}, {"split": "spurious.passive.nonEntailBias", "path": "adv_demonstration/spurious/passive/non-entail-bias/42.jsonl"}, {"split": "spurious.sRelativeClause.entailBias", "path": "adv_demonstration/spurious/s_relative_clause/entail-bias/42.jsonl"}, {"split": "spurious.sRelativeClause.nonEntailBias", "path": "adv_demonstration/spurious/s_relative_clause/non-entail-bias/42.jsonl"}, {"split": "backdoor.sst2.setup1BadwordCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup1BadwordAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_asr/42.jsonl"}, {"split": "backdoor.sst2.setup2BadwordCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup2BadwordAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_asr/42.jsonl"}, {"split": "backdoor.sst2.setup3BadwordCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup3BadwordAsr", "path": 
"adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_asr/42.jsonl"}, {"split": "backdoor.sst2.setup1AddsentCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup1AddsentAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_asr/42.jsonl"}, {"split": "backdoor.sst2.setup2AddsentCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup2AddsentAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_asr/42.jsonl"}, {"split": "backdoor.sst2.setup3AddsentCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup3AddsentAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_asr/42.jsonl"}, {"split": "backdoor.sst2.setup1SynbkdCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup1SynbkdAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_asr/42.jsonl"}, {"split": "backdoor.sst2.setup2SynbkdCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup2SynbkdAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_asr/42.jsonl"}, {"split": "backdoor.sst2.setup3SynbkdCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup3SynbkdAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_asr/42.jsonl"}, {"split": "backdoor.sst2.setup1StylebkdCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup1StylebkdAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_asr/42.jsonl"}, {"split": "backdoor.sst2.setup2StylebkdCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup2StylebkdAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_asr/42.jsonl"}, {"split": "backdoor.sst2.setup3StylebkdCacc", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_cacc/42.jsonl"}, {"split": "backdoor.sst2.setup3StylebkdAsr", "path": "adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_asr/42.jsonl"}]}, {"config_name": "stereotype", "data_files": [{"split": "stereotype", "path": "stereotype/dataset/stereotype_bias_data.jsonl"}]}, {"config_name": "adv-glue-plus-plus", "data_files": [{"split": "sst2", "path": "adv-glue-plus-plus/data/sst2.jsonl"}, {"split": "qqp", "path": "adv-glue-plus-plus/data/qqp.jsonl"}, {"split": "mnli", "path": "adv-glue-plus-plus/data/mnli.jsonl"}, {"split": "mnli_mismatched", "path": "adv-glue-plus-plus/data/mnli-mm.jsonl"}, {"split": "qnli", "path": "adv-glue-plus-plus/data/qnli.jsonl"}, {"split": "rte", "path": "adv-glue-plus-plus/data/rte.jsonl"}]}, {"config_name": "machine_ethics", "data_files": [{"split": "cm_train", "path": "machine_ethics/cm_train.jsonl"}, {"split": "cm_test", "path": "machine_ethics/cm_test.jsonl"}, {"split": "deontology_train", "path": "machine_ethics/deontology_train.jsonl"}, {"split": "deontology_test", "path": "machine_ethics/deontology_test.jsonl"}, {"split": "justice_train", "path": "machine_ethics/justice_train.jsonl"}, {"split": "justice_test", "path": "machine_ethics/justice_test.jsonl"}, {"split": "util_train", "path": "machine_ethics/util_train.jsonl"}, 
{"split": "util_test", "path": "machine_ethics/util_test.jsonl"}, {"split": "virtue_train", "path": "machine_ethics/virtue_train.jsonl"}, {"split": "virtue_test", "path": "machine_ethics/virtue_test.jsonl"}, {"split": "jiminy_train", "path": "machine_ethics/jiminy_train.jsonl"}, {"split": "jiminy_test", "path": "machine_ethics/jiminy_test.jsonl"}, {"split": "jiminy_subset", "path": "machine_ethics/jiminy_subset.jsonl"}]}, {"config_name": "privacy", "data_files": [{"split": "enron.context", "path": "privacy/enron_data/context.jsonl"}, {"split": "enron.email2name", "path": "privacy/enron_data/email2name.jsonl"}, {"split": "enron.one_shot_non_domain", "path": "privacy/enron_data/one_shot_non_domain.jsonl"}, {"split": "enron.one_shot", "path": "privacy/enron_data/one_shot.jsonl"}, {"split": "enron.two_shot_non_domain", "path": "privacy/enron_data/two_shot_non_domain.jsonl"}, {"split": "enron.two_shot", "path": "privacy/enron_data/two_shot.jsonl"}, {"split": "enron.five_shot_non_domain", "path": "privacy/enron_data/five_shot_non_domain.jsonl"}, {"split": "enron.five_shot", "path": "privacy/enron_data/five_shot.jsonl"}]}, {"config_name": "fairness", "data_files": [{"split": "adult.zero_shot.br_0.0", "path": "fairness/fairness_data/adult_0_200_test_base_rate_0.0.jsonl"}, {"split": "adult.zero_shot.br_0.5", "path": "fairness/fairness_data/adult_0_200_test_base_rate_0.5.jsonl"}, {"split": "adult.zero_shot.br_1.0", "path": "fairness/fairness_data/adult_0_200_test_base_rate_1.0.jsonl"}, {"split": "adult.few_shot.tr_br_0.0", "path": "fairness/fairness_data/adult_32_200_train_base_rate_0.0.jsonl"}, {"split": "adult.few_shot.tr_br_0.5", "path": "fairness/fairness_data/adult_32_200_train_base_rate_0.5.jsonl"}, {"split": "adult.few_shot.tr_br_1.0", "path": "fairness/fairness_data/adult_32_200_train_base_rate_1.0.jsonl"}, {"split": "adult.few_shot.num_train_0", "path": "fairness/fairness_data/adult_0_200_train_br_0.0_test_br_0.5.jsonl"}, {"split": "adult.few_shot.num_train_16", "path": "fairness/fairness_data/adult_16_200_train_br_0.0_test_br_0.5.jsonl"}, {"split": "adult.few_shot.num_train_32", "path": "fairness/fairness_data/adult_32_200_train_br_0.0_test_br_0.5.jsonl"}, {"split": "crime.zero_shot.br_0.0", "path": "fairness/fairness_data/crime_0_300_test_base_rate_0.0.jsonl"}, {"split": "crime.zero_shot.br_0.5", "path": "fairness/fairness_data/crime_0_300_test_base_rate_0.5.jsonl"}, {"split": "crime.zero_shot.br_1.0", "path": "fairness/fairness_data/crime_0_300_test_base_rate_1.0.jsonl"}]}, {"config_name": "ood", "data_files": [{"split": "style", "path": "ood/style.jsonl"}, {"split": "knowledge", "path": "ood/knowledge.jsonl"}]}]}
|
2023-12-27T23:53:48+00:00
|
[
"2306.11698"
] |
[
"en"
] |
TAGS
#task_categories-text-classification #task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #arxiv-2306.11698 #region-us
|
# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
## Overview
This repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Large Language Models (LLMs). See our paper for details.
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*
URL
This project is organized around the following eight primary areas of trustworthiness, including:
1. Toxicity
2. Stereotype and bias
3. Adversarial robustness
4. Out-of-Distribution Robustness
5. Privacy
6. Robustness to Adversarial Demonstrations
7. Machine Ethics
8. Fairness
## Getting Started
To evaluate using DecodingTrust dataset, please install the DecodingTrust package as below:
### (Conda +) Pip
For now, we suggest installing DecodingTrust by cloning our repository and installing it in editable mode. This will keep the data, code, and configurations in the same place.
Please note that this will install PyTorch with 'pip'. If your system does not have a 'CUDA' version compatible with the PyTorch 'pip' wheel, install 'PyTorch' with 'Conda' first, as shown below.
It is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it with our data.
### Support for the 'ppc64le' Architecture
We also support the 'ppc64le' architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following 'conda' channels so that we can utilize pre-built packages.
Then, install the following pre-built packages.
Finally, install DecodingTrust with 'pip' as usual.
### Docker / Singularity
To use DecodingTrust with docker, simply pull the following docker image.
To use it through singularity or apptainer container environments on HPC systems, simply run the following.
We will also have a container build for 'ppc64le' platforms soon. Stay tuned!
### Notes
+ Each of the eight areas has its own subdirectory containing the respective code and README.
+ Follow the specific 'README': Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.
## [Important] Candidate models
In our benchmark, to have consistent conclusions and results, currently we mainly focus on evaluating the following two OpenAI models:
- 'gpt-3.5-turbo-0301'
- 'gpt-4-0314'
Note we use 'gpt-3.5-turbo-0301' (with time stamp) released in March instead of 'gpt-3.5-turbo' for sake of model evolution to ensure reproducibility.
Currently, we have supported evaluating all the causal LLMs hosted in Huggingface or hosted locally. Specifically, we have tested the following open LLMs:
- 'Llama-v2-7B-Chat'
- 'Vicuna-7B'
- 'MPT-7B'
- 'Falcon-7B'
- 'Alpaca-7B'
- 'RedPajama-INCITE-7B-Instruct'
## Tutorial
We have provided a Tutorial to help you walk through the usage of API to evaluate different trustworthiness perspectives and LLMs.
## Useful tips
- Please first evaluate your experiments with the '++dry_run=True' flag on to check the input / output format, and use 'gpt-3.5-turbo-0301' to check the generation since it has lower costs.
- We suggest saving the responses from OpenAI.
## File usage
- 'URL' provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration
- 'URL' provides robust APIs for creating requests to OpenAI Chat Completion models and Huggingface autoregressive LLMs. We recommend implementing experiments based on this file. If you think 'URL' is not good enough and want to make modifications, please let @acphile and @boxinw know.
- 'URL' provides auxiliary functions
For other files, please refer to each subdirectory for more information.
## License
This project is licensed under the CC BY-SA 4.0 - see the LICENSE file for details.
Please cite the paper as follows if you use the data or code from DecodingTrust:
## Contact
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to boxinw2@URL.
Thank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future.
|
[
"# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models",
"## Overview\n\nThis repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Large Language Models (LLMs). See our paper for details.\n\nDecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models \n\n*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*\n\nURL\n\nThis project is organized around the following eight primary areas of trustworthiness, including:\n1. Toxicity\n2. Stereotype and bias\n3. Adversarial robustness\n4. Out-of-Distribution Robustness\n5. Privacy \n6. Robustness to Adversarial Demonstrations\n7. Machine Ethics\n8. Fairness",
"## Getting Started\n\nTo evaluate using DecodingTrust dataset, please install the DecodingTrust package as below:",
"### (Conda +) Pip\n\nFor now, we suggest installing DecodingTrust by cloning our repository and install it in editable mode. This will keep the data, code, and configurations in the same place. \n\n\n\nPlease note that this will install PyTorch with 'pip'. If your system does not have a 'CUDA' version compatible with the PyTorch 'pip' wheel. To install 'PyTorch' with 'Conda' first, as shown below.\n\n\n\nIt is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it will our data.",
"### Support for the 'ppc64le' Architecture \n\nWe also support the 'ppc64le' architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following 'conda' channels so that we can utilize pre-built packages.\n\n\n\nThen, install the following pre-built packages.\n\n\n\nFinally, install DecodingTrust with 'pip' as usual.",
"### Docker / Singularity\n\nTo use DecodingTrust with docker, simply pull the following docker image.\n\nTo use it in through singularity or apptainer container environments on HPC environments, simply run the following.\n\n\nWe will also have a container build for 'ppc64le' platforms soon. Stay tuned!",
"### Notes\n+ Each of the eight areas has its own subdirectory containing the respective code and README.\n\n+ Follow the specific 'README': Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.",
"## [Important] Candidate models\nIn our benchmark, to have consistent conclusions and results, currently we mianly focus on evaluating the following two OpenAI models:\n\n- 'gpt-3.5-turbo-0301' \n- 'gpt-4-0314'\n\nNote we use 'gpt-3.5-turbo-0301' (with time stamp) released in March instead of 'gpt-3.5-turbo' for sake of model evolution to ensure reproducibility.\n\nCurrently, we have supported evaluating all the causal LLMs hosted in Huggingface or hosted locally. Specifically, we have tested the following open LLMs:\n\n- 'Llama-v2-7B-Chat'\n- 'Vicuna-7BAlpaca-7B'\n- 'MPT-7B'\n- 'Falcon-7B'\n- 'Alpaca-7B'\n- 'RedPajama-INCITE-7B-Instruct'",
"## Tutorial\n\nWe have provided a Tutorial to help you walk through the usage of API to evaluate different trustworthiness perspectives and LLMs.",
"## Useful tips\n\n- Please first evaluate your experiments with '++dry_run=True' flags on to check the input / output format, and use 'gpt-3.5-turbo-0301' to check the generation since it has lower costs.\n- Suggesting saving the responses from OpenAI.",
"## File usage\n\n- 'URL' provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration\n- 'URL' provides robust APIs for creating requests to OpenAI Chat Compleition models and Huggingface autoregressive LLMs. Recommend implementing experiments based on this file. If you think 'URL' is not good enough and want to make modifications, please let @acphile and @boxinw know.\n- 'URL' provide auxiliary functions \n\nFor other files, please refer to each subdirs for more information.",
"## License\nThis project is licensed under the CC BY-SA 4.0 - see the LICENSE file for details.\n\nPlease cite the paper as follows if you use the data or code from DecodingTrust:",
"## Contact\nPlease reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to boxinw2@URL.\n\nThank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future."
] |
[
"TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #arxiv-2306.11698 #region-us \n",
"# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models",
"## Overview\n\nThis repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Large Language Models (LLMs). See our paper for details.\n\nDecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models \n\n*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*\n\nURL\n\nThis project is organized around the following eight primary areas of trustworthiness, including:\n1. Toxicity\n2. Stereotype and bias\n3. Adversarial robustness\n4. Out-of-Distribution Robustness\n5. Privacy \n6. Robustness to Adversarial Demonstrations\n7. Machine Ethics\n8. Fairness",
"## Getting Started\n\nTo evaluate using DecodingTrust dataset, please install the DecodingTrust package as below:",
"### (Conda +) Pip\n\nFor now, we suggest installing DecodingTrust by cloning our repository and install it in editable mode. This will keep the data, code, and configurations in the same place. \n\n\n\nPlease note that this will install PyTorch with 'pip'. If your system does not have a 'CUDA' version compatible with the PyTorch 'pip' wheel. To install 'PyTorch' with 'Conda' first, as shown below.\n\n\n\nIt is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it will our data.",
"### Support for the 'ppc64le' Architecture \n\nWe also support the 'ppc64le' architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following 'conda' channels so that we can utilize pre-built packages.\n\n\n\nThen, install the following pre-built packages.\n\n\n\nFinally, install DecodingTrust with 'pip' as usual.",
"### Docker / Singularity\n\nTo use DecodingTrust with docker, simply pull the following docker image.\n\nTo use it in through singularity or apptainer container environments on HPC environments, simply run the following.\n\n\nWe will also have a container build for 'ppc64le' platforms soon. Stay tuned!",
"### Notes\n+ Each of the eight areas has its own subdirectory containing the respective code and README.\n\n+ Follow the specific 'README': Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.",
"## [Important] Candidate models\nIn our benchmark, to have consistent conclusions and results, currently we mianly focus on evaluating the following two OpenAI models:\n\n- 'gpt-3.5-turbo-0301' \n- 'gpt-4-0314'\n\nNote we use 'gpt-3.5-turbo-0301' (with time stamp) released in March instead of 'gpt-3.5-turbo' for sake of model evolution to ensure reproducibility.\n\nCurrently, we have supported evaluating all the causal LLMs hosted in Huggingface or hosted locally. Specifically, we have tested the following open LLMs:\n\n- 'Llama-v2-7B-Chat'\n- 'Vicuna-7BAlpaca-7B'\n- 'MPT-7B'\n- 'Falcon-7B'\n- 'Alpaca-7B'\n- 'RedPajama-INCITE-7B-Instruct'",
"## Tutorial\n\nWe have provided a Tutorial to help you walk through the usage of API to evaluate different trustworthiness perspectives and LLMs.",
"## Useful tips\n\n- Please first evaluate your experiments with '++dry_run=True' flags on to check the input / output format, and use 'gpt-3.5-turbo-0301' to check the generation since it has lower costs.\n- Suggesting saving the responses from OpenAI.",
"## File usage\n\n- 'URL' provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration\n- 'URL' provides robust APIs for creating requests to OpenAI Chat Compleition models and Huggingface autoregressive LLMs. Recommend implementing experiments based on this file. If you think 'URL' is not good enough and want to make modifications, please let @acphile and @boxinw know.\n- 'URL' provide auxiliary functions \n\nFor other files, please refer to each subdirs for more information.",
"## License\nThis project is licensed under the CC BY-SA 4.0 - see the LICENSE file for details.\n\nPlease cite the paper as follows if you use the data or code from DecodingTrust:",
"## Contact\nPlease reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to boxinw2@URL.\n\nThank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future."
] |
[
88,
22,
247,
28,
145,
92,
74,
63,
205,
31,
70,
125,
45,
68
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #arxiv-2306.11698 #region-us \n# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models## Overview\n\nThis repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Large Language Models (LLMs). See our paper for details.\n\nDecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models \n\n*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*\n\nURL\n\nThis project is organized around the following eight primary areas of trustworthiness, including:\n1. Toxicity\n2. Stereotype and bias\n3. Adversarial robustness\n4. Out-of-Distribution Robustness\n5. Privacy \n6. Robustness to Adversarial Demonstrations\n7. Machine Ethics\n8. Fairness## Getting Started\n\nTo evaluate using DecodingTrust dataset, please install the DecodingTrust package as below:",
"passage: ### (Conda +) Pip\n\nFor now, we suggest installing DecodingTrust by cloning our repository and install it in editable mode. This will keep the data, code, and configurations in the same place. \n\n\n\nPlease note that this will install PyTorch with 'pip'. If your system does not have a 'CUDA' version compatible with the PyTorch 'pip' wheel. To install 'PyTorch' with 'Conda' first, as shown below.\n\n\n\nIt is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it will our data.### Support for the 'ppc64le' Architecture \n\nWe also support the 'ppc64le' architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following 'conda' channels so that we can utilize pre-built packages.\n\n\n\nThen, install the following pre-built packages.\n\n\n\nFinally, install DecodingTrust with 'pip' as usual.### Docker / Singularity\n\nTo use DecodingTrust with docker, simply pull the following docker image.\n\nTo use it in through singularity or apptainer container environments on HPC environments, simply run the following.\n\n\nWe will also have a container build for 'ppc64le' platforms soon. Stay tuned!### Notes\n+ Each of the eight areas has its own subdirectory containing the respective code and README.\n\n+ Follow the specific 'README': Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.## [Important] Candidate models\nIn our benchmark, to have consistent conclusions and results, currently we mianly focus on evaluating the following two OpenAI models:\n\n- 'gpt-3.5-turbo-0301' \n- 'gpt-4-0314'\n\nNote we use 'gpt-3.5-turbo-0301' (with time stamp) released in March instead of 'gpt-3.5-turbo' for sake of model evolution to ensure reproducibility.\n\nCurrently, we have supported evaluating all the causal LLMs hosted in Huggingface or hosted locally. Specifically, we have tested the following open LLMs:\n\n- 'Llama-v2-7B-Chat'\n- 'Vicuna-7BAlpaca-7B'\n- 'MPT-7B'\n- 'Falcon-7B'\n- 'Alpaca-7B'\n- 'RedPajama-INCITE-7B-Instruct'## Tutorial\n\nWe have provided a Tutorial to help you walk through the usage of API to evaluate different trustworthiness perspectives and LLMs."
] |
49f6e3642baacbe147924cd1a72c9193a9e2869c
|
# Dataset Card for Evaluation run of WizardLM/WizardMath-13B-V1.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/WizardLM/WizardMath-13B-V1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [WizardLM/WizardMath-13B-V1.0](https://huggingface.co/WizardLM/WizardMath-13B-V1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_WizardLM__WizardMath-13B-V1.0",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-12T22:45:52.861079](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardMath-13B-V1.0/blob/main/results_2023-10-12T22-45-52.861079.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0024119127516778523,
"em_stderr": 0.0005023380498893313,
"f1": 0.07075817953020154,
"f1_stderr": 0.0015254513833319102,
"acc": 0.4212998893591507,
"acc_stderr": 0.010848795701326375
},
"harness|drop|3": {
"em": 0.0024119127516778523,
"em_stderr": 0.0005023380498893313,
"f1": 0.07075817953020154,
"f1_stderr": 0.0015254513833319102
},
"harness|gsm8k|5": {
"acc": 0.12357846853677028,
"acc_stderr": 0.009065050306776925
},
"harness|winogrande|5": {
"acc": 0.7190213101815311,
"acc_stderr": 0.012632541095875825
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_WizardLM__WizardMath-13B-V1.0
|
[
"region:us"
] |
2023-10-12T21:45:57+00:00
|
{"pretty_name": "Evaluation run of WizardLM/WizardMath-13B-V1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [WizardLM/WizardMath-13B-V1.0](https://huggingface.co/WizardLM/WizardMath-13B-V1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_WizardLM__WizardMath-13B-V1.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-12T22:45:52.861079](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardMath-13B-V1.0/blob/main/results_2023-10-12T22-45-52.861079.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0024119127516778523,\n \"em_stderr\": 0.0005023380498893313,\n \"f1\": 0.07075817953020154,\n \"f1_stderr\": 0.0015254513833319102,\n \"acc\": 0.4212998893591507,\n \"acc_stderr\": 0.010848795701326375\n },\n \"harness|drop|3\": {\n \"em\": 0.0024119127516778523,\n \"em_stderr\": 0.0005023380498893313,\n \"f1\": 0.07075817953020154,\n \"f1_stderr\": 0.0015254513833319102\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12357846853677028,\n \"acc_stderr\": 0.009065050306776925\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7190213101815311,\n \"acc_stderr\": 0.012632541095875825\n }\n}\n```", "repo_url": "https://huggingface.co/WizardLM/WizardMath-13B-V1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_12T22_45_52.861079", "path": ["**/details_harness|drop|3_2023-10-12T22-45-52.861079.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-12T22-45-52.861079.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_12T22_45_52.861079", "path": ["**/details_harness|gsm8k|5_2023-10-12T22-45-52.861079.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-12T22-45-52.861079.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_12T22_45_52.861079", "path": ["**/details_harness|winogrande|5_2023-10-12T22-45-52.861079.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-12T22-45-52.861079.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T22_45_52.861079", "path": ["results_2023-10-12T22-45-52.861079.parquet"]}, {"split": "latest", "path": ["results_2023-10-12T22-45-52.861079.parquet"]}]}]}
|
2023-10-12T21:46:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of WizardLM/WizardMath-13B-V1.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model WizardLM/WizardMath-13B-V1.0 on the Open LLM Leaderboard.
The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-12T22:45:52.861079(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of WizardLM/WizardMath-13B-V1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model WizardLM/WizardMath-13B-V1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-12T22:45:52.861079(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of WizardLM/WizardMath-13B-V1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model WizardLM/WizardMath-13B-V1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-12T22:45:52.861079(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of WizardLM/WizardMath-13B-V1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model WizardLM/WizardMath-13B-V1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-12T22:45:52.861079(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
782deb41407712b8409f99f1f18b2de79d1fab63
|
# Dataset Card for "slimpajama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rokset3/slimpajama
|
[
"region:us"
] |
2023-10-12T21:48:18+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "redpajama_set_name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 23874206724, "num_examples": 5489000}], "download_size": 13962151299, "dataset_size": 23874206724}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T22:12:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "slimpajama"
More Information needed
|
[
"# Dataset Card for \"slimpajama\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"slimpajama\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"slimpajama\"\n\nMore Information needed"
] |
f2302b22dce66bae5e32324c3f5bbc555b59197b
|
# Dataset Card for "biology_dataset_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_unified
|
[
"region:us"
] |
2023-10-12T22:48:34+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 59401701, "num_examples": 19999}], "download_size": 0, "dataset_size": 59401701}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T20:20:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_unified\"\n\nMore Information needed"
] |
b226f8dd6b6a92f1ab3d24be7a34ff9390e99a33
|
# Dataset Card for "biology_dataset_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_embedded
|
[
"region:us"
] |
2023-10-12T22:59:50+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 141397601, "num_examples": 19999}], "download_size": 0, "dataset_size": 141397601}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T20:20:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_embedded\"\n\nMore Information needed"
] |
964d04c38ba33eb6500e55808b4614fc01809229
|
# Dataset Card for "icdst_multiwoz_turns_v24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Brendan/icdst_multiwoz_turns_v24
|
[
"region:us"
] |
2023-10-12T23:07:28+00:00
|
{"dataset_info": {"features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "turn_id", "dtype": "int8"}, {"name": "domains", "sequence": "string"}, {"name": "user_utterances", "sequence": "string"}, {"name": "system_utterances", "sequence": "string"}, {"name": "slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}]}, {"name": "turn_slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}]}, {"name": "last_slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": 
"string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}]}, {"name": "system_response_acts", "sequence": "string"}, {"name": "system_response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78112115, "num_examples": 54971}, {"name": "validation", "num_bytes": 10725891, "num_examples": 7374}, {"name": "test", "num_bytes": 10734111, "num_examples": 7368}, {"name": "valid_20p_ablation", "num_bytes": 2104741.561838893, "num_examples": 1447}, {"name": "valid_10p", "num_bytes": 1063279.9458909682, "num_examples": 731}, {"name": "valid_50p", "num_bytes": 5378945.608624898, "num_examples": 3698}, {"name": "1p_train_v1", "num_bytes": 744588.0238671299, "num_examples": 524}, {"name": "1p_train_v2", "num_bytes": 741746.0848447363, "num_examples": 522}, {"name": "1p_train_v3", "num_bytes": 822741.3469829547, "num_examples": 579}, {"name": "5p_train_v1", "num_bytes": 3880667.735078496, "num_examples": 2731}, {"name": "5p_train_v2", "num_bytes": 3913350.0338360225, "num_examples": 2754}, {"name": "5p_train_v3", "num_bytes": 3806777.3204962616, "num_examples": 2679}, {"name": "10p_train_v1", "num_bytes": 7786912.921358534, "num_examples": 5480}, {"name": "10p_train_v2", "num_bytes": 7785491.951847338, "num_examples": 5479}, {"name": "10p_train_v3", "num_bytes": 7691707.964108348, "num_examples": 5413}], "download_size": 6875945, "dataset_size": 145293067.4987746}}
|
2023-10-25T20:41:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "icdst_multiwoz_turns_v24"
More Information needed
|
[
"# Dataset Card for \"icdst_multiwoz_turns_v24\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"icdst_multiwoz_turns_v24\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"icdst_multiwoz_turns_v24\"\n\nMore Information needed"
] |
96cd0e9586c4d4514f0cfc09fbe1584e72026cdf
|
# Dataset Card for "icdst_multiwoz_turns_v21"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Brendan/icdst_multiwoz_turns_v21
|
[
"region:us"
] |
2023-10-12T23:12:36+00:00
|
{"dataset_info": {"features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "turn_id", "dtype": "int8"}, {"name": "domains", "sequence": "string"}, {"name": "user_utterances", "sequence": "string"}, {"name": "system_utterances", "sequence": "string"}, {"name": "slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}]}, {"name": "turn_slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}]}, {"name": "last_slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": 
"string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}]}, {"name": "system_response_acts", "sequence": "string"}, {"name": "system_response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78112115, "num_examples": 54971}, {"name": "validation", "num_bytes": 10681377, "num_examples": 7374}, {"name": "test", "num_bytes": 10711425, "num_examples": 7368}, {"name": "valid_20p_ablation", "num_bytes": 2096006.5797396258, "num_examples": 1447}, {"name": "valid_10p", "num_bytes": 1058867.1802278275, "num_examples": 731}, {"name": "valid_50p", "num_bytes": 5356622.2058584215, "num_examples": 3698}, {"name": "1p_train_v1", "num_bytes": 744588.0238671299, "num_examples": 524}, {"name": "1p_train_v2", "num_bytes": 741746.0848447363, "num_examples": 522}, {"name": "1p_train_v3", "num_bytes": 822741.3469829547, "num_examples": 579}, {"name": "5p_train_v1", "num_bytes": 3880667.735078496, "num_examples": 2731}, {"name": "5p_train_v2", "num_bytes": 3913350.0338360225, "num_examples": 2754}, {"name": "5p_train_v3", "num_bytes": 3806777.3204962616, "num_examples": 2679}, {"name": "10p_train_v1", "num_bytes": 7786912.921358534, "num_examples": 5480}, {"name": "10p_train_v2", "num_bytes": 7785491.951847338, "num_examples": 5479}, {"name": "10p_train_v3", "num_bytes": 7691707.964108348, "num_examples": 5413}], "download_size": 6866897, "dataset_size": 145190396.3482457}}
|
2023-10-25T20:41:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "icdst_multiwoz_turns_v21"
More Information needed
|
[
"# Dataset Card for \"icdst_multiwoz_turns_v21\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"icdst_multiwoz_turns_v21\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"icdst_multiwoz_turns_v21\"\n\nMore Information needed"
] |
27547dbd2e1426cfc31d4dba154c71bac7b8403b
|
# Dataset Card for Evaluation run of bigscience/bloomz-7b1-mt
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bigscience/bloomz-7b1-mt
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bigscience__bloomz-7b1-mt",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-04T12:55:29.467627](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloomz-7b1-mt/blob/main/results_2023-12-04T12-55-29.467627.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_bigscience__bloomz-7b1-mt
|
[
"region:us"
] |
2023-10-12T23:23:18+00:00
|
{"pretty_name": "Evaluation run of bigscience/bloomz-7b1-mt", "dataset_summary": "Dataset automatically created during the evaluation run of model [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bigscience__bloomz-7b1-mt\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-04T12:55:29.467627](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloomz-7b1-mt/blob/main/results_2023-12-04T12-55-29.467627.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/bigscience/bloomz-7b1-mt", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_13T00_23_14.934221", "path": ["**/details_harness|drop|3_2023-10-13T00-23-14.934221.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-13T00-23-14.934221.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_13T00_23_14.934221", "path": ["**/details_harness|gsm8k|5_2023-10-13T00-23-14.934221.parquet"]}, {"split": "2023_12_03T14_51_42.619501", "path": ["**/details_harness|gsm8k|5_2023-12-03T14-51-42.619501.parquet"]}, {"split": "2023_12_03T15_54_23.973238", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-54-23.973238.parquet"]}, {"split": "2023_12_03T15_55_23.937859", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-55-23.937859.parquet"]}, {"split": "2023_12_04T09_44_40.128001", "path": ["**/details_harness|gsm8k|5_2023-12-04T09-44-40.128001.parquet"]}, {"split": "2023_12_04T09_44_57.525053", "path": ["**/details_harness|gsm8k|5_2023-12-04T09-44-57.525053.parquet"]}, {"split": "2023_12_04T12_55_16.770565", "path": ["**/details_harness|gsm8k|5_2023-12-04T12-55-16.770565.parquet"]}, {"split": "2023_12_04T12_55_29.467627", "path": ["**/details_harness|gsm8k|5_2023-12-04T12-55-29.467627.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-04T12-55-29.467627.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_13T00_23_14.934221", "path": ["**/details_harness|winogrande|5_2023-10-13T00-23-14.934221.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-13T00-23-14.934221.parquet"]}]}, {"config_name": "results", 
"data_files": [{"split": "2023_10_13T00_23_14.934221", "path": ["results_2023-10-13T00-23-14.934221.parquet"]}, {"split": "2023_12_03T14_51_42.619501", "path": ["results_2023-12-03T14-51-42.619501.parquet"]}, {"split": "2023_12_03T15_54_23.973238", "path": ["results_2023-12-03T15-54-23.973238.parquet"]}, {"split": "2023_12_03T15_55_23.937859", "path": ["results_2023-12-03T15-55-23.937859.parquet"]}, {"split": "2023_12_04T09_44_40.128001", "path": ["results_2023-12-04T09-44-40.128001.parquet"]}, {"split": "2023_12_04T09_44_57.525053", "path": ["results_2023-12-04T09-44-57.525053.parquet"]}, {"split": "2023_12_04T12_55_16.770565", "path": ["results_2023-12-04T12-55-16.770565.parquet"]}, {"split": "2023_12_04T12_55_29.467627", "path": ["results_2023-12-04T12-55-29.467627.parquet"]}, {"split": "latest", "path": ["results_2023-12-04T12-55-29.467627.parquet"]}]}]}
|
2023-12-04T12:55:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of bigscience/bloomz-7b1-mt
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bigscience/bloomz-7b1-mt on the Open LLM Leaderboard.
The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-04T12:55:29.467627(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of bigscience/bloomz-7b1-mt",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bigscience/bloomz-7b1-mt on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-04T12:55:29.467627(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bigscience/bloomz-7b1-mt",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bigscience/bloomz-7b1-mt on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-04T12:55:29.467627(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bigscience/bloomz-7b1-mt## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bigscience/bloomz-7b1-mt on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-04T12:55:29.467627(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
40f47a899fbe791635a68f5636898ac03a9310d3
|
# Dataset Card for "neutral_claim"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Brecon/neutral_claim
|
[
"region:us"
] |
2023-10-12T23:41:31+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31976.842105263157, "num_examples": 30}, {"name": "test", "num_bytes": 8527.157894736842, "num_examples": 8}], "download_size": 34081, "dataset_size": 40504.0}}
|
2023-10-12T23:41:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "neutral_claim"
More Information needed
|
[
"# Dataset Card for \"neutral_claim\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"neutral_claim\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"neutral_claim\"\n\nMore Information needed"
] |
feaa918df6600a9aec88d02d9f30ceb8fa1ae835
|
# Dataset Card for "billqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lazaroq11/billqa
|
[
"region:us"
] |
2023-10-12T23:48:36+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "additional_info", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 240602641, "num_examples": 9846}], "download_size": 9341153, "dataset_size": 240602641}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T23:58:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "billqa"
More Information needed
|
[
"# Dataset Card for \"billqa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"billqa\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"billqa\"\n\nMore Information needed"
] |
18884b050930adb365cc736bf869e4695ad142cf
|
# GPT-4V Eval samples
This is a hand-curated set of images from the web, together with questions I asked GPT-4V, to understand its abilities and limits.
I mainly focus on the localization, OCR, and understanding abilities of GPT-4V's vision module, so the language side is skipped since we have already seen it in GPT-4. As long as GPT-4V can extract the required information as text, the rest of the LLM shouldn't have any issue answering the rest of the questions.
The number of examples is still pretty small and will keep growing until I am satisfied with the size, so please check back from time to time.
Note: the dataset viewer had a bug that caused the displayed images to differ from the actual dataset (due to frequent updates). Please load the dataset and save it to a local path for best accuracy.
## How to use:
```
import json
from datasets import load_dataset
dataset = load_dataset('theblackcat102/gpt-4v-eval-samples')['test']
print(dataset[0]['image'])
print(json.loads(dataset[0]['conversations']))
```
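Since the note above recommends working from a local copy, here is a minimal sketch of saving and reloading one (the target path is an arbitrary choice, not part of the dataset):
```python
from datasets import load_dataset, load_from_disk

# Download once and keep a local snapshot of the test split.
dataset = load_dataset('theblackcat102/gpt-4v-eval-samples')['test']
dataset.save_to_disk('./gpt-4v-eval-samples-local')

# Later runs can reload the snapshot instead of pulling from the Hub again.
dataset = load_from_disk('./gpt-4v-eval-samples-local')
```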
## Contributions
Please check out my GitHub repo for more details: [theblackcat102/gpt-4v-samples](https://github.com/theblackcat102/gpt-4v-samples)
## Citation
```
@article{yang2023dawn,
title={The Dawn of LMMs: Preliminary Explorations with GPT-4V (ision)},
author={Yang, Zhengyuan and Li, Linjie and Lin, Kevin and Wang, Jianfeng and Lin, Chung-Ching and Liu, Zicheng and Wang, Lijuan},
journal={arXiv preprint arXiv:2309.17421},
year={2023}
}
```
|
theblackcat102/gpt-4v-eval-samples
|
[
"region:us"
] |
2023-10-12T23:51:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conversations", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 334178840.35, "num_examples": 1682}], "download_size": 324453952, "dataset_size": 334178840.35}}
|
2023-11-05T12:20:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# GPT-4V Eval samples
This is a hand-curated set of images from the web, together with questions I asked GPT-4V, to understand its abilities and limits.
I mainly focus on the localization, OCR, and understanding abilities of GPT-4V's vision module, so the language side is skipped since we have already seen it in GPT-4. As long as GPT-4V can extract the required information as text, the rest of the LLM shouldn't have any issue answering the rest of the questions.
The number of examples is still pretty small and will keep growing until I am satisfied with the size, so please check back from time to time.
Note: the dataset viewer had a bug that caused the displayed images to differ from the actual dataset (due to frequent updates). Please load the dataset and save it to a local path for best accuracy.
## How to use:
## Contributions
Please check out my GitHub repo for more details: theblackcat102/gpt-4v-samples
|
[
"# GPT-4V Eval samples\n\nThis is a hand curated images from the web and questions asked by myself to GPT-4V to understand its ability and limits. \n\nI am mainly focus in localization, OCR ability and understanding of GPT-4V vision module. So the language part is skipped as we already seen in GPT-4. As long as GPT-4V can extract the required information in text, the rest of the LLM shouldn't have any issue answering the rest of the questions. \n\nThe numbers of examples is still pretty tiny and will continue to increase further in the future until I am satisfy with the size. So please check back from time to time.\n\nNote : the dataset viewer had a bug which cause the image displayed differ from the actual dataset (Due to frequent update). Please load the dataset and save it on your local path for best accuracy.",
"## How to use:",
"## Contributions\n\nPlease checkout my github repo for more details : theblackcat102/gpt-4v-samples"
] |
[
"TAGS\n#region-us \n",
"# GPT-4V Eval samples\n\nThis is a hand curated images from the web and questions asked by myself to GPT-4V to understand its ability and limits. \n\nI am mainly focus in localization, OCR ability and understanding of GPT-4V vision module. So the language part is skipped as we already seen in GPT-4. As long as GPT-4V can extract the required information in text, the rest of the LLM shouldn't have any issue answering the rest of the questions. \n\nThe numbers of examples is still pretty tiny and will continue to increase further in the future until I am satisfy with the size. So please check back from time to time.\n\nNote : the dataset viewer had a bug which cause the image displayed differ from the actual dataset (Due to frequent update). Please load the dataset and save it on your local path for best accuracy.",
"## How to use:",
"## Contributions\n\nPlease checkout my github repo for more details : theblackcat102/gpt-4v-samples"
] |
[
6,
195,
5,
27
] |
[
"passage: TAGS\n#region-us \n# GPT-4V Eval samples\n\nThis is a hand curated images from the web and questions asked by myself to GPT-4V to understand its ability and limits. \n\nI am mainly focus in localization, OCR ability and understanding of GPT-4V vision module. So the language part is skipped as we already seen in GPT-4. As long as GPT-4V can extract the required information in text, the rest of the LLM shouldn't have any issue answering the rest of the questions. \n\nThe numbers of examples is still pretty tiny and will continue to increase further in the future until I am satisfy with the size. So please check back from time to time.\n\nNote : the dataset viewer had a bug which cause the image displayed differ from the actual dataset (Due to frequent update). Please load the dataset and save it on your local path for best accuracy.## How to use:## Contributions\n\nPlease checkout my github repo for more details : theblackcat102/gpt-4v-samples"
] |
22a339f961c40b07261bea44ec10849c7440b75f
|
## Dataset Description
- **Homepage:** https://github.com/kaistAI/Prometheus
- **Repository:** https://github.com/kaistAI/Prometheus
- **Paper:** https://arxiv.org/abs/2310.08491
- **Point of Contact:** [email protected]
# Dataset Card
### Dataset Summary
The Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.\\

Recently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.\\
In our paper, we found that by (1) fine-tuning on feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation capabilities into open-source LMs.
The Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, 100K responses & feedback (20K for each score in the range 1-5).\\
Experimental results show that Prometheus (a LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting.
### Languages
English
## Dataset Structure
* instruction: The input that is given to the evaluator LM. It includes the instruction & response to evaluate, the reference answer, the score rubric.
* output: The output that the evaluator LM should generate. It includes the feedback and the score decision, separated by the phrase ```[RESULT]```.
* orig```_```instruction: The instruction to be evaluated. Note that this differs from the instruction field above, which includes all the components.
* orig```_```response: The response to be evaluated.
* orig```_```reference```_```answer: A reference answer to the orig```_```instruction.
* orig```_```criteria: The score criteria used to evaluate the orig```_```response.
* orig```_```score1```_```description: A description of when to give a score of 1 to the orig```_```response.
* orig```_```score2```_```description: A description of when to give a score of 2 to the orig```_```response.
* orig```_```score3```_```description: A description of when to give a score of 3 to the orig```_```response.
* orig```_```score4```_```description: A description of when to give a score of 4 to the orig```_```response.
* orig```_```score5```_```description: A description of when to give a score of 5 to the orig```_```response.
* orig```_```feedback: Feedback that critiques the orig```_```response.
* orig```_```score: An integer between 1 and 5 given to the orig```_```response.
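A minimal sketch of loading the data and inspecting these fields (assuming the Hugging Face dataset ID `kaist-ai/Feedback-Collection` and that the column names follow the list above with plain underscores, e.g. `orig_score`):
```python
from datasets import load_dataset

dataset = load_dataset("kaist-ai/Feedback-Collection", split="train")
example = dataset[0]

print(example["instruction"][:300])  # full evaluator prompt (instruction, response, reference, rubric)
print(example["output"])             # feedback followed by "[RESULT] <score>"
print(example["orig_score"])         # score between 1 and 5
```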
In our paper, we trained the input using the following prompt format (already processed in the 'instruction'):
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
The following prompt format (already processed in the 'output') was used to train the evaluator LM:
```
{orig_feedback}
[RESULT] {orig_score}
```
Then during evaluation, we parsed the prediction after the phrase ```[RESULT]```.
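A minimal sketch of that parsing step (the helper name is ours, not taken from the paper's code release):
```python
def parse_feedback_and_score(output: str):
    """Split an evaluator prediction into (feedback, score) at the [RESULT] phrase."""
    feedback, sep, score_str = output.rpartition("[RESULT]")
    if not sep:
        return output.strip(), None  # no [RESULT] phrase in the prediction
    try:
        score = int(score_str.strip())
    except ValueError:
        score = None  # malformed score after [RESULT]
    return feedback.strip(), score

feedback, score = parse_feedback_and_score(
    "The response follows the rubric only partially. [RESULT] 3"
)
print(score)  # 3
```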
### Data Splits
| name | train |
|-------------------|------:|
|Feedback-Collection|99,952|
### Citation Information
If you find the following model helpful, please consider citing our paper!
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kaist-ai/Feedback-Collection
|
[
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"arxiv:2310.08491",
"region:us"
] |
2023-10-13T00:17:17+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "new_feedback_collection.json"}]}]}
|
2023-10-14T13:53:22+00:00
|
[
"2310.08491"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-2310.08491 #region-us
|
Dataset Description
-------------------
* Homepage:URL
* Repository:URL
* Paper:URL
* Point of Contact:seungone@URL
Dataset Card
============
### Dataset Summary
The Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.\
!plot
Recently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.\
In our paper, we found that by (1) fine-tuning feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation into open-source LMs.
The Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, 100K responses & feedback (20K for each score in the range 1-5).\
Experimental results show that Prometheus (a LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting.
### Languages
English
Dataset Structure
-----------------
* instruction: The input that is given to the evaluator LM. It includes the instruction & response to evaluate, the reference answer, the score rubric.
* output: The output that the evaluator LM should generate. It includes the feedback and score decision divided by a phrase .
* originstruction: The instruction to be evaluated. Note that this differs with the instruction that includes all the components.
* origresponse: The response to be evaluated.
* origreferenceanswer: A reference answer to the originstruction.
* origcriteria: The score criteria used to evaluate the orig response.
* origscore1description: A description of when to give a score of 1 to the origresponse.
* origscore2description: A description of when to give a score of 2 to the origresponse.
* origscore3description: A description of when to give a score of 3 to the origresponse.
* origscore4description: A description of when to give a score of 4 to the origresponse.
* origscore5description: A description of when to give a score of 5 to the origresponse.
* origfeedback: A feedback that critiques the origresponse.
* origscore: An integer between 1 and 5 given to the origresponse.
In our paper, we trained the input using the following prompt format (already processed in the 'instruction'):
The following prompt format (already processed in the 'output') was used to train the evaluator LM:
Then during evaluation, we parsed the prediction after the phrase .
### Data Splits
If you find the following model helpful, please consider citing our paper!
|
[
"### Dataset Summary\n\n\nThe Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.\\\n\n\n!plot\n\n\nRecently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.\\\nIn our paper, we found that by (1) fine-tuning feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation into open-source LMs.\n\n\nThe Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, 100K responses & feedback (20K for each score in the range 1-5).\\\nExperimental results show that Prometheus (a LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\n* instruction: The input that is given to the evaluator LM. It includes the instruction & response to evaluate, the reference answer, the score rubric.\n* output: The output that the evaluator LM should generate. It includes the feedback and score decision divided by a phrase .\n* originstruction: The instruction to be evaluated. Note that this differs with the instruction that includes all the components.\n* origresponse: The response to be evaluated.\n* origreferenceanswer: A reference answer to the originstruction.\n* origcriteria: The score criteria used to evaluate the orig response.\n* origscore1description: A description of when to give a score of 1 to the origresponse.\n* origscore2description: A description of when to give a score of 2 to the origresponse.\n* origscore3description: A description of when to give a score of 3 to the origresponse.\n* origscore4description: A description of when to give a score of 4 to the origresponse.\n* origscore5description: A description of when to give a score of 5 to the origresponse.\n* origfeedback: A feedback that critiques the origresponse.\n* origscore: An integer between 1 and 5 given to the origresponse.\n\n\nIn our paper, we trained the input using the following prompt format (already processed in the 'instruction'):\n\n\nThe following prompt format (already processed in the 'output') was used to train the evaluator LM:\n\n\nThen during evaluation, we parsed the prediction after the phrase .",
"### Data Splits\n\n\n\nIf you find the following model helpful, please consider citing our paper!"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-2310.08491 #region-us \n",
"### Dataset Summary\n\n\nThe Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.\\\n\n\n!plot\n\n\nRecently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.\\\nIn our paper, we found that by (1) fine-tuning feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation into open-source LMs.\n\n\nThe Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, 100K responses & feedback (20K for each score in the range 1-5).\\\nExperimental results show that Prometheus (a LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\n* instruction: The input that is given to the evaluator LM. It includes the instruction & response to evaluate, the reference answer, the score rubric.\n* output: The output that the evaluator LM should generate. It includes the feedback and score decision divided by a phrase .\n* originstruction: The instruction to be evaluated. Note that this differs with the instruction that includes all the components.\n* origresponse: The response to be evaluated.\n* origreferenceanswer: A reference answer to the originstruction.\n* origcriteria: The score criteria used to evaluate the orig response.\n* origscore1description: A description of when to give a score of 1 to the origresponse.\n* origscore2description: A description of when to give a score of 2 to the origresponse.\n* origscore3description: A description of when to give a score of 3 to the origresponse.\n* origscore4description: A description of when to give a score of 4 to the origresponse.\n* origscore5description: A description of when to give a score of 5 to the origresponse.\n* origfeedback: A feedback that critiques the origresponse.\n* origscore: An integer between 1 and 5 given to the origresponse.\n\n\nIn our paper, we trained the input using the following prompt format (already processed in the 'instruction'):\n\n\nThe following prompt format (already processed in the 'output') was used to train the evaluator LM:\n\n\nThen during evaluation, we parsed the prediction after the phrase .",
"### Data Splits\n\n\n\nIf you find the following model helpful, please consider citing our paper!"
] |
[
62,
244,
383,
20
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-2310.08491 #region-us \n### Dataset Summary\n\n\nThe Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.\\\n\n\n!plot\n\n\nRecently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.\\\nIn our paper, we found that by (1) fine-tuning feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation into open-source LMs.\n\n\nThe Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, 100K responses & feedback (20K for each score in the range 1-5).\\\nExperimental results show that Prometheus (a LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting."
] |
b24df66fb340e6bb7592b30bd5bc80f103ea8ae0
|
# Dataset Card for "biology_dataset_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_0
|
[
"region:us"
] |
2023-10-13T01:13:44+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T20:21:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
90ec28e55920afec8c3a1c84c83122470cc43bfc
|
# Dataset Card for "biology_dataset_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_1
|
[
"region:us"
] |
2023-10-13T01:13:53+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T20:24:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
06c12f6191448ffdb96f610e418a5773e5dd8886
|
# Dataset Card for "biology_dataset_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_2
|
[
"region:us"
] |
2023-10-13T01:14:02+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:14:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
bfbdb952bf9659e6c217859d6f942dcd2b599c0e
|
# Dataset Card for "biology_dataset_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_3
|
[
"region:us"
] |
2023-10-13T01:14:12+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:14:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
17660cb86b07cd5a12156715435323033356412c
|
# Dataset Card for "biology_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_4
|
[
"region:us"
] |
2023-10-13T01:14:21+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:14:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
106742029f10eefc1ac89b32e97bd7febe5c99cf
|
# Dataset Card for "biology_dataset_standardized_cluster_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_5
|
[
"region:us"
] |
2023-10-13T01:14:30+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:14:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_5"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_5\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_5\"\n\nMore Information needed"
] |
a949d5d91c97d47dd33318881b751ee4b9cc868f
|
# Dataset Card for "biology_dataset_standardized_cluster_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_6
|
[
"region:us"
] |
2023-10-13T01:14:40+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:14:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_6"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_6\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_6\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_6\"\n\nMore Information needed"
] |
989c50940e7784cb61825030a34a63c638c67455
|
# Dataset Card for "biology_dataset_standardized_cluster_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_7
|
[
"region:us"
] |
2023-10-13T01:14:49+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:14:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_7"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_7\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_7\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_7\"\n\nMore Information needed"
] |
2e1c5aa7fde8ea6c81df9d32ac65f2bcd0666082
|
# Dataset Card for "biology_dataset_standardized_cluster_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_8
|
[
"region:us"
] |
2023-10-13T01:14:58+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_8"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_8\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_8\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_8\"\n\nMore Information needed"
] |
abf5fb12e91a0b239be1bf056068d13743448a38
|
# Dataset Card for "biology_dataset_standardized_cluster_9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_9
|
[
"region:us"
] |
2023-10-13T01:15:07+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_9"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_9\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_9\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_9\"\n\nMore Information needed"
] |
e40f2262bb05e3fcf723a7ced435135647c8fd5b
|
# Dataset Card for "biology_dataset_standardized_cluster_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_10
|
[
"region:us"
] |
2023-10-13T01:15:17+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_10"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_10\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_10\"\n\nMore Information needed"
] |
c4f865cb0e0ae85acf489672c2cd94a80d378372
|
# Dataset Card for "biology_dataset_standardized_cluster_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_11
|
[
"region:us"
] |
2023-10-13T01:15:26+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_11"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_11\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_11\"\n\nMore Information needed"
] |
6c8fdad49f906f3c6ddc294840e1febd59251343
|
# Dataset Card for "biology_dataset_standardized_cluster_12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_12
|
[
"region:us"
] |
2023-10-13T01:15:35+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_12"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_12\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_12\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_12\"\n\nMore Information needed"
] |
3191d731d3bbd70c5f2e5bb63660444fc9366b1f
|
# Dataset Card for "biology_dataset_standardized_cluster_13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_13
|
[
"region:us"
] |
2023-10-13T01:15:45+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_13"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_13\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_13\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_13\"\n\nMore Information needed"
] |
8e24fbbdb63c81e360d86635f30b91d413a5d9d6
|
# Dataset Card for "biology_dataset_standardized_cluster_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_14
|
[
"region:us"
] |
2023-10-13T01:15:54+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:15:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_14"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_14\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_14\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_14\"\n\nMore Information needed"
] |
9988c829cd7666fa7fe216f3ecd787362b6f636e
|
# Dataset Card for "biology_dataset_standardized_cluster_15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_15
|
[
"region:us"
] |
2023-10-13T01:16:04+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:16:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_15"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_15\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_15\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_15\"\n\nMore Information needed"
] |
31db51be9cf021b46d4ae9c1978c59ac15c38b60
|
# Dataset Card for "biology_dataset_standardized_cluster_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_16
|
[
"region:us"
] |
2023-10-13T01:16:13+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:16:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_16"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_16\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_16\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_16\"\n\nMore Information needed"
] |
4ec59ff2a2d7b242d9346b619cd5d4da2b4feef2
|
# Dataset Card for "biology_dataset_standardized_cluster_17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_17
|
[
"region:us"
] |
2023-10-13T01:16:23+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:16:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_17"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_17\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_17\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_17\"\n\nMore Information needed"
] |
17179cf6dfc0866040413fa196c46c2c261540a3
|
# Dataset Card for "biology_dataset_standardized_cluster_18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pharaouk/biology_dataset_standardized_cluster_18
|
[
"region:us"
] |
2023-10-13T01:16:32+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:16:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_18"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_18\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_18\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_18\"\n\nMore Information needed"
] |
841fb05bd08b1f2852682592efa711a06b9b2480
|
# Dataset Card for Evaluation run of ziqingyang/chinese-llama-2-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ziqingyang/chinese-llama-2-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [ziqingyang/chinese-llama-2-13b](https://huggingface.co/ziqingyang/chinese-llama-2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ziqingyang__chinese-llama-2-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T02:16:28.624292](https://huggingface.co/datasets/open-llm-leaderboard/details_ziqingyang__chinese-llama-2-13b/blob/main/results_2023-10-13T02-16-28.624292.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.37741191275167785,
"em_stderr": 0.004964183842623747,
"f1": 0.42850880872483355,
"f1_stderr": 0.004835429715953239,
"acc": 0.39816494163081856,
"acc_stderr": 0.008707972830386747
},
"harness|drop|3": {
"em": 0.37741191275167785,
"em_stderr": 0.004964183842623747,
"f1": 0.42850880872483355,
"f1_stderr": 0.004835429715953239
},
"harness|gsm8k|5": {
"acc": 0.039423805913570885,
"acc_stderr": 0.005360280030342443
},
"harness|winogrande|5": {
"acc": 0.7569060773480663,
"acc_stderr": 0.01205566563043105
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_ziqingyang__chinese-llama-2-13b
|
[
"region:us"
] |
2023-10-13T01:16:32+00:00
|
{"pretty_name": "Evaluation run of ziqingyang/chinese-llama-2-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ziqingyang/chinese-llama-2-13b](https://huggingface.co/ziqingyang/chinese-llama-2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ziqingyang__chinese-llama-2-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-13T02:16:28.624292](https://huggingface.co/datasets/open-llm-leaderboard/details_ziqingyang__chinese-llama-2-13b/blob/main/results_2023-10-13T02-16-28.624292.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.37741191275167785,\n \"em_stderr\": 0.004964183842623747,\n \"f1\": 0.42850880872483355,\n \"f1_stderr\": 0.004835429715953239,\n \"acc\": 0.39816494163081856,\n \"acc_stderr\": 0.008707972830386747\n },\n \"harness|drop|3\": {\n \"em\": 0.37741191275167785,\n \"em_stderr\": 0.004964183842623747,\n \"f1\": 0.42850880872483355,\n \"f1_stderr\": 0.004835429715953239\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.039423805913570885,\n \"acc_stderr\": 0.005360280030342443\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7569060773480663,\n \"acc_stderr\": 0.01205566563043105\n }\n}\n```", "repo_url": "https://huggingface.co/ziqingyang/chinese-llama-2-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_13T02_16_28.624292", "path": ["**/details_harness|drop|3_2023-10-13T02-16-28.624292.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-13T02-16-28.624292.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_13T02_16_28.624292", "path": ["**/details_harness|gsm8k|5_2023-10-13T02-16-28.624292.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-13T02-16-28.624292.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_13T02_16_28.624292", "path": ["**/details_harness|winogrande|5_2023-10-13T02-16-28.624292.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-13T02-16-28.624292.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_13T02_16_28.624292", "path": ["results_2023-10-13T02-16-28.624292.parquet"]}, {"split": "latest", "path": ["results_2023-10-13T02-16-28.624292.parquet"]}]}]}
|
2023-10-13T01:16:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of ziqingyang/chinese-llama-2-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model ziqingyang/chinese-llama-2-13b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-13T02:16:28.624292 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of ziqingyang/chinese-llama-2-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ziqingyang/chinese-llama-2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-13T02:16:28.624292(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ziqingyang/chinese-llama-2-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ziqingyang/chinese-llama-2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-13T02:16:28.624292(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ziqingyang/chinese-llama-2-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ziqingyang/chinese-llama-2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-13T02:16:28.624292(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
682075020c9d8692008044fdee70d61c425a76a9
|
# Dataset Card for "biology_dataset_standardized_cluster_19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
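Since the card itself gives no usage details, below is a minimal, unverified sketch of how this repository might be loaded; the repo id, config name (`default`), and split name (`train`) are taken from the metadata recorded later in this entry, which also reports 0 examples.

```python
# Hedged sketch: load the dataset described by this entry's metadata.
# The repo id and split name come from the recorded dataset_info; the split is
# reported to contain 0 examples, so the loaded dataset is expected to be empty.
from datasets import load_dataset

ds = load_dataset("pharaouk/biology_dataset_standardized_cluster_19", split="train")
print(ds.num_rows)  # expected: 0, per the recorded dataset_info
```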
|
pharaouk/biology_dataset_standardized_cluster_19
|
[
"region:us"
] |
2023-10-13T01:16:41+00:00
|
{"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 324, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-13T01:16:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biology_dataset_standardized_cluster_19"
More Information needed
|
[
"# Dataset Card for \"biology_dataset_standardized_cluster_19\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biology_dataset_standardized_cluster_19\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biology_dataset_standardized_cluster_19\"\n\nMore Information needed"
] |