sha (string, 40–40) | text (string, 1–13.4M) | id (string, 2–117) | tags (list, 1–7.91k) | created_at (string, 25–25) | metadata (string, 2–875k) | last_modified (string, 25–25) | arxiv (list, 0–25) | languages (list, 0–7.91k) | tags_str (string, 17–159k) | text_str (string, 1–447k) | text_lists (list, 0–352) | processed_texts (list, 1–353) | tokens_length (list, 1–353) | input_texts (list, 1–40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
728347616c9dcc1522c1ed77a23918ead993cafe |
# Dataset of laffey (Azur Lane)
This is the dataset of laffey (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)) and [LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 516 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 581 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 516 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 516 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 227 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 581 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 581 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
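A minimal sketch (not part of the original card) of fetching one of the packages above with `huggingface_hub`; it assumes the zip files sit at the root of the dataset repository, as the table suggests:
```python
from huggingface_hub import hf_hub_download
import zipfile

# Download one of the packages listed above from the dataset repository
# and unpack it locally (the target directory name is illustrative).
path = hf_hub_download(
    repo_id="AppleHarem/laffey_azurlane",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("laffey_384x512")
```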
| AppleHarem/laffey_azurlane | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-24T15:24:24+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-24T15:24:37+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of laffey (Azur Lane)
=============================
This is the dataset of laffey (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
695fcc8905f6fe28b649be0d912ae4c24efb6022 | # Dataset Card for "MF_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/MF_reg | [
"region:us"
]
| 2023-11-24T15:27:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45686917, "num_examples": 26225}, {"name": "valid", "num_bytes": 5045807, "num_examples": 2904}, {"name": "test", "num_bytes": 6054931, "num_examples": 3350}], "download_size": 10849452, "dataset_size": 56787655}} | 2023-11-24T15:27:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MF_reg"
More Information needed | [
"# Dataset Card for \"MF_reg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MF_reg\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MF_reg\"\n\nMore Information needed"
]
|
971c5549c9dea1f907768fb73dd5121e2bbc82e1 |
# MaralGPT dataset v0.1
This is an Alpaca-style dataset, but the data is now formatted in the style used by the _zephyr_ model. | MaralGPT/maralgpt-dataset-v0-1 | [
"license:mit",
"region:us"
]
| 2023-11-24T15:30:28+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50332815, "num_examples": 35117}], "download_size": 22605931, "dataset_size": 50332815}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-24T15:33:30+00:00 | []
| []
| TAGS
#license-mit #region-us
|
# MaralGPT dataset v0.1
This is an alpaca-styled dataset, but our data format is now like the model _zephyr_. | [
"# MaralGPT dataset v0.1\n\nThis is an alpaca-styled dataset, but our data format is now like the model _zephyr_."
]
| [
"TAGS\n#license-mit #region-us \n",
"# MaralGPT dataset v0.1\n\nThis is an alpaca-styled dataset, but our data format is now like the model _zephyr_."
]
| [
11,
35
]
| [
"passage: TAGS\n#license-mit #region-us \n# MaralGPT dataset v0.1\n\nThis is an alpaca-styled dataset, but our data format is now like the model _zephyr_."
]
|
23ad3109bac95849a305b37d99f73780f6b7541c |
# Dataset Card for CA-PT Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
  - [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-PT Parallel Corpus is a Catalan-Portuguese dataset of **9.892.953** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
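As an illustration of the metric mentioned above, a hedged sketch of computing BLEU with the `sacrebleu` library (the sentences are placeholders, not taken from the corpus):
```python
import sacrebleu

# Placeholder system outputs and reference translations.
hypotheses = ["El gat dorm al sofà."]
references = [["El gat dorm al sofà."]]

# corpus_bleu expects one list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU on a 0-100 scale
```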
### Languages
The texts in the dataset are in Catalan and Portuguese.
## Dataset Structure
Two separate txt files are provided, with the sentences sorted in the same order:
- ca-pt_2023_09_01_full.ca: contains 9.892.953 Catalan sentences.
- ca-pt_2023_09_01_full.pt: contains 9.892.953 Portuguese sentences.
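A minimal sketch of pairing the two files line by line (the file names follow the list above; local paths are an assumption):
```python
from itertools import islice

# The two files are aligned by line index, so zipping them yields sentence pairs.
with open("ca-pt_2023_09_01_full.ca", encoding="utf-8") as f_ca, \
     open("ca-pt_2023_09_01_full.pt", encoding="utf-8") as f_pt:
    for ca, pt in islice(zip(f_ca, f_pt), 3):
        print(ca.rstrip("\n"), "|||", pt.rstrip("\n"))
```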
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:-------|-------:|
| CCMatrix v1 | 3.765.459 |
| WikiMatrix | 317.649 |
| GNOME | 1.752 |
| KDE4 | 117.828 |
| QED | 43.736 |
| TED2020 v1 | 41.461 |
| OpenSubtitles | 235.604 |
| GlobalVoices | 3.430 |
| Tatoeba | 723 |
| Europarl | 1.631.989 |
| **Total** | **6.159.631** |
All corpora except Europarl were collected from [Opus](https://opus.nlpl.eu/).
The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).
The remaining **3.733.322** sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **9.892.953** parallel sentences.
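A minimal sketch of this kind of similarity filtering, assuming the `sentence-transformers` package is installed; the candidate pairs and pipeline details are illustrative, not the project's actual code:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

# Illustrative candidate (Catalan, Portuguese) pairs.
pairs = [
    ("Bon dia a tothom.", "Bom dia a todos."),
    ("Això no té res a veure.", "O comboio chega às oito."),
]

ca_emb = model.encode([ca for ca, _ in pairs], normalize_embeddings=True)
pt_emb = model.encode([pt for _, pt in pairs], normalize_embeddings=True)

# Keep only pairs whose cosine similarity is at least 0.75.
similarities = util.cos_sim(ca_emb, pt_emb).diagonal()
kept = [pair for pair, sim in zip(pairs, similarities) if sim >= 0.75]
print(kept)
```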
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains, such as biomedical or legal, would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/). | projecte-aina/CA-PT_Parallel_Corpus | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:pt",
"language:multilingual",
"region:us"
]
| 2023-11-24T15:40:36+00:00 | {"language": ["ca", "pt", "multilingual"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-PT Parallel Corpus"} | 2024-01-17T13:49:29+00:00 | []
| [
"ca",
"pt",
"multilingual"
]
| TAGS
#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Portuguese #language-multilingual #region-us
| Dataset Card for CA-PT Parallel Corpus
======================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Splits
* Dataset Creation
+ Source Data
+ Data preparation
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Author
+ Contact Information
+ Copyright
+ Licensing information
+ Funding
Dataset Description
-------------------
### Dataset Summary
The CA-PT Parallel Corpus is a Catalan-Portuguese dataset of 9.892.953 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Portuguese.
Dataset Structure
-----------------
Two separated txt files are provided with the sentences sorted in the same order:
* ca-pt\_2023\_09\_01\_full.ca: contains 9.892.953 Catalan sentences.
* ca-pt\_2023\_09\_01\_full.pt: contains 9.892.953 Portuguese sentences.
### Data Splits
The dataset contains a single split: 'train'.
Dataset Creation
----------------
### Source Data
The dataset is a combination of the following authentic datasets:
All corpora except Europarl were collected from Opus.
The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by SoftCatalà.
The remaining 3.733.322 sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora available on Opus and translated into Catalan using the PlanTL es-ca model.
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using LaBSE.
The filtered datasets are then concatenated to form a final corpus of 9.892.953 parallel sentences.
### Personal and Sensitive Information
No anonymisation process was performed.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
Additional Information
----------------------
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL
| [
"### Dataset Summary\n\n\nThe CA-PT Parallel Corpus is a Catalan-Portuguese dataset of 9.892.953 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and Portuguese.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-pt\\_2023\\_09\\_01\\_full.ca: contains 9.892.953 Catalan sentences.\n* ca-pt\\_2023\\_09\\_01\\_full.pt: contains 9.892.953 Portuguese sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora except Europarl were collected from Opus.\nThe Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by SoftCatalà.\n\n\nThe remaining 3.733.322 sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora available on Opus and translated into Catalan using the PlanTL es-ca model.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.892.953 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
]
| [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Portuguese #language-multilingual #region-us \n",
"### Dataset Summary\n\n\nThe CA-PT Parallel Corpus is a Catalan-Portuguese dataset of 9.892.953 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and Portuguese.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-pt\\_2023\\_09\\_01\\_full.ca: contains 9.892.953 Catalan sentences.\n* ca-pt\\_2023\\_09\\_01\\_full.pt: contains 9.892.953 Portuguese sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora except Europarl were collected from Opus.\nThe Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by SoftCatalà.\n\n\nThe remaining 3.733.322 sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora available on Opus and translated into Catalan using the PlanTL es-ca model.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.892.953 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
]
| [
59,
52,
45,
109,
25,
105,
78,
26,
32,
61,
50,
21,
18,
18,
24,
27
]
| [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Portuguese #language-multilingual #region-us \n### Dataset Summary\n\n\nThe CA-PT Parallel Corpus is a Catalan-Portuguese dataset of 9.892.953 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.### Languages\n\n\nThe texts in the dataset are in Catalan and Portuguese.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-pt\\_2023\\_09\\_01\\_full.ca: contains 9.892.953 Catalan sentences.\n* ca-pt\\_2023\\_09\\_01\\_full.pt: contains 9.892.953 Portuguese sentences.### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora except Europarl were collected from Opus.\nThe Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by SoftCatalà.\n\n\nThe remaining 3.733.322 sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora available on Opus and translated into Catalan using the PlanTL es-ca model.### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.892.953 parallel sentences.### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------"
]
|
b34bc6c5dccf56f5886c6654c9b224de8a3f032e | # Dataset Card for "vsums_enq_batch_2_uniform_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Xapien/vsums_enq_batch_2_uniform_sample | [
"region:us"
]
| 2023-11-24T15:56:41+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "subject_A", "dtype": "string"}, {"name": "entity_sourcetext_A", "dtype": "string"}, {"name": "entity_fingerprint_A", "dtype": "string"}, {"name": "DRE_A", "dtype": "string"}, {"name": "embedding_A", "dtype": "string"}, {"name": "new_entity_description_A", "dtype": "string"}, {"name": "new_embedding_A", "dtype": "string"}, {"name": "Label_A", "dtype": "int64"}, {"name": "subject_B", "dtype": "string"}, {"name": "entity_sourcetext_B", "dtype": "string"}, {"name": "entity_fingerprint_B", "dtype": "string"}, {"name": "DRE_B", "dtype": "string"}, {"name": "embedding_B", "dtype": "string"}, {"name": "new_entity_description_B", "dtype": "string"}, {"name": "new_embedding_B", "dtype": "string"}, {"name": "Label_B", "dtype": "int64"}, {"name": "new_similarity", "dtype": "float64"}, {"name": "old_similarity", "dtype": "float64"}, {"name": "same_persona", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 102114904, "num_examples": 1360}], "download_size": 8322975, "dataset_size": 102114904}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-24T15:56:44+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "vsums_enq_batch_2_uniform_sample"
More Information needed | [
"# Dataset Card for \"vsums_enq_batch_2_uniform_sample\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"vsums_enq_batch_2_uniform_sample\"\n\nMore Information needed"
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"vsums_enq_batch_2_uniform_sample\"\n\nMore Information needed"
]
|
04a6b62ea4e7a8e197818cf9ae6a3d8e64f432b0 |
# UA-GEC instruction tuning
This dataset contains prompts and expected outputs for the grammatical error
correction task in the Ukrainian language. It is based on the
[UA-GEC](https://github.com/grammarly/ua-gec) dataset; the original data is licensed under CC-BY-4.0.
This dataset contains 1,700 examples of fixing errors in long documents, and
~28,000 sentence-level examples.
The instructions ask the model to correct errors in the text. Sometimes the expected output is the corrected text as is; at other times it is prefixed with "Sure, here's the
corrected text". If the text doesn't contain any errors, sometimes the expected output is just the input text, and in other cases it is "This text
doesn't contain grammatical errors." The `template_id` field references the
specific template used for a sample; see the templates list below.
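A hedged example of loading the dataset and counting examples per template; the `template_id` column name follows the description above, but the exact schema and split name are assumptions:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("osyvokon/ua_gec_instruction_tuning", split="train")

# Tally how many examples use each template (assumes a `template_id` column).
print(Counter(ds["template_id"]).most_common(5))
```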
## Stats
Metric | Value
----------------------------------|-------
Number of document-level examples | 1,700
Number of sentence-level examples | 28,258
Number of input templates | 14
Number of output templates | 6
## Templates
Each template consists of three parts:
1. Instruction template.
2. Target template, positive case (there are grammatical errors in the text).
3. Target template, negative case (there are no grammatical errors in the text).
The following list contains `(template_id, instruction_template, target_neg,
target_pos)` tuples.
```
(0, "Виправ помилки у тексті.\n\n{src}", "Даний текст не містить помилок.", "{tgt}")
(1, "Перепиши наступний текст без помилок:\n\n# Текст\n{src}", "Даний текст не містить помилок.", "{tgt}")
(2, "Перепиши текст без помилок.\n\n{src}", "Даний текст не містить помилок.", "{tgt}")
(3, "Виправ помилки у тексті.\n\n{src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(4, "Перевір, будь ласка, правильність граматики у наступному тексті.\n\n{src}", "Даний текст не містить помилок.", "{tgt}")
(5, "Виправ граматичні помилки в наступному тексті: {src}", "Даний текст не містить помилок.", "{tgt}")
(6, "Виправ граматичні помилки в наступному тексті: {src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(7, "Перепиши текст без помилок.\n\n{src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(8, "Переглянь, будь ласка, наступний текст. Виправи усі граматичні неточності.\n\n{src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(9, "Виправ помилки у тексті.\n\n# Текст\n\n{src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(10, "Перевір, будь ласка, правильність граматики у наступному тексті.\n\n{src}", "{tgt}", "{tgt}")
(11, "Перепиши текст без помилок.\n\n{src}", "{tgt}", "{tgt}")
(12, "Перепиши наступний текст без помилок:\n\n# Текст\n{src}", "{tgt}", "{tgt}")
(13, "Виправ помилки у тексті.\n\n{src}", "{tgt}", "{tgt}")
(14, "Перепиши наступний текст без помилок:\n\n# Текст\n{src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(15, "Переглянь, будь ласка, наступний текст. Виправи усі граматичні неточності.\n\n{src}", "{tgt}", "{tgt}")
(16, "Переглянь, будь ласка, наступний текст. Виправи усі граматичні неточності.\n\n{src}", "Даний текст не містить помилок.", "{tgt}")
(17, "Перевір, будь ласка, правильність граматики у наступному тексті.\n\n{src}", "Даний текст не містить помилок.", "Звичайно. Ось текст з виправленими помилками: \n\n{tgt}")
(18, "Виправ помилки у тексті.\n\n# Текст\n\n{src}", "{tgt}", "{tgt}")
(19, "Виправ помилки у тексті.\n\n# Текст\n\n{src}", "Даний текст не містить помилок.", "{tgt}")
(20, "Виправ граматичні помилки в наступному тексті: {src}", "{tgt}", "{tgt}")
(21, "Виправ помилки у тексті.\n\n{src}", "Це речення написано без помилок.", "{tgt}")
(22, "Виправ граматичні помилки в наступному реченні: \"{src}\"", "Це речення написано без помилок.", "{tgt}")
(23, "Виправ граматичні помилки в наступному реченні: {src}", "Це речення написано без помилок.", "{tgt}")
(24, "Перепиши це речення без помилок:\n\n{src}", "{tgt}", "{tgt}")
(25, "Перепиши наступний текст без помилок:\n\n{src}", "Дане речення не містить помилок.", "{tgt}")
(26, "Перепиши наступний текст без помилок:\n\n{src}", "{tgt}", "{tgt}")
(27, "Перепиши це речення без помилок:\n\n{src}", "Дане речення не містить помилок.", "{tgt}")
(28, "Виправ граматичні помилки в наступному реченні: \"{src}\"", "{tgt}", "{tgt}")
(29, "Виправ помилки у тексті.\n\n{src}", "{tgt}", "{tgt}")
(30, "Виправ граматичні помилки в наступному реченні: {src}", "{tgt}", "{tgt}")
(31, "Перепиши текст без помилок.\n\n{src}", "{tgt}", "{tgt}")
(32, "Перепиши текст без помилок.\n\n{src}", "Дане речення не містить помилок.", "{tgt}")
(33, "Виправ граматичні помилки в наступному реченні: {src}", "Дане речення не містить помилок.", "{tgt}")
(34, "Виправ граматичні помилки в наступному реченні: \"{src}\"", "Дане речення не містить помилок.", "{tgt}")
(35, "Виправ помилки у тексті.\n\n{src}", "Дане речення не містить помилок.", "{tgt}")
(36, "Перепиши наступний текст без помилок:\n\n{src}", "Це речення написано без помилок.", "{tgt}")
(37, "Перепиши це речення без помилок:\n\n{src}", "Це речення написано без помилок.", "{tgt}")
(38, "Перепиши текст без помилок.\n\n{src}", "Це речення написано без помилок.", "{tgt}")
```
`{src}` is a placeholder for a source text (which may or may not contain grammatical errors).
`{tgt}` is a placeholder for the corrected text.
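A minimal sketch of rendering one of the templates above into a (prompt, target) training pair; the function and the example strings are illustrative:
```python
# (template_id, instruction_template, target_neg, target_pos) -- template 0 from the list above.
TEMPLATE = (
    0,
    "Виправ помилки у тексті.\n\n{src}",
    "Даний текст не містить помилок.",
    "{tgt}",
)

def render(src, tgt, has_errors):
    """Build the (prompt, target) pair for a single example."""
    _, instruction, target_neg, target_pos = TEMPLATE
    prompt = instruction.format(src=src)
    target = target_pos.format(tgt=tgt) if has_errors else target_neg
    return prompt, target

print(render("речення з помилкою", "Речення з помилкою.", True))
```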
| osyvokon/ua_gec_instruction_tuning | [
"size_categories:10K<n<100K",
"language:uk",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-24T16:14:18+00:00 | {"language": ["uk"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"]} | 2024-02-02T11:25:34+00:00 | []
| [
"uk"
]
| TAGS
#size_categories-10K<n<100K #language-Ukrainian #license-cc-by-4.0 #region-us
| UA-GEC instruction tuning
=========================
This dataset contains prompts and expected outputs for the grammatical error
correction task in the Ukrainian language. It is based on the
CC-BY-4.0-licensed UA-GEC dataset. The
license of the original data is CC-BY-4.0.
This dataset contains 1,700 examples of fixing errors in long documents, and
~28,000 sentence-level examples.
The instructions ask to correct errors in the text. Sometimes the model outputs
the corrected text as is. At other times, it will add "Sure, here's the
corrected text". If the text doesn't contain any errors, sometimes the model
will just output the input text, and in other cases it will write "This text
doesn't contain grammatical errors.". The 'template\_id' field references a
specific template used in a sample. See the templates list below.
Stats
-----
Templates
---------
Each template consists of three parts:
1. Instruction template.
2. Target template, positive case (there are grammatical errors in the text).
3. Target template, negative case (there are no grammatical errors in the text).
The following list contains '(template\_id, instruction\_template, target\_neg,
target\_pos)' tuples.
'{src}' is a placeholder for a source text (that may or may not contain
grammatical error).
'{tgt}' is a placeholder for the corrected text.
| []
| [
"TAGS\n#size_categories-10K<n<100K #language-Ukrainian #license-cc-by-4.0 #region-us \n"
]
| [
34
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-Ukrainian #license-cc-by-4.0 #region-us \n"
]
|
05876b3716c279ec9e5bffed2c11a601255003c8 | # Dataset Card for "mswc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | confit/mswc | [
"region:us"
]
| 2023-11-24T16:33:13+00:00 | {"configs": [{"config_name": "eng", "data_files": [{"split": "train", "path": "eng/train-*"}, {"split": "validation", "path": "eng/validation-*"}, {"split": "test", "path": "eng/test-*"}]}, {"config_name": "ind", "data_files": [{"split": "train", "path": "ind/train-*"}, {"split": "validation", "path": "ind/validation-*"}, {"split": "test", "path": "ind/test-*"}]}, {"config_name": "spa", "data_files": [{"split": "train", "path": "spa/train-*"}, {"split": "validation", "path": "spa/validation-*"}, {"split": "test", "path": "spa/test-*"}]}], "dataset_info": [{"config_name": "eng", "features": [{"name": "filename", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "aaron", "1": "abba", "2": "abel", "3": "abigail", "4": "abilene", "5": "abner", "6": "abraham", "7": "abrahams", "8": "abram", "9": "adam", "10": "agrippa", "11": "alexander", "12": "alexandria", "13": "ammon", "14": "amos", "15": "andrew", "16": "anna", "17": "antioch", "18": "antiochus", "19": "apollonia", "20": "arabia", "21": "aram", "22": "archelaus", "23": "ariel", "24": "artemis", "25": "asa", "26": "asher", "27": "ashur", "28": "asia", "29": "assemble", "30": "assyria", "31": "athens", "32": "augustus", "33": "babylon", "34": "babylonia", "35": "bani", "36": "barak", "37": "barnabas", "38": "bartholomew", "39": "baruch", "40": "bela", "41": "benjamin", "42": "berea", "43": "bernice", "44": "beth", "45": "bethany", "46": "bethel", "47": "bethesda", "48": "bethlehem", "49": "caesar", "50": "caesarea", "51": "cain", "52": "caleb", "53": "cana", "54": "canaan", "55": "carmel", "56": "castor", "57": "cesar", "58": "chios", "59": "christ", "60": "cilicia", "61": "claudia", "62": "claudius", "63": "clement", "64": "corinth", "65": "cornelius", "66": "crete", "67": "cyprus", "68": "cyrus", "69": "dalmatia", "70": "damascus", "71": "dan", "72": "daniel", "73": "darius", "74": "david", "75": "deborah", "76": "demetrius", "77": "diana", "78": "dinah", "79": "dionysius", "80": "drusilla", "81": "eden", "82": "egypt", "83": "elam", "84": "eli", "85": "elia", "86": "elias", "87": "eliezer", "88": "elijah", "89": "elim", "90": "elisabeth", "91": "elizabeth", "92": "elon", "93": "enoch", "94": "enos", "95": "ephesus", "96": "ephraim", "97": "esther", "98": "ethan", "99": "ethiopia", "100": "eunice", "101": "euphrates", "102": "eve", "103": "ezra", "104": "felix", "105": "gabriel", "106": "gad", "107": "gaius", "108": "galilee", "109": "gaza", "110": "gideon", "111": "gilead", "112": "goshen", "113": "greece", "114": "hadad", "115": "hades", "116": "hagar", "117": "ham", "118": "hannah", "119": "heber", "120": "hebrew", "121": "hebron", "122": "hermes", "123": "hermon", "124": "herod", "125": "hiram", "126": "hosanna", "127": "hush", "128": "immanuel", "129": "india", "130": "ira", "131": "isaac", "132": "isaiah", "133": "ishmael", "134": "israel", "135": "italy", "136": "jacob", "137": "james", "138": "jared", "139": "jason", "140": "jeremiah", "141": "jericho", "142": "jerusalem", "143": "jesse", "144": "jesus", "145": "jethro", "146": "jew", "147": "jezebel", "148": "joanna", "149": "job", "150": "joel", "151": "john", "152": "jonah", "153": "jonas", "154": "jonathan", "155": "jordan", "156": "joseph", "157": "joshua", "158": "josiah", "159": "judah", "160": "judas", "161": "jude", "162": "judith", "163": "julia", "164": "julius", "165": "justus", "166": "kos", "167": "laban", "168": "lazarus", "169": "leah", "170": "lebanon", "171": "levi", "172": "libya", "173": "linus", "174": 
"lois", "175": "lot", "176": "lucius", "177": "luke", "178": "lydia", "179": "macedonia", "180": "magdalene", "181": "magi", "182": "maker", "183": "malta", "184": "mariam", "185": "mark", "186": "martha", "187": "mary", "188": "matthew", "189": "melchizedek", "190": "mesopotamia", "191": "messiah", "192": "michael", "193": "midian", "194": "miriam", "195": "moab", "196": "mordecai", "197": "moses", "198": "myra", "199": "naomi", "200": "narcissus", "201": "nathanael", "202": "nazareth", "203": "nebuchadnezzar", "204": "nicolas", "205": "niger", "206": "nile", "207": "noah", "208": "paul", "209": "paulus", "210": "perez", "211": "persia", "212": "peter", "213": "pharaoh", "214": "philadelphia", "215": "philip", "216": "phoebe", "217": "phoenix", "218": "pontus", "219": "priscilla", "220": "publius", "221": "rachel", "222": "rebecca", "223": "rebekah", "224": "reuben", "225": "rhoda", "226": "rhodes", "227": "rome", "228": "rufus", "229": "salem", "230": "salim", "231": "salome", "232": "samson", "233": "samuel", "234": "sarah", "235": "sardis", "236": "satan", "237": "saul", "238": "seleucia", "239": "seth", "240": "sharon", "241": "shiloh", "242": "shout", "243": "shun", "244": "silas", "245": "simeon", "246": "simon", "247": "sinai", "248": "sion", "249": "smyrna", "250": "sodom", "251": "solomon", "252": "spain", "253": "stephen", "254": "susanna", "255": "syracuse", "256": "syria", "257": "tabitha", "258": "tabor", "259": "tamar", "260": "theophilus", "261": "thomas", "262": "thummim", "263": "tiberius", "264": "timothy", "265": "titus", "266": "tobias", "267": "tyre", "268": "urim", "269": "zeus", "270": "zion"}}}}], "splits": [{"name": "train", "num_bytes": 1215893, "num_examples": 26744}, {"name": "validation", "num_bytes": 159193, "num_examples": 3491}, {"name": "test", "num_bytes": 159142, "num_examples": 3491}], "download_size": 397181, "dataset_size": 1534228}, {"config_name": "ind", "features": [{"name": "filename", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "agustus", "1": "anak", "2": "asia", "3": "dan", "4": "kuat", "5": "pulau", "6": "raja", "7": "rumahnya", "8": "selama", "9": "selamat", "10": "selatan", "11": "tahan", "12": "teman", "13": "tuhan"}}}}], "splits": [{"name": "train", "num_bytes": 26080, "num_examples": 575}, {"name": "validation", "num_bytes": 3756, "num_examples": 83}, {"name": "test", "num_bytes": 3664, "num_examples": 81}], "download_size": 12806, "dataset_size": 33500}, {"config_name": "spa", "features": [{"name": "filename", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abel", "1": "abismo", "2": "ad\u00e1n", "3": "agar", "4": "alejandro", "5": "alejandr\u00eda", "6": "ana", "7": "andr\u00e9s", "8": "antioqu\u00eda", "9": "apolo", "10": "arabia", "11": "artemisa", "12": "asia", "13": "atenas", "14": "augusto", "15": "babilonia", "16": "benjam\u00edn", "17": "berenice", "18": "bordeando", "19": "capadocia", "20": "ca\u00edn", "21": "cesar", "22": "chipre", "23": "claudia", "24": "claudio", "25": "clemente", "26": "consejo", "27": "constructor", "28": "corinto", "29": "cornelio", "30": "creta", "31": "cristo", "32": "cuarto", "33": "damasco", "34": "dan", "35": "daniel", "36": "david", "37": "demetrio", "38": "dionisio", "39": "dirigente", "40": "efra\u00edn", "41": "egipto", "42": "elisabet", "43": "el\u00edas", "44": "eneas", "45": "eran", "46": "espa\u00f1a", "47": "esteban", "48": "etiop\u00eda", "49": "eva", "50": "evita", "51": "fara\u00f3n", "52": "felipe", "53": "filadelfia", 
"54": "filem\u00f3n", "55": "fil\u00f3logo", "56": "gabriel", "57": "gobernaba", "58": "grecia", "59": "hebreo", "60": "hermes", "61": "iliria", "62": "ira", "63": "isaac", "64": "israel", "65": "italia", "66": "jacob", "67": "jes\u00fas", "68": "joel", "69": "jord\u00e1n", "70": "jos\u00e9", "71": "juan", "72": "juana", "73": "judas", "74": "judea", "75": "jud\u00edo", "76": "julia", "77": "julio", "78": "justo", "79": "libia", "80": "lidia", "81": "lino", "82": "lucas", "83": "lucio", "84": "l\u00e1zaro", "85": "macedonia", "86": "maestros", "87": "magdalena", "88": "malta", "89": "marcos", "90": "marta", "91": "mar\u00eda", "92": "mateo", "93": "mat\u00edas", "94": "mesopotamia", "95": "mes\u00edas", "96": "miguel", "97": "narciso", "98": "negro", "99": "nicanor", "100": "nicol\u00e1s", "101": "oiga", "102": "olimpo", "103": "pablo", "104": "partos", "105": "paulo", "106": "pedro", "107": "peor", "108": "pesan", "109": "pirro", "110": "ponto", "111": "rebeca", "112": "re\u00fanen", "113": "roma", "114": "rufo", "115": "sabios", "116": "salem", "117": "salm\u00f3n", "118": "salom\u00e9", "119": "salom\u00f3n", "120": "samuel", "121": "santiago", "122": "sara", "123": "satan\u00e1s", "124": "segundo", "125": "sergio", "126": "set", "127": "se\u00f1or", "128": "sime\u00f3n", "129": "sim\u00f3n", "130": "siracusa", "131": "siria", "132": "situ\u00f3", "133": "sur", "134": "susana", "135": "tara", "136": "tercio", "137": "tiberio", "138": "tim\u00f3n", "139": "tiro", "140": "tito", "141": "tom\u00e1s", "142": "urbano", "143": "viva", "144": "zara", "145": "zeus"}}}}], "splits": [{"name": "train", "num_bytes": 431605, "num_examples": 9283}, {"name": "validation", "num_bytes": 57583, "num_examples": 1238}, {"name": "test", "num_bytes": 57583, "num_examples": 1238}], "download_size": 148219, "dataset_size": 546771}]} | 2023-11-24T17:05:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mswc"
More Information needed | [
"# Dataset Card for \"mswc\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mswc\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mswc\"\n\nMore Information needed"
]
|
e809543f14927ad49261b72cedcf85bdc4d4d271 |
# Dataset Card for textclass_descriptives_vectors
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla (`pip install argilla --upgrade`) and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("nataliaElv/textclass_descriptives_vectors")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` (`pip install datasets --upgrade`) and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("nataliaElv/textclass_descriptives_vectors")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Prompt | text | True | True |
| context | Context | text | False | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| class | Classify the instruction according to its class | label_selection | True | N/A | ['closed_qa', 'classification', 'open_qa', 'information_extraction', 'brainstorming', 'general_qa', 'summarization', 'creative_writing'] |
| response | Response | text | True | N/A | N/A |
The **suggestions** are human- or machine-generated recommendations for each question, intended to assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, holding the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column names carry the "-suggestion" and "-suggestion-metadata" suffixes.
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give annotators extra context, or to capture details about the record itself, such as the author, the date, the source, or a link to the original document. The metadata is always optional and can be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
**✨ NEW** The **vectors** are columns that contain floating-point vectors, constrained to the dimensions pre-defined in the **vectors_settings** when configuring the vectors within the dataset itself; the vectors are always 1-dimensional. They are optional and identified by the pre-defined vector name in the dataset configuration file in `argilla.yaml`.
| Vector Name | Title | Dimensions |
|-------------|-------|------------|
| prompt-similarity | prompt-similarity | [1, 768] |
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| n_characters | No. of characters | integer | None - None | True |
| passed_quality_check | Passed quality check? | terms | - | True |
| flesch_reading_ease | Reading ease | float | None - None | True |
| entropy | Entropy | float | None - None | True |
The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"prompt": "Can brain cells move? By movement I mean long distance migration (preferably within the brain only)."
},
"metadata": {
"entropy": 0.4352176404374839,
"flesch_reading_ease": 82.39000000000001,
"n_characters": 85,
"passed_quality_check": "True"
},
"responses": [],
"suggestions": [
{
"agent": "setfit",
"question_name": "class",
"score": 0.2823514938354492,
"type": null,
"value": "brainstorming"
},
{
"agent": null,
"question_name": "response",
"score": null,
"type": null,
"value": "The question is relatively broad and one should take into account that the brain not only consists of neurons, but also glial cells (supportive cells) and pre-mitotic neuronal stem cells. Furthermore, as critical fellow-scientists have indicated, developmental stage is very important, as the developing embryonic brain is very different from the adult brain.\nHowever, after sifting through various publications, the answer to the question is actually remarkably simple: Yes, brain cells migrate.\nIn the adult brain glial cells migrate in the brain (Kl\u00e4mbt, 2009). Glial cells are involved in a myriad of functions, but a notable example of migrating glial cells are the oligodendrocytes that migrate relative long distances to find their target axons onto which they wrap themselves to form the insulating myelin sheath (Tsai and Miller, 2002).\nNeuronal stem cells migrate over long distances in response to injury (Imitola et al., 2004) and they migrate from specific stem-cell locations (e.g., hippocampus and subventricular zone) to other regions (Clarke, 2003).\nPost-mitotic, but non-differentiated neurons have been shown to migrate in the adult brain in fish (Scott et al., 2012), and in mammals and non-human primates as well (Sawada et al., 2011).\nNot surprisingly, glial cells, stem cells and neurons also migrate during embryonic development. Most notably, post-mitotic neurons destined to fulfill peripheral functions have to migrate over relatively long distances from the neural crest to their target locations (Neuroscience, 2nd ed, Neuronal Migration)."
}
],
"vectors": {
"prompt-similarity": [
-0.013013245537877083,
0.01881960965692997,
0.018717532977461815,
-0.014981311745941639,
0.03672853484749794,
-0.015297300182282925,
0.031154541298747063,
0.009528533555567265,
-0.031607501208782196,
-0.039829764515161514,
-0.019534926861524582,
-0.019294919446110725,
-0.047140125185251236,
0.03812485188245773,
-0.018894944339990616,
0.039123568683862686,
0.03436238318681717,
-0.007996739819645882,
0.013651853427290916,
-0.016834214329719543,
-0.02929615043103695,
0.002512674080207944,
0.008257705718278885,
0.03932825103402138,
0.031019780784845352,
-0.028575727716088295,
-0.022710563614964485,
0.0132012739777565,
-0.048433348536491394,
-0.02651829645037651,
0.01601981930434704,
-0.006484998855739832,
-0.07150214165449142,
-0.010764969512820244,
0.00407565338537097,
-0.007564086001366377,
-0.015640858560800552,
-0.012789258733391762,
0.00717244204133749,
-0.051655009388923645,
-0.030335327610373497,
0.007193537428975105,
-0.020686019212007523,
0.016904372721910477,
-0.057382386177778244,
0.020192697644233704,
-0.0621950700879097,
0.0034242896363139153,
-0.04375811666250229,
-0.012516515329480171,
-0.04787379130721092,
0.05757446959614754,
0.045590516179800034,
-0.019442711025476456,
0.02614322304725647,
0.022066324949264526,
-0.017174094915390015,
-0.03904383257031441,
-0.014966102316975594,
-0.04261021316051483,
0.06123539060354233,
0.01483749970793724,
-0.009737796150147915,
-0.021765291690826416,
-0.001423536567017436,
-0.04854138195514679,
0.03245295211672783,
0.02051699534058571,
-0.05414895340800285,
-0.03563692420721054,
-0.0506395623087883,
-0.06071240082383156,
-0.017511913552880287,
0.006278000771999359,
0.009547360241413116,
-0.05603624880313873,
-0.0038324843626469374,
0.012652688659727573,
0.06399084627628326,
0.01680467091500759,
0.030588308349251747,
0.023556867614388466,
-0.04122614115476608,
0.06281794607639313,
0.002343484666198492,
-0.03668874129652977,
-0.01711929589509964,
-4.190538675175048e-06,
-0.05742541700601578,
0.04727115109562874,
-0.04583971947431564,
-0.01956474594771862,
0.02877974882721901,
0.05513108894228935,
0.015185099095106125,
-0.006118557415902615,
0.0272984616458416,
-0.02677239291369915,
-0.009623365476727486,
0.05534995347261429,
-0.02598058618605137,
-0.04715755954384804,
-0.022215673699975014,
-0.009219354949891567,
-0.05435849353671074,
-0.03680011257529259,
-0.008128424175083637,
-0.029657825827598572,
0.022026637569069862,
-0.012166539207100868,
-0.025011586025357246,
-0.02193683199584484,
-0.00693196477368474,
0.006336281541734934,
-0.043086495250463486,
0.05915242061018944,
0.02211538702249527,
-0.023119445890188217,
0.007697188761085272,
-0.0552712008357048,
0.03299417346715927,
0.05157257989048958,
-0.03600669652223587,
0.044204846024513245,
0.025432858616113663,
0.007447212003171444,
0.006279517896473408,
0.03376108407974243,
-0.040294621139764786,
-0.058066226541996,
0.012761987745761871,
0.04904710873961449,
-0.012213962152600288,
-0.013692168518900871,
0.027355555444955826,
-0.0023957074154168367,
0.028188826516270638,
-0.027611739933490753,
0.029400011524558067,
0.0013150176964700222,
0.0362129732966423,
0.012163455598056316,
0.03474310413002968,
-0.007054436486214399,
0.02536170184612274,
-0.07868500053882599,
-0.04395574703812599,
-0.04243417829275131,
0.002584034577012062,
-0.0005564193706959486,
-0.019545502960681915,
0.05276765301823616,
0.0394630953669548,
-0.057229649275541306,
-0.01710808463394642,
0.05301479622721672,
-0.03010011836886406,
0.03373352438211441,
-0.04287588968873024,
-0.006589761935174465,
0.02951083518564701,
-0.019792240113019943,
0.012560124509036541,
-0.022978615015745163,
-0.01804402843117714,
-0.01765276864171028,
0.050604935735464096,
-0.031133880838751793,
-0.03520930930972099,
0.06622219830751419,
-0.04686705023050308,
0.01252678595483303,
0.06677322834730148,
0.0012780202087014914,
-0.007755340542644262,
-0.002916350495070219,
0.062082815915346146,
-0.003067526500672102,
0.006080616265535355,
-0.036430295556783676,
-0.06199180707335472,
0.02642948180437088,
-0.00425749970600009,
0.025306515395641327,
-0.0014685469213873148,
-0.028660226613283157,
0.052989762276411057,
-0.01557255256921053,
0.009855816140770912,
-0.0121422428637743,
-0.03747929632663727,
-0.08137062191963196,
0.007190469186753035,
0.011331912130117416,
0.06765188276767731,
-0.022611519321799278,
-0.02787146158516407,
0.05748944729566574,
0.00487024150788784,
0.039478056132793427,
0.01931411400437355,
0.013803835026919842,
0.04888024553656578,
-0.037333935499191284,
-0.027693377807736397,
0.059805672615766525,
0.03614082559943199,
0.005785312503576279,
0.013619908131659031,
0.05161786451935768,
-0.00884980708360672,
0.010016173124313354,
0.042678751051425934,
-0.027733702212572098,
0.027968743816018105,
-0.037427231669425964,
-0.002935838419944048,
-0.01202351227402687,
0.006725606042891741,
-0.07508431375026703,
-0.0060306512750685215,
0.008263292722404003,
-0.025336965918540955,
0.04014277085661888,
0.008093785494565964,
0.08171582221984863,
0.07616759836673737,
-0.0771564468741417,
0.022446291521191597,
0.008821032010018826,
0.013829128816723824,
0.02364560402929783,
-0.0022572220768779516,
0.03746487572789192,
-0.005879886448383331,
0.008362085558474064,
-0.013305987231433392,
-0.06773458421230316,
0.047247979789972305,
-0.054940834641456604,
0.006651178002357483,
0.04406357184052467,
0.0032514971680939198,
0.06607890874147415,
-0.023339349776506424,
-0.015506909228861332,
0.056580446660518646,
-0.013175010681152344,
-0.009680991992354393,
0.003048372222110629,
-0.02173807844519615,
-0.03575072064995766,
0.0034152292646467686,
0.0023930943571031094,
0.032616451382637024,
-0.08494752645492554,
-0.04464119300246239,
-0.008594084531068802,
0.07189679890871048,
0.039310749620199203,
-0.0032280997838824987,
0.0571722686290741,
0.031821854412555695,
-0.018074551597237587,
-0.05658836290240288,
-0.10419323295354843,
-0.038979772478342056,
-0.004710170906037092,
0.06021471694111824,
0.02279377542436123,
0.06624987721443176,
-0.0021200855262577534,
0.02761155366897583,
9.02639476407785e-06,
-0.021869199350476265,
0.024204667657613754,
0.06580100208520889,
0.002844455884769559,
-0.01991298981010914,
-0.0200088731944561,
0.02950236387550831,
0.06952787935733795,
-0.017109204083681107,
-0.029190661385655403,
0.022067055106163025,
-0.05215190351009369,
-0.002498551970347762,
-0.003893302520737052,
-0.004048035945743322,
0.044902484863996506,
0.01182111818343401,
0.014091513119637966,
0.007183252368122339,
0.035346873104572296,
-0.005363106727600098,
0.05331592261791229,
0.04623641446232796,
-0.01476075779646635,
-0.010740607045590878,
-0.019701674580574036,
0.00595542136579752,
0.03692961856722832,
0.012378417886793613,
-0.022257760167121887,
0.003160405671223998,
-1.8131876231564092e-06,
-0.017647042870521545,
-0.03700786456465721,
-0.24109095335006714,
0.006522865034639835,
-0.0008469457970932126,
-0.03644183278083801,
0.017320087179541588,
0.01328502781689167,
0.003192389849573374,
-0.028336772695183754,
-0.03504892438650131,
-0.0014239358715713024,
-0.03514610975980759,
0.022008158266544342,
-0.011342125944793224,
0.05192045867443085,
0.03085877001285553,
-0.025241609662771225,
0.0237770676612854,
-0.05109399929642677,
-0.010781534016132355,
0.0020606154575943947,
-0.04335577413439751,
-0.028212837874889374,
0.0002747350081335753,
0.046457286924123764,
0.010325346142053604,
0.08826259523630142,
-0.043199118226766586,
-0.010338421911001205,
-0.06027568131685257,
0.009151126258075237,
-0.01782579906284809,
-0.027093859389424324,
0.007199855055660009,
-0.019019782543182373,
0.022030359134078026,
-0.010693224146962166,
0.0009507028153166175,
-0.026087958365678787,
0.024485325440764427,
-0.04338093847036362,
-0.04680050536990166,
-0.03561573103070259,
-0.02055582031607628,
0.0038633362855762243,
0.06559355556964874,
-0.023061249405145645,
-0.017895730212330818,
0.0038954829797148705,
0.008263446390628815,
0.04940579831600189,
-0.008470145985484123,
-0.0014497878728434443,
-0.0061887046322226524,
0.03428115323185921,
-0.0007602313999086618,
-0.009981812909245491,
0.027376258745789528,
0.026810050010681152,
-0.03568948805332184,
-0.0058975000865757465,
0.02460271678864956,
-0.01275318767875433,
-0.03641323372721672,
-0.044666923582553864,
0.029698815196752548,
-0.03262021392583847,
-0.02356722205877304,
-0.04117002710700035,
0.0848817452788353,
-0.004286558832973242,
-0.018582580611109734,
0.013618958182632923,
-0.03509534150362015,
-0.06519659608602524,
0.028257008641958237,
0.021286210045218468,
-0.06835642457008362,
-0.054849766194820404,
-0.01941634714603424,
0.035323113203048706,
-0.025973310694098473,
0.002146123442798853,
0.026771889999508858,
0.05470979958772659,
-0.03781023249030113,
-0.04531051591038704,
0.012180115096271038,
0.0009777187369763851,
-0.0416688397526741,
-0.013594291172921658,
0.09633821249008179,
0.00042126362677663565,
0.02082621492445469,
-0.011436634697020054,
0.052587978541851044,
0.04485282301902771,
-0.011207791976630688,
-0.028182996436953545,
0.028562700375914574,
-0.0452943854033947,
0.06573814153671265,
-0.04766593873500824,
0.029138406738638878,
-0.014932483434677124,
0.012515360489487648,
-0.008935957215726376,
-0.05353805422782898,
0.026841312646865845,
0.03796624764800072,
0.012656201608479023,
0.03330421447753906,
0.011739440262317657,
0.030942635610699654,
-0.04102332144975662,
0.015347322449088097,
-0.05560077726840973,
0.008390153758227825,
0.07054135203361511,
0.028721380978822708,
0.0028039051685482264,
-0.020784109830856323,
0.009438532404601574,
-0.0605308897793293,
-0.01866653747856617,
-0.06967351585626602,
0.03392767161130905,
0.006826978642493486,
0.025683172047138214,
-0.0034906533546745777,
0.029044777154922485,
-0.015162697061896324,
0.0038685882464051247,
0.0499376617372036,
0.02318284660577774,
0.010678326711058617,
-0.014715512283146381,
-0.042784977704286575,
-0.002209000289440155,
-0.014008396305143833,
-0.028120383620262146,
0.0026574472431093454,
0.030087493360042572,
0.03461616113781929,
0.03625616058707237,
-0.011008461937308311,
0.043217092752456665,
-0.045464660972356796,
0.022507434710860252,
-0.02420778200030327,
-0.002824041061103344,
0.028755616396665573,
-0.04187369719147682,
-0.015139559283852577,
-0.053725019097328186,
-0.025201475247740746,
-0.012609651312232018,
0.04252387210726738,
0.02392260916531086,
0.016753822565078735,
-0.03215314820408821,
-0.01936139352619648,
-0.046136122196912766,
-0.005073823034763336,
0.008640735410153866,
-0.009679833427071571,
0.07807573676109314,
-0.012567133642733097,
-0.031146127730607986,
-0.026593416929244995,
0.026098934933543205,
0.024264968931674957,
-0.0075249760411679745,
-0.06842546164989471,
0.03510553762316704,
-0.006868013646453619,
0.01947402022778988,
-0.029724987223744392,
-0.03539305925369263,
0.028799021616578102,
0.030593188479542732,
0.03373757004737854,
-0.028323186561465263,
-0.005245779640972614,
0.0025080086197704077,
0.06109020859003067,
-0.0414900928735733,
0.05396903306245804,
-0.047728512436151505,
-0.017351394519209862,
0.02362070232629776,
-0.007311966270208359,
0.028682058677077293,
-0.014722640626132488,
-0.007481182459741831,
-0.035072099417448044,
-0.021136067807674408,
0.019015248864889145,
0.008854486048221588,
-0.0005861225072294474,
-0.012599045410752296,
0.0175931416451931,
-0.04479547217488289,
-0.008386379107832909,
0.03618542104959488,
0.01628889888525009,
-0.08031677454710007,
0.039770182222127914,
0.041299525648355484,
-0.008586069568991661,
0.038849104195833206,
-0.019013259559869766,
0.015810709446668625,
-0.026148298755288124,
0.03409867733716965,
0.012881561182439327,
0.0007065649842843413,
-0.010571092367172241,
-0.04538531228899956,
-0.005888957995921373,
0.010284706018865108,
-0.00910396408289671,
0.0024551369715481997,
-0.028111808001995087,
-0.056267447769641876,
-0.03570198640227318,
0.0007470435812138021,
-0.03200932964682579,
3.1971394491847605e-05,
0.07073836773633957,
-0.025731729343533516,
0.016087668016552925,
-0.019969554618000984,
-0.02380352094769478,
0.07783369719982147,
-0.0077037508599460125,
-0.026075275614857674,
0.03502178564667702,
-0.005804023705422878,
-0.015163084492087364,
0.06934002041816711,
0.0368470698595047,
0.017380570992827415,
-0.03955657035112381,
-0.028987567871809006,
0.027637561783194542,
0.04501322656869888,
-0.026961492374539375,
0.00020521112310234457,
-0.0452781617641449,
0.049811046570539474,
0.028363030403852463,
0.004181100055575371,
0.0021030332427471876,
-0.015064270235598087,
0.05535869300365448,
-0.029472526162862778,
-0.04478950425982475,
0.0027753578033298254,
-0.004514075815677643,
-0.023607026785612106,
0.023749861866235733,
0.01957106776535511,
-0.024119185283780098,
-0.01694166287779808,
0.04224187880754471,
0.017501620575785637,
-0.004305294249206781,
0.018400326371192932,
0.044329140335321426,
-0.06549150496721268,
0.008912339806556702,
-0.03948299214243889,
-0.03004170022904873,
0.0032710819505155087,
-0.019911974668502808,
0.02723447047173977,
-0.022703979164361954,
0.034845732152462006,
0.05078149959445,
-0.06074056029319763,
-0.01075307372957468,
0.07076920568943024,
0.0021933179814368486,
-0.03962651267647743,
0.024789808318018913,
-0.07408491522073746,
0.0247175469994545,
-0.03231014311313629,
-0.02483881451189518,
0.002730102278292179,
0.037088677287101746,
-0.0033236793242394924,
0.005284950602799654,
0.014846455305814743,
0.03255154564976692,
0.02706083469092846,
0.049154844135046005,
0.06594257056713104,
-0.02415977232158184,
0.026963576674461365,
-0.07380963861942291,
0.06781016290187836,
0.018511293455958366,
-0.015869174152612686,
-0.038478851318359375,
0.0335836261510849,
0.02612367272377014,
-0.06550119817256927,
0.01825067587196827,
0.013035713694989681,
-0.008435440249741077,
-0.08638200908899307,
0.05963002145290375,
0.024324510246515274,
-0.02895611710846424,
-0.04167400300502777,
0.04319422319531441,
-0.05413385480642319,
0.015215273015201092,
0.03725837171077728,
-0.004908927250653505,
-0.002934563672170043,
0.041528936475515366,
0.012155082076787949,
0.04147651046514511,
0.05855671316385269,
-0.0299361739307642,
0.02512580342590809,
0.020929407328367233,
0.06349261105060577,
0.053939227014780045,
0.05713503807783127,
-0.0038927458226680756,
0.07881465554237366,
-0.012467852793633938,
-0.034171897917985916,
0.020261041820049286,
-0.0021278418134897947,
-0.002377619966864586,
0.004330282565206289,
0.012825283221900463,
0.04088682681322098,
0.008562165312469006,
0.0359053835272789,
-0.053358469158411026,
0.011921711266040802,
0.020781131461262703,
0.036604978144168854,
0.03237057104706764,
0.027678076177835464,
0.025395873934030533,
0.024215875193476677,
-0.02316826581954956,
-0.049021363258361816,
-0.005335877649486065,
-0.04324529692530632,
0.033709343522787094,
0.009520786814391613,
-0.06291788816452026,
0.016032546758651733,
-0.017273124307394028,
0.03564963862299919,
0.06645374745130539,
0.0019759878050535917,
0.04844486713409424,
-0.033923204988241196,
0.03365401178598404,
-0.03546270355582237,
0.017526622861623764,
0.05221246927976608,
0.027283355593681335,
0.00947093591094017,
-0.027012217789888382,
-0.001877183560281992,
0.016856137663125992,
0.013093618676066399,
0.025977004319429398,
-0.06342248618602753,
-0.002382427453994751,
0.02860536240041256,
0.05974981561303139,
-0.03283765912055969,
-0.04812508821487427,
-0.05995623767375946,
-0.037662360817193985,
-0.035185620188713074,
-0.01508689671754837,
0.035811878740787506,
-0.052011068910360336,
-0.059904687106609344,
-0.026118896901607513,
-0.010637863539159298,
-0.011021668091416359,
-0.03290007635951042,
-0.030089853331446648,
-0.03142952546477318,
0.04359989985823631,
0.040401678532361984,
0.02362644672393799,
0.013705096207559109,
0.08372753113508224,
-0.029495922848582268,
-0.06889309734106064,
0.00678789708763361,
-0.007068346720188856,
0.07379143685102463,
-0.02387312427163124,
-0.0024106407072395086,
-0.08333039283752441,
0.018529068678617477,
0.03415510058403015,
0.022234655916690826,
-0.10251957923173904,
0.036007318645715714,
-0.00660698814317584,
0.00572143355384469,
0.026509005576372147,
-0.011688550002872944,
-0.008342253975570202,
-0.04845166578888893,
-0.030434146523475647,
0.0014085661387071013,
-0.03824504837393761,
0.06172807887196541,
-0.03449011966586113,
0.07329946011304855,
0.029795274138450623,
0.026717940345406532,
-0.045109957456588745,
0.024327795952558517,
-0.008753367699682713,
0.01352944690734148,
-0.023602385073900223,
-0.036179229617118835,
-0.008612464182078838,
-0.12454637885093689,
-0.016345543786883354,
-0.012179647572338581,
-0.02734498679637909,
-0.05160606652498245,
0.019233766943216324,
-0.027092240750789642,
0.016395756974816322,
-0.012205400504171848,
-0.014156125485897064,
-0.04153557866811752,
-0.020725106820464134,
-0.03977225720882416,
-0.05970294773578644,
-0.0023274689447134733,
-0.0164078027009964,
-0.021304765716195107,
0.053715966641902924,
-0.017753545194864273,
0.010519351810216904,
0.004593766760081053,
-0.03116416372358799,
-0.027580147609114647,
0.0033015876542776823,
0.033720631152391434
]
}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"class": [],
"class-suggestion": "brainstorming",
"class-suggestion-metadata": {
"agent": "setfit",
"score": 0.2823514938354492,
"type": null
},
"context": null,
"external_id": null,
"metadata": "{\"n_characters\": 85, \"passed_quality_check\": \"True\", \"flesch_reading_ease\": 82.39000000000001, \"entropy\": 0.4352176404374839}",
"prompt": "Can brain cells move? By movement I mean long distance migration (preferably within the brain only).",
"response": [],
"response-suggestion": "The question is relatively broad and one should take into account that the brain not only consists of neurons, but also glial cells (supportive cells) and pre-mitotic neuronal stem cells. Furthermore, as critical fellow-scientists have indicated, developmental stage is very important, as the developing embryonic brain is very different from the adult brain.\nHowever, after sifting through various publications, the answer to the question is actually remarkably simple: Yes, brain cells migrate.\nIn the adult brain glial cells migrate in the brain (Kl\u00e4mbt, 2009). Glial cells are involved in a myriad of functions, but a notable example of migrating glial cells are the oligodendrocytes that migrate relative long distances to find their target axons onto which they wrap themselves to form the insulating myelin sheath (Tsai and Miller, 2002).\nNeuronal stem cells migrate over long distances in response to injury (Imitola et al., 2004) and they migrate from specific stem-cell locations (e.g., hippocampus and subventricular zone) to other regions (Clarke, 2003).\nPost-mitotic, but non-differentiated neurons have been shown to migrate in the adult brain in fish (Scott et al., 2012), and in mammals and non-human primates as well (Sawada et al., 2011).\nNot surprisingly, glial cells, stem cells and neurons also migrate during embryonic development. Most notably, post-mitotic neurons destined to fulfill peripheral functions have to migrate over relatively long distances from the neural crest to their target locations (Neuroscience, 2nd ed, Neuronal Migration).",
"response-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"vectors": {
"prompt-similarity": [
-0.013013245537877083,
0.01881960965692997,
0.018717532977461815,
-0.014981311745941639,
0.03672853484749794,
-0.015297300182282925,
0.031154541298747063,
0.009528533555567265,
-0.031607501208782196,
-0.039829764515161514,
-0.019534926861524582,
-0.019294919446110725,
-0.047140125185251236,
0.03812485188245773,
-0.018894944339990616,
0.039123568683862686,
0.03436238318681717,
-0.007996739819645882,
0.013651853427290916,
-0.016834214329719543,
-0.02929615043103695,
0.002512674080207944,
0.008257705718278885,
0.03932825103402138,
0.031019780784845352,
-0.028575727716088295,
-0.022710563614964485,
0.0132012739777565,
-0.048433348536491394,
-0.02651829645037651,
0.01601981930434704,
-0.006484998855739832,
-0.07150214165449142,
-0.010764969512820244,
0.00407565338537097,
-0.007564086001366377,
-0.015640858560800552,
-0.012789258733391762,
0.00717244204133749,
-0.051655009388923645,
-0.030335327610373497,
0.007193537428975105,
-0.020686019212007523,
0.016904372721910477,
-0.057382386177778244,
0.020192697644233704,
-0.0621950700879097,
0.0034242896363139153,
-0.04375811666250229,
-0.012516515329480171,
-0.04787379130721092,
0.05757446959614754,
0.045590516179800034,
-0.019442711025476456,
0.02614322304725647,
0.022066324949264526,
-0.017174094915390015,
-0.03904383257031441,
-0.014966102316975594,
-0.04261021316051483,
0.06123539060354233,
0.01483749970793724,
-0.009737796150147915,
-0.021765291690826416,
-0.001423536567017436,
-0.04854138195514679,
0.03245295211672783,
0.02051699534058571,
-0.05414895340800285,
-0.03563692420721054,
-0.0506395623087883,
-0.06071240082383156,
-0.017511913552880287,
0.006278000771999359,
0.009547360241413116,
-0.05603624880313873,
-0.0038324843626469374,
0.012652688659727573,
0.06399084627628326,
0.01680467091500759,
0.030588308349251747,
0.023556867614388466,
-0.04122614115476608,
0.06281794607639313,
0.002343484666198492,
-0.03668874129652977,
-0.01711929589509964,
-4.190538675175048e-06,
-0.05742541700601578,
0.04727115109562874,
-0.04583971947431564,
-0.01956474594771862,
0.02877974882721901,
0.05513108894228935,
0.015185099095106125,
-0.006118557415902615,
0.0272984616458416,
-0.02677239291369915,
-0.009623365476727486,
0.05534995347261429,
-0.02598058618605137,
-0.04715755954384804,
-0.022215673699975014,
-0.009219354949891567,
-0.05435849353671074,
-0.03680011257529259,
-0.008128424175083637,
-0.029657825827598572,
0.022026637569069862,
-0.012166539207100868,
-0.025011586025357246,
-0.02193683199584484,
-0.00693196477368474,
0.006336281541734934,
-0.043086495250463486,
0.05915242061018944,
0.02211538702249527,
-0.023119445890188217,
0.007697188761085272,
-0.0552712008357048,
0.03299417346715927,
0.05157257989048958,
-0.03600669652223587,
0.044204846024513245,
0.025432858616113663,
0.007447212003171444,
0.006279517896473408,
0.03376108407974243,
-0.040294621139764786,
-0.058066226541996,
0.012761987745761871,
0.04904710873961449,
-0.012213962152600288,
-0.013692168518900871,
0.027355555444955826,
-0.0023957074154168367,
0.028188826516270638,
-0.027611739933490753,
0.029400011524558067,
0.0013150176964700222,
0.0362129732966423,
0.012163455598056316,
0.03474310413002968,
-0.007054436486214399,
0.02536170184612274,
-0.07868500053882599,
-0.04395574703812599,
-0.04243417829275131,
0.002584034577012062,
-0.0005564193706959486,
-0.019545502960681915,
0.05276765301823616,
0.0394630953669548,
-0.057229649275541306,
-0.01710808463394642,
0.05301479622721672,
-0.03010011836886406,
0.03373352438211441,
-0.04287588968873024,
-0.006589761935174465,
0.02951083518564701,
-0.019792240113019943,
0.012560124509036541,
-0.022978615015745163,
-0.01804402843117714,
-0.01765276864171028,
0.050604935735464096,
-0.031133880838751793,
-0.03520930930972099,
0.06622219830751419,
-0.04686705023050308,
0.01252678595483303,
0.06677322834730148,
0.0012780202087014914,
-0.007755340542644262,
-0.002916350495070219,
0.062082815915346146,
-0.003067526500672102,
0.006080616265535355,
-0.036430295556783676,
-0.06199180707335472,
0.02642948180437088,
-0.00425749970600009,
0.025306515395641327,
-0.0014685469213873148,
-0.028660226613283157,
0.052989762276411057,
-0.01557255256921053,
0.009855816140770912,
-0.0121422428637743,
-0.03747929632663727,
-0.08137062191963196,
0.007190469186753035,
0.011331912130117416,
0.06765188276767731,
-0.022611519321799278,
-0.02787146158516407,
0.05748944729566574,
0.00487024150788784,
0.039478056132793427,
0.01931411400437355,
0.013803835026919842,
0.04888024553656578,
-0.037333935499191284,
-0.027693377807736397,
0.059805672615766525,
0.03614082559943199,
0.005785312503576279,
0.013619908131659031,
0.05161786451935768,
-0.00884980708360672,
0.010016173124313354,
0.042678751051425934,
-0.027733702212572098,
0.027968743816018105,
-0.037427231669425964,
-0.002935838419944048,
-0.01202351227402687,
0.006725606042891741,
-0.07508431375026703,
-0.0060306512750685215,
0.008263292722404003,
-0.025336965918540955,
0.04014277085661888,
0.008093785494565964,
0.08171582221984863,
0.07616759836673737,
-0.0771564468741417,
0.022446291521191597,
0.008821032010018826,
0.013829128816723824,
0.02364560402929783,
-0.0022572220768779516,
0.03746487572789192,
-0.005879886448383331,
0.008362085558474064,
-0.013305987231433392,
-0.06773458421230316,
0.047247979789972305,
-0.054940834641456604,
0.006651178002357483,
0.04406357184052467,
0.0032514971680939198,
0.06607890874147415,
-0.023339349776506424,
-0.015506909228861332,
0.056580446660518646,
-0.013175010681152344,
-0.009680991992354393,
0.003048372222110629,
-0.02173807844519615,
-0.03575072064995766,
0.0034152292646467686,
0.0023930943571031094,
0.032616451382637024,
-0.08494752645492554,
-0.04464119300246239,
-0.008594084531068802,
0.07189679890871048,
0.039310749620199203,
-0.0032280997838824987,
0.0571722686290741,
0.031821854412555695,
-0.018074551597237587,
-0.05658836290240288,
-0.10419323295354843,
-0.038979772478342056,
-0.004710170906037092,
0.06021471694111824,
0.02279377542436123,
0.06624987721443176,
-0.0021200855262577534,
0.02761155366897583,
9.02639476407785e-06,
-0.021869199350476265,
0.024204667657613754,
0.06580100208520889,
0.002844455884769559,
-0.01991298981010914,
-0.0200088731944561,
0.02950236387550831,
0.06952787935733795,
-0.017109204083681107,
-0.029190661385655403,
0.022067055106163025,
-0.05215190351009369,
-0.002498551970347762,
-0.003893302520737052,
-0.004048035945743322,
0.044902484863996506,
0.01182111818343401,
0.014091513119637966,
0.007183252368122339,
0.035346873104572296,
-0.005363106727600098,
0.05331592261791229,
0.04623641446232796,
-0.01476075779646635,
-0.010740607045590878,
-0.019701674580574036,
0.00595542136579752,
0.03692961856722832,
0.012378417886793613,
-0.022257760167121887,
0.003160405671223998,
-1.8131876231564092e-06,
-0.017647042870521545,
-0.03700786456465721,
-0.24109095335006714,
0.006522865034639835,
-0.0008469457970932126,
-0.03644183278083801,
0.017320087179541588,
0.01328502781689167,
0.003192389849573374,
-0.028336772695183754,
-0.03504892438650131,
-0.0014239358715713024,
-0.03514610975980759,
0.022008158266544342,
-0.011342125944793224,
0.05192045867443085,
0.03085877001285553,
-0.025241609662771225,
0.0237770676612854,
-0.05109399929642677,
-0.010781534016132355,
0.0020606154575943947,
-0.04335577413439751,
-0.028212837874889374,
0.0002747350081335753,
0.046457286924123764,
0.010325346142053604,
0.08826259523630142,
-0.043199118226766586,
-0.010338421911001205,
-0.06027568131685257,
0.009151126258075237,
-0.01782579906284809,
-0.027093859389424324,
0.007199855055660009,
-0.019019782543182373,
0.022030359134078026,
-0.010693224146962166,
0.0009507028153166175,
-0.026087958365678787,
0.024485325440764427,
-0.04338093847036362,
-0.04680050536990166,
-0.03561573103070259,
-0.02055582031607628,
0.0038633362855762243,
0.06559355556964874,
-0.023061249405145645,
-0.017895730212330818,
0.0038954829797148705,
0.008263446390628815,
0.04940579831600189,
-0.008470145985484123,
-0.0014497878728434443,
-0.0061887046322226524,
0.03428115323185921,
-0.0007602313999086618,
-0.009981812909245491,
0.027376258745789528,
0.026810050010681152,
-0.03568948805332184,
-0.0058975000865757465,
0.02460271678864956,
-0.01275318767875433,
-0.03641323372721672,
-0.044666923582553864,
0.029698815196752548,
-0.03262021392583847,
-0.02356722205877304,
-0.04117002710700035,
0.0848817452788353,
-0.004286558832973242,
-0.018582580611109734,
0.013618958182632923,
-0.03509534150362015,
-0.06519659608602524,
0.028257008641958237,
0.021286210045218468,
-0.06835642457008362,
-0.054849766194820404,
-0.01941634714603424,
0.035323113203048706,
-0.025973310694098473,
0.002146123442798853,
0.026771889999508858,
0.05470979958772659,
-0.03781023249030113,
-0.04531051591038704,
0.012180115096271038,
0.0009777187369763851,
-0.0416688397526741,
-0.013594291172921658,
0.09633821249008179,
0.00042126362677663565,
0.02082621492445469,
-0.011436634697020054,
0.052587978541851044,
0.04485282301902771,
-0.011207791976630688,
-0.028182996436953545,
0.028562700375914574,
-0.0452943854033947,
0.06573814153671265,
-0.04766593873500824,
0.029138406738638878,
-0.014932483434677124,
0.012515360489487648,
-0.008935957215726376,
-0.05353805422782898,
0.026841312646865845,
0.03796624764800072,
0.012656201608479023,
0.03330421447753906,
0.011739440262317657,
0.030942635610699654,
-0.04102332144975662,
0.015347322449088097,
-0.05560077726840973,
0.008390153758227825,
0.07054135203361511,
0.028721380978822708,
0.0028039051685482264,
-0.020784109830856323,
0.009438532404601574,
-0.0605308897793293,
-0.01866653747856617,
-0.06967351585626602,
0.03392767161130905,
0.006826978642493486,
0.025683172047138214,
-0.0034906533546745777,
0.029044777154922485,
-0.015162697061896324,
0.0038685882464051247,
0.0499376617372036,
0.02318284660577774,
0.010678326711058617,
-0.014715512283146381,
-0.042784977704286575,
-0.002209000289440155,
-0.014008396305143833,
-0.028120383620262146,
0.0026574472431093454,
0.030087493360042572,
0.03461616113781929,
0.03625616058707237,
-0.011008461937308311,
0.043217092752456665,
-0.045464660972356796,
0.022507434710860252,
-0.02420778200030327,
-0.002824041061103344,
0.028755616396665573,
-0.04187369719147682,
-0.015139559283852577,
-0.053725019097328186,
-0.025201475247740746,
-0.012609651312232018,
0.04252387210726738,
0.02392260916531086,
0.016753822565078735,
-0.03215314820408821,
-0.01936139352619648,
-0.046136122196912766,
-0.005073823034763336,
0.008640735410153866,
-0.009679833427071571,
0.07807573676109314,
-0.012567133642733097,
-0.031146127730607986,
-0.026593416929244995,
0.026098934933543205,
0.024264968931674957,
-0.0075249760411679745,
-0.06842546164989471,
0.03510553762316704,
-0.006868013646453619,
0.01947402022778988,
-0.029724987223744392,
-0.03539305925369263,
0.028799021616578102,
0.030593188479542732,
0.03373757004737854,
-0.028323186561465263,
-0.005245779640972614,
0.0025080086197704077,
0.06109020859003067,
-0.0414900928735733,
0.05396903306245804,
-0.047728512436151505,
-0.017351394519209862,
0.02362070232629776,
-0.007311966270208359,
0.028682058677077293,
-0.014722640626132488,
-0.007481182459741831,
-0.035072099417448044,
-0.021136067807674408,
0.019015248864889145,
0.008854486048221588,
-0.0005861225072294474,
-0.012599045410752296,
0.0175931416451931,
-0.04479547217488289,
-0.008386379107832909,
0.03618542104959488,
0.01628889888525009,
-0.08031677454710007,
0.039770182222127914,
0.041299525648355484,
-0.008586069568991661,
0.038849104195833206,
-0.019013259559869766,
0.015810709446668625,
-0.026148298755288124,
0.03409867733716965,
0.012881561182439327,
0.0007065649842843413,
-0.010571092367172241,
-0.04538531228899956,
-0.005888957995921373,
0.010284706018865108,
-0.00910396408289671,
0.0024551369715481997,
-0.028111808001995087,
-0.056267447769641876,
-0.03570198640227318,
0.0007470435812138021,
-0.03200932964682579,
3.1971394491847605e-05,
0.07073836773633957,
-0.025731729343533516,
0.016087668016552925,
-0.019969554618000984,
-0.02380352094769478,
0.07783369719982147,
-0.0077037508599460125,
-0.026075275614857674,
0.03502178564667702,
-0.005804023705422878,
-0.015163084492087364,
0.06934002041816711,
0.0368470698595047,
0.017380570992827415,
-0.03955657035112381,
-0.028987567871809006,
0.027637561783194542,
0.04501322656869888,
-0.026961492374539375,
0.00020521112310234457,
-0.0452781617641449,
0.049811046570539474,
0.028363030403852463,
0.004181100055575371,
0.0021030332427471876,
-0.015064270235598087,
0.05535869300365448,
-0.029472526162862778,
-0.04478950425982475,
0.0027753578033298254,
-0.004514075815677643,
-0.023607026785612106,
0.023749861866235733,
0.01957106776535511,
-0.024119185283780098,
-0.01694166287779808,
0.04224187880754471,
0.017501620575785637,
-0.004305294249206781,
0.018400326371192932,
0.044329140335321426,
-0.06549150496721268,
0.008912339806556702,
-0.03948299214243889,
-0.03004170022904873,
0.0032710819505155087,
-0.019911974668502808,
0.02723447047173977,
-0.022703979164361954,
0.034845732152462006,
0.05078149959445,
-0.06074056029319763,
-0.01075307372957468,
0.07076920568943024,
0.0021933179814368486,
-0.03962651267647743,
0.024789808318018913,
-0.07408491522073746,
0.0247175469994545,
-0.03231014311313629,
-0.02483881451189518,
0.002730102278292179,
0.037088677287101746,
-0.0033236793242394924,
0.005284950602799654,
0.014846455305814743,
0.03255154564976692,
0.02706083469092846,
0.049154844135046005,
0.06594257056713104,
-0.02415977232158184,
0.026963576674461365,
-0.07380963861942291,
0.06781016290187836,
0.018511293455958366,
-0.015869174152612686,
-0.038478851318359375,
0.0335836261510849,
0.02612367272377014,
-0.06550119817256927,
0.01825067587196827,
0.013035713694989681,
-0.008435440249741077,
-0.08638200908899307,
0.05963002145290375,
0.024324510246515274,
-0.02895611710846424,
-0.04167400300502777,
0.04319422319531441,
-0.05413385480642319,
0.015215273015201092,
0.03725837171077728,
-0.004908927250653505,
-0.002934563672170043,
0.041528936475515366,
0.012155082076787949,
0.04147651046514511,
0.05855671316385269,
-0.0299361739307642,
0.02512580342590809,
0.020929407328367233,
0.06349261105060577,
0.053939227014780045,
0.05713503807783127,
-0.0038927458226680756,
0.07881465554237366,
-0.012467852793633938,
-0.034171897917985916,
0.020261041820049286,
-0.0021278418134897947,
-0.002377619966864586,
0.004330282565206289,
0.012825283221900463,
0.04088682681322098,
0.008562165312469006,
0.0359053835272789,
-0.053358469158411026,
0.011921711266040802,
0.020781131461262703,
0.036604978144168854,
0.03237057104706764,
0.027678076177835464,
0.025395873934030533,
0.024215875193476677,
-0.02316826581954956,
-0.049021363258361816,
-0.005335877649486065,
-0.04324529692530632,
0.033709343522787094,
0.009520786814391613,
-0.06291788816452026,
0.016032546758651733,
-0.017273124307394028,
0.03564963862299919,
0.06645374745130539,
0.0019759878050535917,
0.04844486713409424,
-0.033923204988241196,
0.03365401178598404,
-0.03546270355582237,
0.017526622861623764,
0.05221246927976608,
0.027283355593681335,
0.00947093591094017,
-0.027012217789888382,
-0.001877183560281992,
0.016856137663125992,
0.013093618676066399,
0.025977004319429398,
-0.06342248618602753,
-0.002382427453994751,
0.02860536240041256,
0.05974981561303139,
-0.03283765912055969,
-0.04812508821487427,
-0.05995623767375946,
-0.037662360817193985,
-0.035185620188713074,
-0.01508689671754837,
0.035811878740787506,
-0.052011068910360336,
-0.059904687106609344,
-0.026118896901607513,
-0.010637863539159298,
-0.011021668091416359,
-0.03290007635951042,
-0.030089853331446648,
-0.03142952546477318,
0.04359989985823631,
0.040401678532361984,
0.02362644672393799,
0.013705096207559109,
0.08372753113508224,
-0.029495922848582268,
-0.06889309734106064,
0.00678789708763361,
-0.007068346720188856,
0.07379143685102463,
-0.02387312427163124,
-0.0024106407072395086,
-0.08333039283752441,
0.018529068678617477,
0.03415510058403015,
0.022234655916690826,
-0.10251957923173904,
0.036007318645715714,
-0.00660698814317584,
0.00572143355384469,
0.026509005576372147,
-0.011688550002872944,
-0.008342253975570202,
-0.04845166578888893,
-0.030434146523475647,
0.0014085661387071013,
-0.03824504837393761,
0.06172807887196541,
-0.03449011966586113,
0.07329946011304855,
0.029795274138450623,
0.026717940345406532,
-0.045109957456588745,
0.024327795952558517,
-0.008753367699682713,
0.01352944690734148,
-0.023602385073900223,
-0.036179229617118835,
-0.008612464182078838,
-0.12454637885093689,
-0.016345543786883354,
-0.012179647572338581,
-0.02734498679637909,
-0.05160606652498245,
0.019233766943216324,
-0.027092240750789642,
0.016395756974816322,
-0.012205400504171848,
-0.014156125485897064,
-0.04153557866811752,
-0.020725106820464134,
-0.03977225720882416,
-0.05970294773578644,
-0.0023274689447134733,
-0.0164078027009964,
-0.021304765716195107,
0.053715966641902924,
-0.017753545194864273,
0.010519351810216904,
0.004593766760081053,
-0.03116416372358799,
-0.027580147609114647,
0.0033015876542776823,
0.033720631152391434
]
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the fields used to provide responses to the questions.
* **prompt** is of type `text`.
* (optional) **context** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **class** is of type `label_selection` with the following allowed values ['closed_qa', 'classification', 'open_qa', 'information_extraction', 'brainstorming', 'general_qa', 'summarization', 'creative_writing'].
* **response** is of type `text`.
* **Suggestions:** As of Argilla 1.13.0, suggestions can be included to assist annotators during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not only the suggested value itself but also, where applicable, the metadata attached to it.
* (optional) **class-suggestion** is of type `label_selection` with the following allowed values ['closed_qa', 'classification', 'open_qa', 'information_extraction', 'brainstorming', 'general_qa', 'summarization', 'creative_writing'].
* (optional) **response-suggestion** is of type `text`.
* **✨ NEW** **Vectors**: As of Argilla 1.19.0, vectors can be included to support similarity search, i.e. exploring records that are similar to each other through vector search powered by the configured search engine. Vectors are always optional and are not visible in the UI; they are uploaded and used internally, and each vector must match the dimensions previously defined in its settings (a minimal similarity sketch follows this list).
* (optional) **prompt-similarity** is of type `float32` and has a dimension of (1, `768`).
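
The sketch below shows one way the `prompt-similarity` vectors could be used for a local similarity search over the records loaded with the `datasets` library. It is only an illustration: the query record, the brute-force loop, and the choice of cosine similarity are assumptions made for the example, not something prescribed by the dataset.

```python
import numpy as np
from datasets import load_dataset

# Load the records in the HuggingFace `datasets` format (single "train" split).
ds = load_dataset("nataliaElv/textclass_descriptives_vectors", split="train")

def cosine_similarity(a, b):
    """Cosine similarity between two 768-dimensional vectors."""
    a, b = np.asarray(a, dtype=np.float32), np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each record keeps its vector under `vectors["prompt-similarity"]`; vectors are
# optional, so records without one are skipped.
query_vec = ds[0]["vectors"]["prompt-similarity"]

scores = []
for i, row in enumerate(ds):
    vec = row["vectors"]["prompt-similarity"] if row["vectors"] else None
    if vec is not None and i != 0:
        scores.append((i, cosine_similarity(query_vec, vec)))

# Five records whose prompts are most similar to record 0.
print(sorted(scores, key=lambda s: s[1], reverse=True)[:5])
```

With Argilla 1.19.0 or later connected to a running server, the same exploration can instead be performed server-side through the configured search engine, rather than with this local loop.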
Additionally, there are two more optional fields:
* **metadata:** An optional field that provides additional information about the dataset record. It can give annotators extra context or describe the record itself, for example a link to the original source of the record, or details such as the author, the date, or the source. The metadata is always optional and can be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml` (the snippet after this list shows how to parse it).
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
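
Because `metadata` is stored as a JSON-encoded string (as in the example record above) and the suggestions sit in their own `*-suggestion` columns, the minimal sketch below shows how these fields might be inspected after loading the records with the `datasets` library; the field names follow the example record shown earlier.

```python
import json
from datasets import load_dataset

ds = load_dataset("nataliaElv/textclass_descriptives_vectors", split="train")
record = ds[0]

# `metadata` is a JSON-encoded string; parse it to recover the per-record descriptives.
meta = json.loads(record["metadata"]) if record["metadata"] else {}
print(meta.get("n_characters"), meta.get("flesch_reading_ease"))

# Suggestions are optional and live next to the questions they belong to.
if record["class-suggestion"] is not None:
    print(record["class-suggestion"])                    # e.g. "brainstorming"
    print(record["class-suggestion-metadata"]["score"])  # score attached to the suggestion
if record["response-suggestion"] is not None:
    print(record["response-suggestion"][:120])           # beginning of the suggested response
```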
### Data Splits
The dataset contains a single split, which is `train`.
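
As a rough illustration, the split can be read either with the `datasets` library or pulled back into Argilla; the Argilla call is a sketch that assumes a 1.x client where `FeedbackDataset.from_huggingface` is available, as referenced elsewhere in this card.

```python
from datasets import load_dataset

# The only split is "train"; load it directly with the `datasets` library.
train = load_dataset("nataliaElv/textclass_descriptives_vectors", split="train")
print(len(train), train.column_names)

# Alternatively, rebuild the FeedbackDataset object with the Argilla client
# (install with `pip install argilla`; pushing it back for annotation also
# requires a running Argilla server).
import argilla as rg

feedback_ds = rg.FeedbackDataset.from_huggingface("nataliaElv/textclass_descriptives_vectors")
print(feedback_ds.fields)
print(feedback_ds.questions)
```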
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a supervised fine-tuning dataset that contains instructions. Please write the response to the instruction in the response field. Take the context into account when writing the response.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | nataliaElv/textclass_descriptives_vectors | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
]
| 2023-11-24T16:35:32+00:00 | {"size_categories": "1K<n<10K", "tags": ["rlfh", "argilla", "human-feedback"]} | 2023-11-24T16:35:34+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us
| Dataset Card for textclass\_descriptives\_vectors
=================================================
This dataset has been created with Argilla.
As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:
### Load with 'datasets'
To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:
### Supported Tasks and Leaderboards
This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.
There are no leaderboards associated with this dataset.
### Languages
Dataset Structure
-----------------
### Data in Argilla
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.
The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
NEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.
Vector Name: prompt-similarity, Title: prompt-similarity, Dimensions: [1, 768]
The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
While the same record in HuggingFace 'datasets' looks as follows:
### Data Fields
Among the dataset fields, we differentiate between the following:
* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
+ prompt is of type 'text'.
+ (optional) context is of type 'text'.
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.
+ class is of type 'label\_selection' with the following allowed values ['closed\_qa', 'classification', 'open\_qa', 'information\_extraction', 'brainstorming', 'general\_qa', 'summarization', 'creative\_writing'].
+ response is of type 'text'.
* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
+ (optional) class-suggestion is of type 'label\_selection' with the following allowed values ['closed\_qa', 'classification', 'open\_qa', 'information\_extraction', 'brainstorming', 'general\_qa', 'summarization', 'creative\_writing'].
+ (optional) response-suggestion is of type 'text'.
* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.
+ (optional) prompt-similarity is of type 'float32' and has a dimension of (1, '768').
Additionally, we also have two more fields that are optional and are the following:
* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
* external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is 'train'.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation guidelines
This is a supervised fine-tuning dataset that contains instructions. Please write the response to the instruction in the response field. Take the context into account when writing the response.
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: prompt-similarity, Title: prompt-similarity, Dimensions: [1, 768]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ prompt is of type 'text'.\n\t+ (optional) context is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ class is of type 'label\\_selection' with the following allowed values ['closed\\_qa', 'classification', 'open\\_qa', 'information\\_extraction', 'brainstorming', 'general\\_qa', 'summarization', 'creative\\_writing'].\n\t+ response is of type 'text'.\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) class-suggestion is of type 'label\\_selection' with the following allowed values ['closed\\_qa', 'classification', 'open\\_qa', 'information\\_extraction', 'brainstorming', 'general\\_qa', 'summarization', 'creative\\_writing'].\n\t+ (optional) response-suggestion is of type 'text'.\n* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.\n\n\n\t+ (optional) prompt-similarity is of type 'float32' and has a dimension of (1, '768').\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines\n\n\nThis is a supervised fine-tuning dataset that contains instructions. Please write the response to the instruction in the response field. Take the context into account when writing the response.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: prompt-similarity, Title: prompt-similarity, Dimensions: [1, 768]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ prompt is of type 'text'.\n\t+ (optional) context is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ class is of type 'label\\_selection' with the following allowed values ['closed\\_qa', 'classification', 'open\\_qa', 'information\\_extraction', 'brainstorming', 'general\\_qa', 'summarization', 'creative\\_writing'].\n\t+ response is of type 'text'.\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) class-suggestion is of type 'label\\_selection' with the following allowed values ['closed\\_qa', 'classification', 'open\\_qa', 'information\\_extraction', 'brainstorming', 'general\\_qa', 'summarization', 'creative\\_writing'].\n\t+ (optional) response-suggestion is of type 'text'.\n* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.\n\n\n\t+ (optional) prompt-similarity is of type 'float32' and has a dimension of (1, '768').\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines\n\n\nThis is a supervised fine-tuning dataset that contains instructions. Please write the response to the instruction in the response field. Take the context into account when writing the response.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
29,
162,
40,
53,
68,
11,
521,
40,
730,
27,
7,
4,
10,
10,
5,
45,
5,
9,
18,
7,
8,
14,
6,
6,
5
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------",
"passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: prompt-similarity, Title: prompt-similarity, Dimensions: [1, 768]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:"
]
|
20265763dc7ec85d86e5915941c654035eebe68a | # Dataset Card for "897fa162"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/897fa162 | [
"region:us"
]
| 2023-11-24T16:46:42+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 226, "num_examples": 10}], "download_size": 1465, "dataset_size": 226}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-24T16:46:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "897fa162"
More Information needed | [
"# Dataset Card for \"897fa162\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"897fa162\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"897fa162\"\n\nMore Information needed"
]
|
902ec1a19d3380f80a2393b56aaead7c8f59b390 | # Dataset Card for "emodb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | confit/emodb | [
"region:us"
]
| 2023-11-24T17:01:25+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "anxiety", "1": "disgust", "2": "happiness", "3": "boredom", "4": "neutral", "5": "sadness", "6": "anger"}}}}], "splits": [{"name": "train", "num_bytes": 6992, "num_examples": 304}, {"name": "test", "num_bytes": 5313, "num_examples": 231}], "download_size": 6510, "dataset_size": 12305}} | 2023-11-24T18:25:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "emodb"
More Information needed | [
"# Dataset Card for \"emodb\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"emodb\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"emodb\"\n\nMore Information needed"
]
|
55bda0e57bb200dc6c8becf9cb2efbf09eadcbd3 | # My Dataset readme | ronanarraig/sample_dataset | [
"region:us"
]
| 2023-11-24T17:28:09+00:00 | {"extra_gated_prompt": "Purchase access to this repo [HERE](https://buy.stripe.com/dR616I1mo99D6pabII)"} | 2023-11-27T19:19:16+00:00 | []
| []
| TAGS
#region-us
| # My Dataset readme | [
"# My Dataset readme"
]
| [
"TAGS\n#region-us \n",
"# My Dataset readme"
]
| [
6,
6
]
| [
"passage: TAGS\n#region-us \n# My Dataset readme"
]
|
c6e955cc0e47537565b5f02e0e70aad70ca2afff | # Dataset Card for "sdu_es_hf_top2vec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomashs/sdu_es_hf_top2vec | [
"region:us"
]
| 2023-11-24T18:15:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "acronym", "dtype": "string"}, {"name": "ID", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "topic_vector", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 8976591, "num_examples": 6267}, {"name": "dev", "num_bytes": 1168484, "num_examples": 818}], "download_size": 8479136, "dataset_size": 10145075}} | 2023-11-24T18:15:32+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sdu_es_hf_top2vec"
More Information needed | [
"# Dataset Card for \"sdu_es_hf_top2vec\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sdu_es_hf_top2vec\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sdu_es_hf_top2vec\"\n\nMore Information needed"
]
|
0dd14a47fd4e4126cf7c2c429785b683594e22b6 | ## A small dataset of Lego Sets with BLIP-2 Generated Captions
This can be used to fine-tune SDXL with data-efficient fine-tuning techniques like DreamBooth.
Example image 👇

| merve/lego_sets_latest | [
"task_categories:text-to-image",
"license:apache-2.0",
"region:us"
]
| 2023-11-24T18:24:52+00:00 | {"license": "apache-2.0", "task_categories": ["text-to-image"]} | 2024-01-06T12:36:27+00:00 | []
| []
| TAGS
#task_categories-text-to-image #license-apache-2.0 #region-us
| ## A small datasets of Lego Sets with BLIP-2 Generated Captions
This can be used to fine-tune SDXL with data-efficient fine-tuning techniques like DreamBooth.
Example image
!image/png
| [
"## A small datasets of Lego Sets with BLIP-2 Generated Captions\n\nThis can be used to fine-tune SDXL with data-efficient fine-tuning techniques like DreamBooth.\n\nExample image \n\n!image/png"
]
| [
"TAGS\n#task_categories-text-to-image #license-apache-2.0 #region-us \n",
"## A small datasets of Lego Sets with BLIP-2 Generated Captions\n\nThis can be used to fine-tune SDXL with data-efficient fine-tuning techniques like DreamBooth.\n\nExample image \n\n!image/png"
]
| [
26,
51
]
| [
"passage: TAGS\n#task_categories-text-to-image #license-apache-2.0 #region-us \n## A small datasets of Lego Sets with BLIP-2 Generated Captions\n\nThis can be used to fine-tune SDXL with data-efficient fine-tuning techniques like DreamBooth.\n\nExample image \n\n!image/png"
]
|
adf55b4905e7d2e6caea03f7425f262edfb967a5 |
# Dataset of serina (Blue Archive)
This is the dataset of serina (Blue Archive), containing 194 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 194 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 528 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 611 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 194 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 194 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 194 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 528 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 528 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 507 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 611 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 611 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
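Since each package above is a plain zip archive stored in this dataset repo, one way to fetch and unpack a package is sketched below with `huggingface_hub`; treat the exact repo layout as an assumption and swap in the filename of the package you need.
```python
from zipfile import ZipFile
from huggingface_hub import hf_hub_download
# Fetch one of the packaged archives listed in the table above.
zip_path = hf_hub_download(
repo_id="AppleHarem/serina_bluearchive",
filename="dataset-raw.zip", # any package name from the table
repo_type="dataset",
)
# Unpack the images and their tag/meta files locally.
with ZipFile(zip_path) as archive:
archive.extractall("serina_raw")
```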
| AppleHarem/serina_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-24T19:06:02+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-24T19:06:20+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of serina (Blue Archive)
================================
This is the dataset of serina (Blue Archive), containing 194 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
970f3a444b1a5414f4af967fb2994347192d48fb |
# 🌈Ko-various-dataset
- Includes [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3).
- In addition, the `COPA` and `Hellaswag` subsets of the [skt/kobest_v1](https://huggingface.co/datasets/skt/kobest_v1) dataset were turned into an instruction-output dataset, following the [adaptLLM](https://huggingface.co/AdaptLLM) paper, and added.
- If you build a model or dataset with this data, a brief mention of the source would be a great help to this research😭😭
# Preprocessing
```
# Make the special text lists, manually.
[\n\t-=+,#/\$?:^$.@*\"–∼①②③④⑤ⓐⓑⓒ㉮㉯㉰㈜®...(중략)...∂Σ∩∅φμσℝλΛ≥℃∉⊂θ±€Øπ√≠≤ε∈∫ωηαβ÷≈ס̊°²/]
```
- Using the regular expression above, foreign-language text other than Korean and English, emoji, and other special characters were removed (a minimal sketch of this filtering is shown below).
- Records whose output answer is too short were removed.
- Translation tasks were removed as much as possible (translation tasks rendered into Korean are wrong nearly 100% of the time).
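A minimal sketch of this kind of filtering; the character class and the length threshold here are illustrative, not the exact values used for this dataset.
```python
import re
# Illustrative subset of the character class above; the real pattern is far larger.
SPECIAL_CHARS = re.compile(r'[\t\-=+,#/$?:^@*"–∼①②③④⑤®°²]')
def clean_text(text: str, min_output_chars: int = 10) -> str | None:
"""Strip special characters and drop outputs that end up too short."""
cleaned = SPECIAL_CHARS.sub("", text).strip()
return cleaned if len(cleaned) >= min_output_chars else None
# Returns the cleaned string, or None when the remaining output is too short.
example = clean_text("③ 정답은 파이썬 코드입니다. #주석")
```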
| kyujinpy/Ko-various-dataset | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2023-11-24T19:33:35+00:00 | {"license": "cc-by-nc-sa-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57552968, "num_examples": 38174}], "download_size": 29047684, "dataset_size": 57552968}} | 2023-11-26T15:51:57+00:00 | []
| []
| TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
# Ko-various-dataset
- Includes kyujinpy/KOR-OpenOrca-Platypus-v3.
- In addition, the 'COPA' and 'Hellaswag' subsets of the skt/kobest_v1 dataset were turned into an instruction-output dataset, following the adaptLLM paper, and added.
- If you build a model or dataset with this data, a brief mention of the source would be a great help to this research
# Preprocessing
- Using the regular expression above, foreign-language text other than Korean and English, emoji, and other special characters were removed.
- Records whose output answer is too short were removed.
- Translation tasks were removed as much as possible (translation tasks rendered into Korean are wrong nearly 100% of the time).
| [
"# Ko-various-dataset\n\n- kyujinpy/KOR-OpenOrca-Platypus-v3 포함. \n- 추가적으로, skt/kobest_v1 데이터셋 중 'COPA'와 'Hellaswag'를 adaptLLM의 논문을 참고하여서 instruction-output dataset으로 만들어서 추가함. \n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다",
"# 전처리\n \n- 위의 정규표현식을 이용하여, 한국어 및 영어를 제외한 다양한 외국어, 이모지, 특수 문자 등등 제거.\n- Output 답변이 너무 짧은 경우 제거.\n- 번역 task 최대한 제거. (~번역 task는 한국어로 번역하면 거의 100% 오류)"
]
| [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"# Ko-various-dataset\n\n- kyujinpy/KOR-OpenOrca-Platypus-v3 포함. \n- 추가적으로, skt/kobest_v1 데이터셋 중 'COPA'와 'Hellaswag'를 adaptLLM의 논문을 참고하여서 instruction-output dataset으로 만들어서 추가함. \n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다",
"# 전처리\n \n- 위의 정규표현식을 이용하여, 한국어 및 영어를 제외한 다양한 외국어, 이모지, 특수 문자 등등 제거.\n- Output 답변이 너무 짧은 경우 제거.\n- 번역 task 최대한 제거. (~번역 task는 한국어로 번역하면 거의 100% 오류)"
]
| [
19,
106,
63
]
| [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n# Ko-various-dataset\n\n- kyujinpy/KOR-OpenOrca-Platypus-v3 포함. \n- 추가적으로, skt/kobest_v1 데이터셋 중 'COPA'와 'Hellaswag'를 adaptLLM의 논문을 참고하여서 instruction-output dataset으로 만들어서 추가함. \n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다# 전처리\n \n- 위의 정규표현식을 이용하여, 한국어 및 영어를 제외한 다양한 외국어, 이모지, 특수 문자 등등 제거.\n- Output 답변이 너무 짧은 경우 제거.\n- 번역 task 최대한 제거. (~번역 task는 한국어로 번역하면 거의 100% 오류)"
]
|
726998fc7e3e7994ccf2fb00d8a5943481d3ea40 |
## Proverbot Scrapes
Here we include a dump of proofs in coq-gym using the proverbot9001 tool. | brando/Coq-Gym-Data-Set | [
"license:apache-2.0",
"region:us"
]
| 2023-11-24T19:41:52+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "relevant_lemmas", "sequence": "string"}, {"name": "prev_tactics", "sequence": "string"}, {"name": "context", "struct": [{"name": "bg_goals", "list": [{"name": "goal", "dtype": "string"}, {"name": "hypotheses", "sequence": "string"}]}, {"name": "fg_goals", "list": [{"name": "goal", "dtype": "string"}, {"name": "hypotheses", "sequence": "string"}]}, {"name": "given_up_goals", "list": [{"name": "goal", "dtype": "string"}, {"name": "hypotheses", "sequence": "string"}]}, {"name": "shelved_goals", "list": [{"name": "goal", "dtype": "string"}, {"name": "hypotheses", "sequence": "string"}]}]}, {"name": "tactic", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 4006839384, "num_examples": 363042}], "download_size": 27586028, "dataset_size": 4006839384}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-12-05T06:09:09+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
|
## Proverbot Scrapes
Here we include a dump of proofs in coq-gym using the proverbot9001 tool. | [
"## Proverbot Scrapes\n\nHere we include a dump of proofs in coq-gym using the proverbot9001 tool."
]
| [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Proverbot Scrapes\n\nHere we include a dump of proofs in coq-gym using the proverbot9001 tool."
]
| [
14,
28
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n## Proverbot Scrapes\n\nHere we include a dump of proofs in coq-gym using the proverbot9001 tool."
]
|
231a0b42dced4406d74b70393bbfe02ca5847c7c | # Dataset Card for Dataset Name
The Foundation Model Transparency Index is an ongoing initiative of the CRFM to comprehensively assess the transparency of foundation model developers.
- **Created by:** Center for Research on Foundation Models
- **License:** [Creative Commons Attribution 4.0 International](https://github.com/stanford-crfm/fmti/blob/main/LICENSE.md)
## Dataset Sources
- **Repository:** https://github.com/stanford-crfm/fmti
- **Paper:** https://arxiv.org/abs/2310.12941
- **Website:** https://crfm.stanford.edu/fmti/
## Uses
Assess the transparency of foundation model developers.
## Dataset Structure
- `Domain`: `Upstream` (model building), `Model` (properties and function), and `Downstream` (distribution and usage).
- `Subdomain`: Data, labor, compute, methods, model basics, model access, capabilities, risks, mitigations, distribution, usage policy, feedback, and impact.
- `Indicator`: Name of the indicator.
- `Definition`: Question for the developers of the model.
- `Notes`: Explanation of how to receive the point.
- References: `Reference_1` and `Reference_2` with their corresponding `Link_1` and `Link_2`.
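A minimal sketch of loading the indicators and grouping them by the `Domain` column described above; the split name is an assumption and should be checked against the repo.
```python
from datasets import load_dataset
# Load the transparency indicators (the "train" split name is an assumption).
indicators = load_dataset("mariagrandury/fmti-indicators", split="train")
# Group indicator names by the Domain column: Upstream, Model, Downstream.
by_domain = {}
for row in indicators:
by_domain.setdefault(row["Domain"], []).append(row["Indicator"])
for domain, names in sorted(by_domain.items()):
print(domain, len(names))
```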
## Citation
```
@article{bommasani2023fmti,
author = {Bommasani, Rishi and
Klyman, Kevin and
Longpre, Shayne and
Kapoor, Sayash and
Maslej, Nestor and
Xiong, Betty and
Zhang, Daniel and
Liang, Percy},
title = {The Foundation Model Transparency Index},
month = oct,
year = 2023,
url = {https://crfm.stanford.edu/fmti/FMTI.pdf}
}
```
| mariagrandury/fmti-indicators | [
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2310.12941",
"region:us"
]
| 2023-11-24T20:10:25+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["n<1K"], "pretty_name": "The Foundation Model Transparency Index"} | 2023-11-24T20:36:33+00:00 | [
"2310.12941"
]
| [
"en"
]
| TAGS
#size_categories-n<1K #language-English #license-cc-by-4.0 #arxiv-2310.12941 #region-us
| # Dataset Card for Dataset Name
The Foundation Model Transparency Index is an ongoing initiative of the CRFM to comprehensively assess the transparency of foundation model developers.
- Created by: Center for Research on Foundation Models
- License: Creative Commons Attribution 4.0 International
## Dataset Sources
- Repository: URL
- Paper: URL
- Website: URL
## Uses
Assess the transparency of foundation model developers.
## Dataset Structure
- 'Domain': 'Upstream' (model building), 'Model' (properties and function), and 'Downstream' (distribution and usage).
- 'Subdomain': Data, labor, compute, methods, model basics, model access, capabilities, risks, mitigations, distribution, usage policy, feedback, and impact.
- 'Indicator': Name of the indicator.
- 'Definition': Question for the developers of the model.
- 'Notes': Explanation of how to receive the point.
- References: 'Reference_1' and 'Reference_2' with their corresponding 'Link_1' and 'Link_2'.
| [
"# Dataset Card for Dataset Name\n\nThe Foundation Model Transparency Index is an ongoing initiative of the CRFM to comprehensively assess the transparency of foundation model developers.\n\n- Created by: Center for Research on Foundation Models\n- License: Creative Commons Attribution 4.0 International",
"## Dataset Sources\n\n- Repository: URL\n- Paper: URL\n- Website: URL",
"## Uses\n\nAssess the transparency of foundation model developers.",
"## Dataset Structure\n\n- 'Domain': 'Upstream' (model building), 'Model' (properties and function), and 'Downstream' (distribution and usage).\n- 'Subdomain': Data, labor, compute, methods, model basics, model access, capabilities, risks, mitigations, distribution, usage policy, feedback, and impact.\n- 'Indicator': Name of the indicator.\n- 'Definition': Question for the developers of the model.\n- 'Notes': Explanation of how to receive the point.\n- References: 'Reference_1' and 'Reference_2' with their corresponding 'Link_1' and 'Link_2'."
]
| [
"TAGS\n#size_categories-n<1K #language-English #license-cc-by-4.0 #arxiv-2310.12941 #region-us \n",
"# Dataset Card for Dataset Name\n\nThe Foundation Model Transparency Index is an ongoing initiative of the CRFM to comprehensively assess the transparency of foundation model developers.\n\n- Created by: Center for Research on Foundation Models\n- License: Creative Commons Attribution 4.0 International",
"## Dataset Sources\n\n- Repository: URL\n- Paper: URL\n- Website: URL",
"## Uses\n\nAssess the transparency of foundation model developers.",
"## Dataset Structure\n\n- 'Domain': 'Upstream' (model building), 'Model' (properties and function), and 'Downstream' (distribution and usage).\n- 'Subdomain': Data, labor, compute, methods, model basics, model access, capabilities, risks, mitigations, distribution, usage policy, feedback, and impact.\n- 'Indicator': Name of the indicator.\n- 'Definition': Question for the developers of the model.\n- 'Notes': Explanation of how to receive the point.\n- References: 'Reference_1' and 'Reference_2' with their corresponding 'Link_1' and 'Link_2'."
]
| [
38,
57,
19,
16,
162
]
| [
"passage: TAGS\n#size_categories-n<1K #language-English #license-cc-by-4.0 #arxiv-2310.12941 #region-us \n# Dataset Card for Dataset Name\n\nThe Foundation Model Transparency Index is an ongoing initiative of the CRFM to comprehensively assess the transparency of foundation model developers.\n\n- Created by: Center for Research on Foundation Models\n- License: Creative Commons Attribution 4.0 International## Dataset Sources\n\n- Repository: URL\n- Paper: URL\n- Website: URL## Uses\n\nAssess the transparency of foundation model developers.## Dataset Structure\n\n- 'Domain': 'Upstream' (model building), 'Model' (properties and function), and 'Downstream' (distribution and usage).\n- 'Subdomain': Data, labor, compute, methods, model basics, model access, capabilities, risks, mitigations, distribution, usage policy, feedback, and impact.\n- 'Indicator': Name of the indicator.\n- 'Definition': Question for the developers of the model.\n- 'Notes': Explanation of how to receive the point.\n- References: 'Reference_1' and 'Reference_2' with their corresponding 'Link_1' and 'Link_2'."
]
|
19e33fab2cf9a2b4285436f50436b2eae1d9eac1 |
# Dataset of le_malin (Azur Lane)
This is the dataset of le_malin (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 530 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 598 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 530 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 530 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 347 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 598 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 598 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/le_malin_azurlane | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-24T20:31:19+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-24T20:31:39+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of le\_malin (Azur Lane)
================================
This is the dataset of le\_malin (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
6156d68ead03eaa0dccf71b0f172f19923e870f9 | # Dataset Card for "latent-trees-agreement-ID"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | michaelginn/latent-trees-agreement-ID | [
"region:us"
]
| 2023-11-24T20:32:27+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "depth", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 107176.8, "num_examples": 2400}, {"name": "eval", "num_bytes": 35725.6, "num_examples": 800}, {"name": "test", "num_bytes": 35725.6, "num_examples": 800}], "download_size": 56457, "dataset_size": 178628.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-14T03:34:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "latent-trees-agreement-ID"
More Information needed | [
"# Dataset Card for \"latent-trees-agreement-ID\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent-trees-agreement-ID\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"latent-trees-agreement-ID\"\n\nMore Information needed"
]
|
5d3ee82861dea24551a3772ab8ab6b1ff2eede7b |
# Dataset Card for Evaluation run of euclaise/Ferret_7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/euclaise/Ferret_7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [euclaise/Ferret_7B](https://huggingface.co/euclaise/Ferret_7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_euclaise__Ferret_7B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-24T20:51:17.073037](https://huggingface.co/datasets/open-llm-leaderboard/details_euclaise__Ferret_7B_public/blob/main/results_2023-11-24T20-51-17.073037.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5942716228548698,
"acc_stderr": 0.033152282530121875,
"acc_norm": 0.6048893408330033,
"acc_norm_stderr": 0.03399052086609082,
"mc1": 0.2766217870257038,
"mc1_stderr": 0.015659605755326923,
"mc2": 0.3993660994529629,
"mc2_stderr": 0.014553301107110514,
"em": 0.001572986577181208,
"em_stderr": 0.00040584511324177344,
"f1": 0.06532718120805381,
"f1_stderr": 0.0014896342146480434
},
"harness|arc:challenge|25": {
"acc": 0.5776450511945392,
"acc_stderr": 0.014434138713379983,
"acc_norm": 0.6228668941979523,
"acc_norm_stderr": 0.014163366896192598
},
"harness|hellaswag|10": {
"acc": 0.6250746863174667,
"acc_stderr": 0.004831142570475506,
"acc_norm": 0.8132842063333997,
"acc_norm_stderr": 0.0038888680996290764
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.04605661864718381,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04605661864718381
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316091,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316091
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6566037735849056,
"acc_stderr": 0.02922452646912479,
"acc_norm": 0.6566037735849056,
"acc_norm_stderr": 0.02922452646912479
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.03899073687357335,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.03899073687357335
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5722543352601156,
"acc_stderr": 0.03772446857518027,
"acc_norm": 0.5722543352601156,
"acc_norm_stderr": 0.03772446857518027
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082635,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082635
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.71,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.047028804320496165,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.047028804320496165
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3862433862433862,
"acc_stderr": 0.025075981767601677,
"acc_norm": 0.3862433862433862,
"acc_norm_stderr": 0.025075981767601677
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.0436031486007746,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.0436031486007746
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6774193548387096,
"acc_stderr": 0.026593084516572277,
"acc_norm": 0.6774193548387096,
"acc_norm_stderr": 0.026593084516572277
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.035145285621750094,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.035145285621750094
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.026499057701397467,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.026499057701397467
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5897435897435898,
"acc_stderr": 0.024939313906940798,
"acc_norm": 0.5897435897435898,
"acc_norm_stderr": 0.024939313906940798
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.027420019350945277,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.027420019350945277
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.030868682604121622,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.030868682604121622
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.03879687024073327,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.03879687024073327
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7889908256880734,
"acc_stderr": 0.01749392240411265,
"acc_norm": 0.7889908256880734,
"acc_norm_stderr": 0.01749392240411265
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608043,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.029771775228145635,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.029771775228145635
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159256,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159256
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7251908396946565,
"acc_stderr": 0.03915345408847836,
"acc_norm": 0.7251908396946565,
"acc_norm_stderr": 0.03915345408847836
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.04139112727635463,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.04139112727635463
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7239263803680982,
"acc_stderr": 0.035123852837050475,
"acc_norm": 0.7239263803680982,
"acc_norm_stderr": 0.035123852837050475
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8034188034188035,
"acc_stderr": 0.026035386098951292,
"acc_norm": 0.8034188034188035,
"acc_norm_stderr": 0.026035386098951292
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.789272030651341,
"acc_stderr": 0.014583812465862545,
"acc_norm": 0.789272030651341,
"acc_norm_stderr": 0.014583812465862545
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.02599247202930638,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.02599247202930638
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3787709497206704,
"acc_stderr": 0.01622353351036512,
"acc_norm": 0.3787709497206704,
"acc_norm_stderr": 0.01622353351036512
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6535947712418301,
"acc_stderr": 0.02724561304721537,
"acc_norm": 0.6535947712418301,
"acc_norm_stderr": 0.02724561304721537
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.026981478043648043,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.026981478043648043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.026041766202717163,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.026041766202717163
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4574468085106383,
"acc_stderr": 0.029719281272236837,
"acc_norm": 0.4574468085106383,
"acc_norm_stderr": 0.029719281272236837
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3898305084745763,
"acc_stderr": 0.012456386619082606,
"acc_norm": 0.3898305084745763,
"acc_norm_stderr": 0.012456386619082606
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5992647058823529,
"acc_stderr": 0.029768263528933105,
"acc_norm": 0.5992647058823529,
"acc_norm_stderr": 0.029768263528933105
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6225490196078431,
"acc_stderr": 0.019610851474880297,
"acc_norm": 0.6225490196078431,
"acc_norm_stderr": 0.019610851474880297
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6272727272727273,
"acc_stderr": 0.046313813194254656,
"acc_norm": 0.6272727272727273,
"acc_norm_stderr": 0.046313813194254656
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6326530612244898,
"acc_stderr": 0.030862144921087558,
"acc_norm": 0.6326530612244898,
"acc_norm_stderr": 0.030862144921087558
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7761194029850746,
"acc_stderr": 0.029475250236017204,
"acc_norm": 0.7761194029850746,
"acc_norm_stderr": 0.029475250236017204
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774711,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774711
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.031581495393387324,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.031581495393387324
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2766217870257038,
"mc1_stderr": 0.015659605755326923,
"mc2": 0.3993660994529629,
"mc2_stderr": 0.014553301107110514
},
"harness|winogrande|5": {
"acc": 0.7750591949486977,
"acc_stderr": 0.011735043564126742
},
"harness|drop|3": {
"em": 0.001572986577181208,
"em_stderr": 0.00040584511324177344,
"f1": 0.06532718120805381,
"f1_stderr": 0.0014896342146480434
},
"harness|gsm8k|5": {
"acc": 0.02047005307050796,
"acc_stderr": 0.003900413385915721
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_euclaise__Ferret_7B | [
"region:us"
]
| 2023-11-24T20:54:18+00:00 | {"pretty_name": "Evaluation run of euclaise/Ferret_7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [euclaise/Ferret_7B](https://huggingface.co/euclaise/Ferret_7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_euclaise__Ferret_7B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-24T20:51:17.073037](https://huggingface.co/datasets/open-llm-leaderboard/details_euclaise__Ferret_7B_public/blob/main/results_2023-11-24T20-51-17.073037.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5942716228548698,\n \"acc_stderr\": 0.033152282530121875,\n \"acc_norm\": 0.6048893408330033,\n \"acc_norm_stderr\": 0.03399052086609082,\n \"mc1\": 0.2766217870257038,\n \"mc1_stderr\": 0.015659605755326923,\n \"mc2\": 0.3993660994529629,\n \"mc2_stderr\": 0.014553301107110514,\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177344,\n \"f1\": 0.06532718120805381,\n \"f1_stderr\": 0.0014896342146480434\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5776450511945392,\n \"acc_stderr\": 0.014434138713379983,\n \"acc_norm\": 0.6228668941979523,\n \"acc_norm_stderr\": 0.014163366896192598\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6250746863174667,\n \"acc_stderr\": 0.004831142570475506,\n \"acc_norm\": 0.8132842063333997,\n \"acc_norm_stderr\": 0.0038888680996290764\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.04605661864718381,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316091,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316091\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6566037735849056,\n \"acc_stderr\": 0.02922452646912479,\n \"acc_norm\": 0.6566037735849056,\n \"acc_norm_stderr\": 0.02922452646912479\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.03899073687357335,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03899073687357335\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n \"acc_stderr\": 0.03772446857518027,\n \"acc_norm\": 0.5722543352601156,\n \"acc_norm_stderr\": 0.03772446857518027\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.047028804320496165,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.047028804320496165\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3862433862433862,\n \"acc_stderr\": 0.025075981767601677,\n \"acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.025075981767601677\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.0436031486007746,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.0436031486007746\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6774193548387096,\n \"acc_stderr\": 0.026593084516572277,\n \"acc_norm\": 0.6774193548387096,\n \"acc_norm_stderr\": 0.026593084516572277\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.035145285621750094,\n \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.035145285621750094\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365897,\n \"acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365897\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.026499057701397467,\n \"acc_norm\": 0.8393782383419689,\n 
\"acc_norm_stderr\": 0.026499057701397467\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5897435897435898,\n \"acc_stderr\": 0.024939313906940798,\n \"acc_norm\": 0.5897435897435898,\n \"acc_norm_stderr\": 0.024939313906940798\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.027420019350945277,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.027420019350945277\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121622,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121622\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.03879687024073327,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.03879687024073327\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7889908256880734,\n \"acc_stderr\": 0.01749392240411265,\n \"acc_norm\": 0.7889908256880734,\n \"acc_norm_stderr\": 0.01749392240411265\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608043,\n \"acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.029771775228145635,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.029771775228145635\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159256,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159256\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847836,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847836\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7107438016528925,\n \"acc_stderr\": 0.04139112727635463,\n \"acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.04139112727635463\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8034188034188035,\n \"acc_stderr\": 0.026035386098951292,\n \"acc_norm\": 0.8034188034188035,\n \"acc_norm_stderr\": 0.026035386098951292\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.789272030651341,\n \"acc_stderr\": 0.014583812465862545,\n \"acc_norm\": 0.789272030651341,\n \"acc_norm_stderr\": 0.014583812465862545\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.630057803468208,\n \"acc_stderr\": 0.02599247202930638,\n \"acc_norm\": 0.630057803468208,\n \"acc_norm_stderr\": 0.02599247202930638\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3787709497206704,\n \"acc_stderr\": 0.01622353351036512,\n \"acc_norm\": 0.3787709497206704,\n \"acc_norm_stderr\": 0.01622353351036512\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6535947712418301,\n \"acc_stderr\": 0.02724561304721537,\n \"acc_norm\": 0.6535947712418301,\n \"acc_norm_stderr\": 0.02724561304721537\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.026981478043648043,\n \"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 0.026981478043648043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.026041766202717163,\n \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.026041766202717163\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236837,\n \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236837\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3898305084745763,\n \"acc_stderr\": 0.012456386619082606,\n \"acc_norm\": 0.3898305084745763,\n \"acc_norm_stderr\": 0.012456386619082606\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5992647058823529,\n \"acc_stderr\": 0.029768263528933105,\n \"acc_norm\": 0.5992647058823529,\n \"acc_norm_stderr\": 0.029768263528933105\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6225490196078431,\n \"acc_stderr\": 0.019610851474880297,\n \"acc_norm\": 0.6225490196078431,\n \"acc_norm_stderr\": 0.019610851474880297\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n \"acc_stderr\": 0.046313813194254656,\n \"acc_norm\": 0.6272727272727273,\n \"acc_norm_stderr\": 0.046313813194254656\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6326530612244898,\n \"acc_stderr\": 0.030862144921087558,\n \"acc_norm\": 0.6326530612244898,\n \"acc_norm_stderr\": 0.030862144921087558\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7761194029850746,\n \"acc_stderr\": 0.029475250236017204,\n \"acc_norm\": 0.7761194029850746,\n \"acc_norm_stderr\": 0.029475250236017204\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774711,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774711\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.031581495393387324,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.031581495393387324\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2766217870257038,\n \"mc1_stderr\": 0.015659605755326923,\n \"mc2\": 0.3993660994529629,\n \"mc2_stderr\": 0.014553301107110514\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7750591949486977,\n \"acc_stderr\": 0.011735043564126742\n },\n \"harness|drop|3\": {\n \"em\": 
0.001572986577181208,\n \"em_stderr\": 0.00040584511324177344,\n \"f1\": 0.06532718120805381,\n \"f1_stderr\": 0.0014896342146480434\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02047005307050796,\n \"acc_stderr\": 0.003900413385915721\n }\n}\n```", "repo_url": "https://huggingface.co/euclaise/Ferret_7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|arc:challenge|25_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|drop|3_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|gsm8k|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hellaswag|10_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-24T20-51-17.073037.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-24T20-51-17.073037.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-24T20-51-17.073037.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-24T20-51-17.073037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", 
"data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": 
["**/details_harness|truthfulqa:mc|0_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["**/details_harness|winogrande|5_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-24T20-51-17.073037.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_24T20_51_17.073037", "path": ["results_2023-11-24T20-51-17.073037.parquet"]}, {"split": "latest", "path": ["results_2023-11-24T20-51-17.073037.parquet"]}]}]} | 2023-11-24T20:55:03+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of euclaise/Ferret_7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model euclaise/Ferret_7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
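A minimal sketch is shown below; the repository id follows the usual `details_<org>__<model>` naming convention and is an assumption, while the config name and split names are taken from the configurations listed for this dataset:

```python
from datasets import load_dataset

# Repository id assumed from the standard Open LLM Leaderboard convention; adjust if it differs.
details = load_dataset(
    "open-llm-leaderboard/details_euclaise__Ferret_7B",
    "harness_winogrande_5",  # any config name listed in this repo works here
    split="latest",          # or the timestamped split "2023_11_24T20_51_17.073037"
)
print(details[0])
```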
## Latest results
These are the latest results from run 2023-11-24T20:51:17.073037 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of euclaise/Ferret_7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model euclaise/Ferret_7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-24T20:51:17.073037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of euclaise/Ferret_7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model euclaise/Ferret_7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-24T20:51:17.073037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
18,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of euclaise/Ferret_7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model euclaise/Ferret_7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-24T20:51:17.073037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
18070db092eca11f0022b8be0afb4f112b116603 |
# scientificbeekeeping
raw webtext | BEE-spoke-data/scientificbeekeeping | [
"license:apache-2.0",
"region:us"
]
| 2023-11-24T21:01:28+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10438862, "num_examples": 471}], "download_size": 4117007, "dataset_size": 10438862}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-24T21:34:04+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
|
# scientificbeekeeping
raw webtext | [
"# scientificbeekeeping\n\n\nraw webtext"
]
| [
"TAGS\n#license-apache-2.0 #region-us \n",
"# scientificbeekeeping\n\n\nraw webtext"
]
| [
14,
9
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n# scientificbeekeeping\n\n\nraw webtext"
]
|
84455ce085bba13064813d60632c1db26809513b |
# NewsQASum, a dataset for question answering and summarization of news
<!-- Provide a quick summary of the dataset. -->
This dataset contains the CNN articles at the overlap between the [newsqa](https://huggingface.co/datasets/newsqa) question-answering
dataset and the [CNN DailyMail](https://huggingface.co/datasets/cnn_dailymail) summarization dataset. Each article is annotated with
a summary and a list of questions and corresponding answers.
**Tasks:** QA, summarization, text retrieval
**Genre:** News stories
**Language:** English | glnmario/news-qa-summarization | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
]
| 2023-11-24T21:37:27+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "question-answering", "text-retrieval", "text-generation"], "pretty_name": "NewsQASum"} | 2023-11-24T22:55:39+00:00 | []
| [
"en"
]
| TAGS
#task_categories-summarization #task_categories-question-answering #task_categories-text-retrieval #task_categories-text-generation #size_categories-10K<n<100K #language-English #region-us
|
# NewsQASum, a dataset for question answering and summarization of news
This dataset contains the CNN articles at the overlap between the newsqa question-answering
dataset and the CNN DailyMail summarization dataset. Each article is annotated with
a summary and a list of questions and corresponding answers.
Tasks: QA, summarization, text retrieval
Genre: News stories
Language: English | [
"# NewsQASum, a dataset for question answering and summarization of news\n\n\n\nThis dataset contains the CNN articles at the overlap between the newsqa question-answering \ndataset and the CNN DailyMail summarization dataset. Each article is annotated with \na summary and a list of questions and corresponding answers.\n\nTasks: QA, summarization, text retrieval \nGenre: News stories \nLanguage: English"
]
| [
"TAGS\n#task_categories-summarization #task_categories-question-answering #task_categories-text-retrieval #task_categories-text-generation #size_categories-10K<n<100K #language-English #region-us \n",
"# NewsQASum, a dataset for question answering and summarization of news\n\n\n\nThis dataset contains the CNN articles at the overlap between the newsqa question-answering \ndataset and the CNN DailyMail summarization dataset. Each article is annotated with \na summary and a list of questions and corresponding answers.\n\nTasks: QA, summarization, text retrieval \nGenre: News stories \nLanguage: English"
]
| [
67,
94
]
| [
"passage: TAGS\n#task_categories-summarization #task_categories-question-answering #task_categories-text-retrieval #task_categories-text-generation #size_categories-10K<n<100K #language-English #region-us \n# NewsQASum, a dataset for question answering and summarization of news\n\n\n\nThis dataset contains the CNN articles at the overlap between the newsqa question-answering \ndataset and the CNN DailyMail summarization dataset. Each article is annotated with \na summary and a list of questions and corresponding answers.\n\nTasks: QA, summarization, text retrieval \nGenre: News stories \nLanguage: English"
]
|
705304c9ae2f5e25558ad8895abcfea2fe8039d2 | # Dataset Card for "trial_Level_2_A"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jadasdn/trial_Level_2_A | [
"region:us"
]
| 2023-11-24T22:58:05+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2624020367.8833747, "num_examples": 58098}], "download_size": 2607714351, "dataset_size": 2624020367.8833747}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-24T23:02:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "trial_Level_2_A"
More Information needed | [
"# Dataset Card for \"trial_Level_2_A\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"trial_Level_2_A\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"trial_Level_2_A\"\n\nMore Information needed"
]
|
2f0656fbf9c7d187620a2918dd76d241aa5c2f03 | # Dataset Card for "timit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | confit/timit | [
"region:us"
]
| 2023-11-25T00:42:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "FADG0", "1": "FAEM0", "2": "FAJW0", "3": "FAKS0", "4": "FALK0", "5": "FALR0", "6": "FAPB0", "7": "FASW0", "8": "FAWF0", "9": "FBAS0", "10": "FBCG1", "11": "FBCH0", "12": "FBJL0", "13": "FBLV0", "14": "FBMH0", "15": "FBMJ0", "16": "FCAG0", "17": "FCAJ0", "18": "FCAL1", "19": "FCAU0", "20": "FCDR1", "21": "FCEG0", "22": "FCFT0", "23": "FCJF0", "24": "FCJS0", "25": "FCKE0", "26": "FCLT0", "27": "FCMG0", "28": "FCMH0", "29": "FCMH1", "30": "FCMM0", "31": "FCMR0", "32": "FCRH0", "33": "FCRZ0", "34": "FCYL0", "35": "FDAC1", "36": "FDAS1", "37": "FDAW0", "38": "FDFB0", "39": "FDHC0", "40": "FDJH0", "41": "FDKN0", "42": "FDML0", "43": "FDMS0", "44": "FDMY0", "45": "FDNC0", "46": "FDRD1", "47": "FDRW0", "48": "FDTD0", "49": "FDXW0", "50": "FEAC0", "51": "FEAR0", "52": "FECD0", "53": "FEDW0", "54": "FEEH0", "55": "FELC0", "56": "FEME0", "57": "FETB0", "58": "FEXM0", "59": "FGCS0", "60": "FGDP0", "61": "FGJD0", "62": "FGMB0", "63": "FGMD0", "64": "FGRW0", "65": "FGWR0", "66": "FHES0", "67": "FHEW0", "68": "FHLM0", "69": "FHXS0", "70": "FISB0", "71": "FJAS0", "72": "FJCS0", "73": "FJDM2", "74": "FJEM0", "75": "FJEN0", "76": "FJHK0", "77": "FJKL0", "78": "FJLG0", "79": "FJLM0", "80": "FJLR0", "81": "FJMG0", "82": "FJRB0", "83": "FJRE0", "84": "FJRP1", "85": "FJSA0", "86": "FJSJ0", "87": "FJSK0", "88": "FJSP0", "89": "FJWB0", "90": "FJWB1", "91": "FJXM0", "92": "FJXP0", "93": "FKAA0", "94": "FKDE0", "95": "FKDW0", "96": "FKFB0", "97": "FKKH0", "98": "FKLC0", "99": "FKLC1", "100": "FKLH0", "101": "FKMS0", "102": "FKSR0", "103": "FLAC0", "104": "FLAG0", "105": "FLAS0", "106": "FLBW0", "107": "FLEH0", "108": "FLET0", "109": "FLHD0", "110": "FLJA0", "111": "FLJD0", "112": "FLJG0", "113": "FLKD0", "114": "FLKM0", "115": "FLMA0", "116": "FLMC0", "117": "FLMK0", "118": "FLNH0", "119": "FLOD0", "120": "FLTM0", "121": "FMAF0", "122": "FMAH0", "123": "FMAH1", "124": "FMBG0", "125": "FMCM0", "126": "FMEM0", "127": "FMGD0", "128": "FMJB0", "129": "FMJF0", "130": "FMJU0", "131": "FMKC0", "132": "FMKF0", "133": "FMLD0", "134": "FMMH0", "135": "FMML0", "136": "FMPG0", "137": "FNKL0", "138": "FNLP0", "139": "FNMR0", "140": "FNTB0", "141": "FPAB1", "142": "FPAC0", "143": "FPAD0", "144": "FPAF0", "145": "FPAS0", "146": "FPAZ0", "147": "FPJF0", "148": "FPKT0", "149": "FPLS0", "150": "FPMY0", "151": "FRAM1", "152": "FREH0", "153": "FREW0", "154": "FRJB0", "155": "FRLL0", "156": "FRNG0", "157": "FSAG0", "158": "FSAH0", "159": "FSAK0", "160": "FSBK0", "161": "FSCN0", "162": "FSDC0", "163": "FSDJ0", "164": "FSEM0", "165": "FSGF0", "166": "FSJG0", "167": "FSJK1", "168": "FSJS0", "169": "FSJW0", "170": "FSKC0", "171": "FSKL0", "172": "FSKP0", "173": "FSLB1", "174": "FSLS0", "175": "FSMA0", "176": "FSMM0", "177": "FSMS1", "178": "FSPM0", "179": "FSRH0", "180": "FSSB0", "181": "FSXA0", "182": "FTAJ0", "183": "FTBR0", "184": "FTBW0", "185": "FTLG0", "186": "FTLH0", "187": "FTMG0", "188": "FUTB0", "189": "FVFB0", "190": "FVKB0", "191": "FVMH0", "192": "MABC0", "193": "MABW0", "194": "MADC0", "195": "MADD0", "196": "MAEB0", "197": "MAEO0", "198": "MAFM0", "199": "MAHH0", "200": "MAJC0", "201": "MAJP0", "202": "MAKB0", "203": "MAKR0", "204": "MAPV0", "205": "MARC0", "206": "MARW0", 
"207": "MBAR0", "208": "MBBR0", "209": "MBCG0", "210": "MBDG0", "211": "MBEF0", "212": "MBGT0", "213": "MBJK0", "214": "MBJV0", "215": "MBMA0", "216": "MBMA1", "217": "MBML0", "218": "MBNS0", "219": "MBOM0", "220": "MBPM0", "221": "MBSB0", "222": "MBTH0", "223": "MBWM0", "224": "MBWP0", "225": "MCAE0", "226": "MCAL0", "227": "MCCS0", "228": "MCDC0", "229": "MCDD0", "230": "MCDR0", "231": "MCEF0", "232": "MCEM0", "233": "MCEW0", "234": "MCHH0", "235": "MCHL0", "236": "MCLK0", "237": "MCLM0", "238": "MCMB0", "239": "MCMJ0", "240": "MCPM0", "241": "MCRC0", "242": "MCRE0", "243": "MCSH0", "244": "MCSS0", "245": "MCTH0", "246": "MCTM0", "247": "MCTT0", "248": "MCTW0", "249": "MCXM0", "250": "MDAB0", "251": "MDAC0", "252": "MDAC2", "253": "MDAS0", "254": "MDAW1", "255": "MDBB0", "256": "MDBB1", "257": "MDBP0", "258": "MDCD0", "259": "MDCM0", "260": "MDDC0", "261": "MDED0", "262": "MDEF0", "263": "MDEM0", "264": "MDHL0", "265": "MDHS0", "266": "MDJM0", "267": "MDKS0", "268": "MDLB0", "269": "MDLC0", "270": "MDLC1", "271": "MDLC2", "272": "MDLD0", "273": "MDLF0", "274": "MDLH0", "275": "MDLM0", "276": "MDLR0", "277": "MDLR1", "278": "MDLS0", "279": "MDMA0", "280": "MDMT0", "281": "MDNS0", "282": "MDPB0", "283": "MDPK0", "284": "MDPS0", "285": "MDRB0", "286": "MDRD0", "287": "MDRM0", "288": "MDSC0", "289": "MDSJ0", "290": "MDSS0", "291": "MDSS1", "292": "MDTB0", "293": "MDVC0", "294": "MDWA0", "295": "MDWD0", "296": "MDWH0", "297": "MDWK0", "298": "MDWM0", "299": "MEAL0", "300": "MEDR0", "301": "MEFG0", "302": "MEGJ0", "303": "MEJL0", "304": "MEJS0", "305": "MERS0", "306": "MESD0", "307": "MESG0", "308": "MESJ0", "309": "MEWM0", "310": "MFER0", "311": "MFGK0", "312": "MFMC0", "313": "MFRM0", "314": "MFWK0", "315": "MFXS0", "316": "MFXV0", "317": "MGAF0", "318": "MGAG0", "319": "MGAK0", "320": "MGAR0", "321": "MGAW0", "322": "MGES0", "323": "MGJC0", "324": "MGJF0", "325": "MGLB0", "326": "MGMM0", "327": "MGRL0", "328": "MGRP0", "329": "MGRT0", "330": "MGSH0", "331": "MGSL0", "332": "MGWT0", "333": "MGXP0", "334": "MHBS0", "335": "MHIT0", "336": "MHJB0", "337": "MHMG0", "338": "MHMR0", "339": "MHPG0", "340": "MHRM0", "341": "MHXL0", "342": "MILB0", "343": "MJAC0", "344": "MJAE0", "345": "MJAI0", "346": "MJAR0", "347": "MJBG0", "348": "MJBR0", "349": "MJDA0", "350": "MJDC0", "351": "MJDE0", "352": "MJDG0", "353": "MJDH0", "354": "MJDM0", "355": "MJDM1", "356": "MJEB0", "357": "MJEB1", "358": "MJEE0", "359": "MJES0", "360": "MJFC0", "361": "MJFH0", "362": "MJFR0", "363": "MJHI0", "364": "MJJB0", "365": "MJJG0", "366": "MJJJ0", "367": "MJJM0", "368": "MJKR0", "369": "MJLB0", "370": "MJLG1", "371": "MJLN0", "372": "MJLS0", "373": "MJMA0", "374": "MJMD0", "375": "MJMM0", "376": "MJMP0", "377": "MJPG0", "378": "MJPM0", "379": "MJPM1", "380": "MJRA0", "381": "MJRF0", "382": "MJRG0", "383": "MJRH0", "384": "MJRH1", "385": "MJRK0", "386": "MJRP0", "387": "MJSR0", "388": "MJSW0", "389": "MJTC0", "390": "MJTH0", "391": "MJVW0", "392": "MJWG0", "393": "MJWS0", "394": "MJWT0", "395": "MJXA0", "396": "MJXL0", "397": "MKAG0", "398": "MKAH0", "399": "MKAJ0", "400": "MKAM0", "401": "MKCH0", "402": "MKCL0", "403": "MKDB0", "404": "MKDD0", "405": "MKDR0", "406": "MKDT0", "407": "MKES0", "408": "MKJL0", "409": "MKJO0", "410": "MKLN0", "411": "MKLR0", "412": "MKLS0", "413": "MKLS1", "414": "MKLT0", "415": "MKLW0", "416": "MKRG0", "417": "MKXL0", "418": "MLBC0", "419": "MLEL0", "420": "MLIH0", "421": "MLJB0", "422": "MLJC0", "423": "MLJH0", "424": "MLLL0", "425": "MLNS0", "426": "MLNT0", "427": "MLSH0", "428": "MMAA0", 
"429": "MMAB0", "430": "MMAB1", "431": "MMAG0", "432": "MMAM0", "433": "MMAR0", "434": "MMBS0", "435": "MMCC0", "436": "MMDB0", "437": "MMDB1", "438": "MMDG0", "439": "MMDH0", "440": "MMDM0", "441": "MMDM1", "442": "MMDM2", "443": "MMDS0", "444": "MMEA0", "445": "MMEB0", "446": "MMGC0", "447": "MMGG0", "448": "MMGK0", "449": "MMJB1", "450": "MMJR0", "451": "MMLM0", "452": "MMPM0", "453": "MMRP0", "454": "MMSM0", "455": "MMVP0", "456": "MMWB0", "457": "MMWH0", "458": "MMWS0", "459": "MMWS1", "460": "MMXS0", "461": "MNET0", "462": "MNJM0", "463": "MNLS0", "464": "MNTW0", "465": "MPAB0", "466": "MPAM0", "467": "MPAM1", "468": "MPAR0", "469": "MPCS0", "470": "MPDF0", "471": "MPEB0", "472": "MPFU0", "473": "MPGH0", "474": "MPGL0", "475": "MPGR0", "476": "MPGR1", "477": "MPLB0", "478": "MPMB0", "479": "MPPC0", "480": "MPRB0", "481": "MPRD0", "482": "MPRK0", "483": "MPRT0", "484": "MPSW0", "485": "MPWM0", "486": "MRAB0", "487": "MRAB1", "488": "MRAI0", "489": "MRAM0", "490": "MRAV0", "491": "MRBC0", "492": "MRCG0", "493": "MRCS0", "494": "MRCW0", "495": "MRCZ0", "496": "MRDD0", "497": "MRDM0", "498": "MRDS0", "499": "MREB0", "500": "MREE0", "501": "MREH1", "502": "MREM0", "503": "MRES0", "504": "MREW1", "505": "MRFK0", "506": "MRFL0", "507": "MRGG0", "508": "MRGM0", "509": "MRGS0", "510": "MRHL0", "511": "MRJB1", "512": "MRJH0", "513": "MRJM0", "514": "MRJM1", "515": "MRJM3", "516": "MRJM4", "517": "MRJO0", "518": "MRJR0", "519": "MRJS0", "520": "MRJT0", "521": "MRKM0", "522": "MRKO0", "523": "MRLD0", "524": "MRLJ0", "525": "MRLJ1", "526": "MRLK0", "527": "MRLR0", "528": "MRMB0", "529": "MRMG0", "530": "MRMH0", "531": "MRML0", "532": "MRMS0", "533": "MRMS1", "534": "MROA0", "535": "MRPC0", "536": "MRPC1", "537": "MRPP0", "538": "MRRE0", "539": "MRRK0", "540": "MRSO0", "541": "MRSP0", "542": "MRTC0", "543": "MRTJ0", "544": "MRTK0", "545": "MRVG0", "546": "MRWA0", "547": "MRWS0", "548": "MRWS1", "549": "MRXB0", "550": "MSAH1", "551": "MSAS0", "552": "MSAT0", "553": "MSAT1", "554": "MSDB0", "555": "MSDH0", "556": "MSDS0", "557": "MSEM1", "558": "MSES0", "559": "MSFH0", "560": "MSFH1", "561": "MSFV0", "562": "MSJK0", "563": "MSJS1", "564": "MSLB0", "565": "MSMC0", "566": "MSMR0", "567": "MSMS0", "568": "MSRG0", "569": "MSRR0", "570": "MSTF0", "571": "MSTK0", "572": "MSVS0", "573": "MTAA0", "574": "MTAB0", "575": "MTAS0", "576": "MTAS1", "577": "MTAT0", "578": "MTAT1", "579": "MTBC0", "580": "MTCS0", "581": "MTDB0", "582": "MTDP0", "583": "MTDT0", "584": "MTEB0", "585": "MTER0", "586": "MTHC0", "587": "MTJG0", "588": "MTJM0", "589": "MTJS0", "590": "MTJU0", "591": "MTKD0", "592": "MTKP0", "593": "MTLB0", "594": "MTLC0", "595": "MTLS0", "596": "MTML0", "597": "MTMN0", "598": "MTMR0", "599": "MTMT0", "600": "MTPF0", "601": "MTPG0", "602": "MTPP0", "603": "MTPR0", "604": "MTQC0", "605": "MTRC0", "606": "MTRR0", "607": "MTRT0", "608": "MTWH0", "609": "MTWH1", "610": "MTXS0", "611": "MVJH0", "612": "MVLO0", "613": "MVRW0", "614": "MWAC0", "615": "MWAD0", "616": "MWAR0", "617": "MWBT0", "618": "MWCH0", "619": "MWDK0", "620": "MWEM0", "621": "MWEW0", "622": "MWGR0", "623": "MWJG0", "624": "MWRE0", "625": "MWRP0", "626": "MWSB0", "627": "MWSH0", "628": "MWVW0", "629": "MZMB0"}}}}], "splits": [{"name": "train", "num_bytes": 136862, "num_examples": 3780}, {"name": "validation", "num_bytes": 46145, "num_examples": 1260}, {"name": "test", "num_bytes": 46508, "num_examples": 1260}], "download_size": 124769, "dataset_size": 229515}} | 2023-11-25T00:42:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "timit"
More Information needed | [
"# Dataset Card for \"timit\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"timit\"\n\nMore Information needed"
]
| [
6,
11
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"timit\"\n\nMore Information needed"
]
|
3b5ac2bd45b16e66aba2845f91020b02ec63c2aa | # Dataset Card for "binhvq_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jetaudio/binhvq_news | [
"region:us"
]
| 2023-11-25T00:45:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68939074439.0, "num_examples": 19582227}, {"name": "validation", "num_bytes": 349157289.0, "num_examples": 104519}], "download_size": 35606535605, "dataset_size": 69288231728.0}} | 2023-11-25T04:03:44+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "binhvq_news"
More Information needed | [
"# Dataset Card for \"binhvq_news\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"binhvq_news\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"binhvq_news\"\n\nMore Information needed"
]
|
b085ad16ff3e2d75a060a6f41e830d4bccfaa9f7 | # Dataset Card for "rapids-codegen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bdice/rapids-codegen | [
"region:us"
]
| 2023-11-25T00:49:35+00:00 | {"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 378315950, "num_examples": 16827}], "download_size": 151107014, "dataset_size": 378315950}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T03:47:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rapids-codegen"
More Information needed | [
"# Dataset Card for \"rapids-codegen\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rapids-codegen\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"rapids-codegen\"\n\nMore Information needed"
]
|
bae092306df117a582249dfefd7882648477f12c |
### Description 🙅♂️🤖
Bank of Ghana historical and real-time exchange rates data. [Bank of Ghana](https://www.bog.gov.gh/treasury-and-the-markets/historical-interbank-fx-rates/)
Click Here: [](https://colab.research.google.com/drive/1zZUIyp9zBhwL5CqHS3Ggf5vJCr_yTYw0?usp=sharing)
### Data Format
```json
{
"date": "...",
"currency": "...",
"currency_pair": "...",
"buying": "...",
"selling": "...",
"mid_rate": "..."
}
```
### Load Dataset
```shell
pip install datasets
```
```python
import pandas as pd
from datasets import load_dataset

rates = load_dataset("worldboss/bank-of-ghana-rates", split="train")
pd.DataFrame(rates).head()
```
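Per the `dataset_info` metadata, the actual column names are capitalised (`Date`, `Currency_Pair`, `Buying`, `Selling`, `Mid_Rate`) and the rate columns are stored as strings, so here is a short sketch of converting them before analysis; the grouping shown is only an illustration:

```python
import pandas as pd
from datasets import load_dataset

rates = load_dataset("worldboss/bank-of-ghana-rates", split="train")
df = pd.DataFrame(rates)

# Rate columns are strings in the source data; coerce them to floats for analysis.
for col in ["Buying", "Selling", "Mid_Rate"]:
    df[col] = pd.to_numeric(df[col], errors="coerce")

# Illustrative summary: average mid rate per currency pair.
print(df.groupby("Currency_Pair")["Mid_Rate"].mean().head())
```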
### Author
The data was constructed by Theophilus Siameh ([email protected]). | worldboss/bank-of-ghana-rates | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"ghana",
"news",
"ghana-news",
"bank-of-ghana",
"exchange-rates",
"ghana data",
"region:us"
]
| 2023-11-25T01:01:13+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["conversational", "text-generation", "summarization", "question-answering", "text-classification", "text-retrieval", "translation"], "pretty_name": "No Robots", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Date", "dtype": "string"}, {"name": "Currency", "dtype": "string"}, {"name": "Currency_Pair", "dtype": "string"}, {"name": "Buying", "dtype": "string"}, {"name": "Selling", "dtype": "string"}, {"name": "Mid_Rate", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8628801, "num_examples": 132525}], "download_size": 2273117, "dataset_size": 8628801}, "tags": ["ghana", "news", "ghana-news", "bank-of-ghana", "exchange-rates", "ghana data"]} | 2024-01-10T08:43:08+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #task_categories-summarization #task_categories-question-answering #task_categories-text-classification #task_categories-text-retrieval #task_categories-translation #size_categories-100K<n<1M #language-English #license-apache-2.0 #ghana #news #ghana-news #bank-of-ghana #exchange-rates #ghana data #region-us
|
### Description ️
Bank of Ghana historical and real-time exchange rates data. Bank of Ghana
Click Here:. | [
"### Description ️\nBank of Ghana historical and real-time exchange rates data. Bank of Ghana\n\nClick Here:."
]
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-summarization #task_categories-question-answering #task_categories-text-classification #task_categories-text-retrieval #task_categories-translation #size_categories-100K<n<1M #language-English #license-apache-2.0 #ghana #news #ghana-news #bank-of-ghana #exchange-rates #ghana data #region-us \n",
"### Description ️\nBank of Ghana historical and real-time exchange rates data. Bank of Ghana\n\nClick Here:."
]
| [
132,
32,
4,
6,
22
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-summarization #task_categories-question-answering #task_categories-text-classification #task_categories-text-retrieval #task_categories-translation #size_categories-100K<n<1M #language-English #license-apache-2.0 #ghana #news #ghana-news #bank-of-ghana #exchange-rates #ghana data #region-us \n### Description ️\nBank of Ghana historical and real-time exchange rates data. Bank of Ghana\n\nClick Here:."
]
|
36e094cd911f43219451e71007861e76845b091a |
### Description 🙅♂️🤖
Bank of Ghana historical and real-time treasury bills data. [Bank of Ghana](https://www.bog.gov.gh/treasury-and-the-markets/treasury-bill-rates/)
Click Here: [](https://colab.research.google.com/drive/1zZUIyp9zBhwL5CqHS3Ggf5vJCr_yTYw0?usp=sharing)
### Data Format
```json
{
"issue_date": "...",
"tender": "...",
"security_type": "...",
"discount_rate": "...",
"interest_rate": "..."
}
```
### Load Dataset
```shell
pip install datasets
```
```python
import pandas as pd
from datasets import load_dataset

treasury = load_dataset("worldboss/bank-of-ghana-treasury-bills", split="train")
pd.DataFrame(treasury).head()
```
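The same approach works here. The sketch below is illustrative only and assumes the column names reported in this card's metadata (`Issue_Date`, `Tender`, `Security_Type`, `Discount_Rate`, `Interest_Rate`).
```python
import pandas as pd
from datasets import load_dataset

treasury = load_dataset("worldboss/bank-of-ghana-treasury-bills", split="train")
df = treasury.to_pandas()

# Issue dates are stored as strings; the rate columns are already numeric.
df["Issue_Date"] = pd.to_datetime(df["Issue_Date"], errors="coerce")

# Example: average discount and interest rate per security type.
print(df.groupby("Security_Type")[["Discount_Rate", "Interest_Rate"]].mean())
```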
### Author
The data was constructed by Theophilus Siameh ([email protected]). | worldboss/bank-of-ghana-treasury-bills | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"ghana",
"news",
"ghana-news",
"bank-of-ghana",
"exchange-rates",
"ghana data",
"bank of ghana",
"region:us"
]
| 2023-11-25T01:23:08+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational", "text-generation", "summarization", "question-answering", "text-classification", "text-retrieval", "translation"], "pretty_name": "No Robots", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Issue_Date", "dtype": "string"}, {"name": "Tender", "dtype": "int64"}, {"name": "Security_Type", "dtype": "string"}, {"name": "Discount_Rate", "dtype": "float64"}, {"name": "Interest_Rate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 50338, "num_examples": 958}], "download_size": 23906, "dataset_size": 50338}, "tags": ["ghana", "news", "ghana-news", "bank-of-ghana", "exchange-rates", "ghana data", "bank of ghana"]} | 2024-01-10T08:43:23+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #task_categories-summarization #task_categories-question-answering #task_categories-text-classification #task_categories-text-retrieval #task_categories-translation #size_categories-10K<n<100K #language-English #license-apache-2.0 #ghana #news #ghana-news #bank-of-ghana #exchange-rates #ghana data #bank of ghana #region-us
|
### Description ️
Bank of Ghana historical and real-time treasury bills data. Bank of Ghana
Click Here: . | [
"### Description ️ \nBank of Ghana historical and real-time treasury bills data. Bank of Ghana\n\nClick Here: ."
]
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-summarization #task_categories-question-answering #task_categories-text-classification #task_categories-text-retrieval #task_categories-translation #size_categories-10K<n<100K #language-English #license-apache-2.0 #ghana #news #ghana-news #bank-of-ghana #exchange-rates #ghana data #bank of ghana #region-us \n",
"### Description ️ \nBank of Ghana historical and real-time treasury bills data. Bank of Ghana\n\nClick Here: ."
]
| [
137,
35,
4,
6,
22
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-summarization #task_categories-question-answering #task_categories-text-classification #task_categories-text-retrieval #task_categories-translation #size_categories-10K<n<100K #language-English #license-apache-2.0 #ghana #news #ghana-news #bank-of-ghana #exchange-rates #ghana data #bank of ghana #region-us \n### Description ️ \nBank of Ghana historical and real-time treasury bills data. Bank of Ghana\n\nClick Here: ."
]
|
911c494cf719373c0ebf70182009e290deb8a3d5 |
# Dataset Card for Evaluation run of Sayan01/Llama-Flan-XL2base
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Sayan01/Llama-Flan-XL2base
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Sayan01/Llama-Flan-XL2base](https://huggingface.co/Sayan01/Llama-Flan-XL2base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base_public",
"harness_winogrande_5",
split="train")
```
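Since each evaluated task is stored as a separate configuration, it can help to list the available configurations before picking one. This is a minimal sketch using the standard `datasets` helper; the names follow the `harness_<task>_<n_shots>` pattern visible in this card's metadata.
```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base_public"
)
print(len(configs), configs[:5])
```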
## Latest results
These are the [latest results from run 2023-11-25T01:29:13.925640](https://huggingface.co/datasets/open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base_public/blob/main/results_2023-11-25T01-29-13.925640.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.23221079815429288,
"acc_stderr": 0.02994811714846116,
"acc_norm": 0.23187497505966656,
"acc_norm_stderr": 0.030736580620987688,
"mc1": 0.2423500611995104,
"mc1_stderr": 0.01500067437357034,
"mc2": 0.5058224656335896,
"mc2_stderr": 0.016425425630600676,
"em": 0.00010486577181208053,
"em_stderr": 0.00010486577181208623,
"f1": 0.0029037332214765076,
"f1_stderr": 0.0002952362942135874
},
"harness|arc:challenge|25": {
"acc": 0.1757679180887372,
"acc_stderr": 0.01112285086312048,
"acc_norm": 0.20648464163822525,
"acc_norm_stderr": 0.011828865619002316
},
"harness|hellaswag|10": {
"acc": 0.2592113124875523,
"acc_stderr": 0.004373062283376514,
"acc_norm": 0.2533359888468433,
"acc_norm_stderr": 0.0043403282041351975
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.18518518518518517,
"acc_stderr": 0.03355677216313142,
"acc_norm": 0.18518518518518517,
"acc_norm_stderr": 0.03355677216313142
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21509433962264152,
"acc_stderr": 0.02528839450289137,
"acc_norm": 0.21509433962264152,
"acc_norm_stderr": 0.02528839450289137
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749874,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749874
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.028809989854102973,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.028809989854102973
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.20899470899470898,
"acc_stderr": 0.02094048156533486,
"acc_norm": 0.20899470899470898,
"acc_norm_stderr": 0.02094048156533486
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.04040610178208841,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.04040610178208841
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1774193548387097,
"acc_stderr": 0.02173254068932927,
"acc_norm": 0.1774193548387097,
"acc_norm_stderr": 0.02173254068932927
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.15270935960591134,
"acc_stderr": 0.02530890453938063,
"acc_norm": 0.15270935960591134,
"acc_norm_stderr": 0.02530890453938063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.17676767676767677,
"acc_stderr": 0.027178752639044915,
"acc_norm": 0.17676767676767677,
"acc_norm_stderr": 0.027178752639044915
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.20256410256410257,
"acc_stderr": 0.020377660970371372,
"acc_norm": 0.20256410256410257,
"acc_norm_stderr": 0.020377660970371372
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2111111111111111,
"acc_stderr": 0.024882116857655075,
"acc_norm": 0.2111111111111111,
"acc_norm_stderr": 0.024882116857655075
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436776,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1926605504587156,
"acc_stderr": 0.016909276884936094,
"acc_norm": 0.1926605504587156,
"acc_norm_stderr": 0.016909276884936094
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134224,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134224
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.031145570659486782,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.031145570659486782
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.02875679962965834,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.02875679962965834
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.31390134529147984,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.31390134529147984,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2396694214876033,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.2396694214876033,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2905982905982906,
"acc_stderr": 0.02974504857267404,
"acc_norm": 0.2905982905982906,
"acc_norm_stderr": 0.02974504857267404
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.1864951768488746,
"acc_stderr": 0.02212243977248077,
"acc_norm": 0.1864951768488746,
"acc_norm_stderr": 0.02212243977248077
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.21604938271604937,
"acc_stderr": 0.022899162918445806,
"acc_norm": 0.21604938271604937,
"acc_norm_stderr": 0.022899162918445806
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432417,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432417
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2503259452411995,
"acc_stderr": 0.011064151027165443,
"acc_norm": 0.2503259452411995,
"acc_norm_stderr": 0.011064151027165443
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.18382352941176472,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.18382352941176472,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.25,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2423500611995104,
"mc1_stderr": 0.01500067437357034,
"mc2": 0.5058224656335896,
"mc2_stderr": 0.016425425630600676
},
"harness|winogrande|5": {
"acc": 0.5090765588003157,
"acc_stderr": 0.014050170094497704
},
"harness|drop|3": {
"em": 0.00010486577181208053,
"em_stderr": 0.00010486577181208623,
"f1": 0.0029037332214765076,
"f1_stderr": 0.0002952362942135874
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
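To work with these aggregated numbers programmatically, the snippet below parses a local copy of the dictionary shown above (assumed to be saved as `results.json`; the full results file in the repository may wrap this dictionary in additional top-level keys).
```python
import json

# Hypothetical local copy of the aggregated results shown above.
with open("results.json") as f:
    results = json.load(f)

overall = results["all"]
print(f"acc={overall['acc']:.4f}  acc_norm={overall['acc_norm']:.4f}")
print("ARC-Challenge acc_norm:", results["harness|arc:challenge|25"]["acc_norm"])
```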
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base | [
"region:us"
]
| 2023-11-25T01:31:39+00:00 | {"pretty_name": "Evaluation run of Sayan01/Llama-Flan-XL2base", "dataset_summary": "Dataset automatically created during the evaluation run of model [Sayan01/Llama-Flan-XL2base](https://huggingface.co/Sayan01/Llama-Flan-XL2base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-25T01:29:13.925640](https://huggingface.co/datasets/open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base_public/blob/main/results_2023-11-25T01-29-13.925640.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.23221079815429288,\n \"acc_stderr\": 0.02994811714846116,\n \"acc_norm\": 0.23187497505966656,\n \"acc_norm_stderr\": 0.030736580620987688,\n \"mc1\": 0.2423500611995104,\n \"mc1_stderr\": 0.01500067437357034,\n \"mc2\": 0.5058224656335896,\n \"mc2_stderr\": 0.016425425630600676,\n \"em\": 0.00010486577181208053,\n \"em_stderr\": 0.00010486577181208623,\n \"f1\": 0.0029037332214765076,\n \"f1_stderr\": 0.0002952362942135874\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.1757679180887372,\n \"acc_stderr\": 0.01112285086312048,\n \"acc_norm\": 0.20648464163822525,\n \"acc_norm_stderr\": 0.011828865619002316\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2592113124875523,\n \"acc_stderr\": 0.004373062283376514,\n \"acc_norm\": 0.2533359888468433,\n \"acc_norm_stderr\": 0.0043403282041351975\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.18518518518518517,\n \"acc_stderr\": 0.03355677216313142,\n \"acc_norm\": 0.18518518518518517,\n \"acc_norm_stderr\": 0.03355677216313142\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.21509433962264152,\n \"acc_stderr\": 0.02528839450289137,\n \"acc_norm\": 0.21509433962264152,\n \"acc_norm_stderr\": 0.02528839450289137\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 
0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.030952890217749874,\n \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.030952890217749874\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.20899470899470898,\n \"acc_stderr\": 0.02094048156533486,\n \"acc_norm\": 0.20899470899470898,\n \"acc_norm_stderr\": 0.02094048156533486\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.04040610178208841,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.04040610178208841\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.1774193548387097,\n \"acc_stderr\": 0.02173254068932927,\n \"acc_norm\": 0.1774193548387097,\n \"acc_norm_stderr\": 0.02173254068932927\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.15270935960591134,\n \"acc_stderr\": 0.02530890453938063,\n \"acc_norm\": 0.15270935960591134,\n \"acc_norm_stderr\": 0.02530890453938063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.17676767676767677,\n \"acc_stderr\": 0.027178752639044915,\n \"acc_norm\": 0.17676767676767677,\n \"acc_norm_stderr\": 0.027178752639044915\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.19689119170984457,\n 
\"acc_stderr\": 0.028697873971860664,\n \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.20256410256410257,\n \"acc_stderr\": 0.020377660970371372,\n \"acc_norm\": 0.20256410256410257,\n \"acc_norm_stderr\": 0.020377660970371372\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2111111111111111,\n \"acc_stderr\": 0.024882116857655075,\n \"acc_norm\": 0.2111111111111111,\n \"acc_norm_stderr\": 0.024882116857655075\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436776,\n \"acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436776\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.1926605504587156,\n \"acc_stderr\": 0.016909276884936094,\n \"acc_norm\": 0.1926605504587156,\n \"acc_norm_stderr\": 0.016909276884936094\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134224,\n \"acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134224\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2696078431372549,\n \"acc_stderr\": 0.031145570659486782,\n \"acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.031145570659486782\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.26582278481012656,\n \"acc_stderr\": 0.02875679962965834,\n \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.02875679962965834\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.31390134529147984,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.31390134529147984,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2396694214876033,\n \"acc_stderr\": 0.03896878985070417,\n \"acc_norm\": 0.2396694214876033,\n \"acc_norm_stderr\": 0.03896878985070417\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n \"acc_stderr\": 0.02974504857267404,\n \"acc_norm\": 0.2905982905982906,\n \"acc_norm_stderr\": 0.02974504857267404\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n 
\"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23754789272030652,\n \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 0.23754789272030652,\n \"acc_norm_stderr\": 0.015218733046150193\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n \"acc_stderr\": 0.02212243977248077,\n \"acc_norm\": 0.1864951768488746,\n \"acc_norm_stderr\": 0.02212243977248077\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.23404255319148937,\n \"acc_stderr\": 0.025257861359432417,\n \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.025257861359432417\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2503259452411995,\n \"acc_stderr\": 0.011064151027165443,\n \"acc_norm\": 0.2503259452411995,\n \"acc_norm_stderr\": 0.011064151027165443\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.18382352941176472,\n \"acc_stderr\": 0.023529242185193106,\n \"acc_norm\": 0.18382352941176472,\n \"acc_norm_stderr\": 0.023529242185193106\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.03036049015401465,\n \"acc_norm\": 0.24378109452736318,\n \"acc_norm_stderr\": 0.03036049015401465\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.28313253012048195,\n \"acc_stderr\": 0.03507295431370518,\n \"acc_norm\": 0.28313253012048195,\n \"acc_norm_stderr\": 0.03507295431370518\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3216374269005848,\n \"acc_stderr\": 0.03582529442573122,\n \"acc_norm\": 0.3216374269005848,\n \"acc_norm_stderr\": 0.03582529442573122\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2423500611995104,\n \"mc1_stderr\": 0.01500067437357034,\n \"mc2\": 0.5058224656335896,\n \"mc2_stderr\": 0.016425425630600676\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5090765588003157,\n \"acc_stderr\": 
0.014050170094497704\n },\n \"harness|drop|3\": {\n \"em\": 0.00010486577181208053,\n \"em_stderr\": 0.00010486577181208623,\n \"f1\": 0.0029037332214765076,\n \"f1_stderr\": 0.0002952362942135874\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/Sayan01/Llama-Flan-XL2base", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|arc:challenge|25_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|drop|3_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|gsm8k|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hellaswag|10_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T01-29-13.925640.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T01-29-13.925640.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T01-29-13.925640.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T01-29-13.925640.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T01-29-13.925640.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["**/details_harness|winogrande|5_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-25T01-29-13.925640.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_25T01_29_13.925640", "path": ["results_2023-11-25T01-29-13.925640.parquet"]}, {"split": "latest", "path": ["results_2023-11-25T01-29-13.925640.parquet"]}]}]} | 2023-11-25T01:32:24+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of Sayan01/Llama-Flan-XL2base
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Sayan01/Llama-Flan-XL2base on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
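The snippet below is only a minimal sketch; the exact repository id is an assumption here, following the `details_<org>__<model>_public` naming pattern used by other leaderboard details datasets in this document.

```python
from datasets import load_dataset

# The repository id below is an assumption based on the leaderboard's usual
# "details_<org>__<model>_public" naming pattern; adjust it if the actual repo differs.
# The "harness_winogrande_5" configuration is listed in this dataset's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_Sayan01__Llama-Flan-XL2base_public",
    "harness_winogrande_5",
    split="train",
)
```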
## Latest results
These are the latest results from run 2023-11-25T01:29:13.925640 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Sayan01/Llama-Flan-XL2base",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sayan01/Llama-Flan-XL2base on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T01:29:13.925640(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Sayan01/Llama-Flan-XL2base",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sayan01/Llama-Flan-XL2base on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T01:29:13.925640(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Sayan01/Llama-Flan-XL2base## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sayan01/Llama-Flan-XL2base on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-25T01:29:13.925640(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
d90a62deedfc822a15de3f35535bb41474e4d3f4 | ## Disclaimer:
The dataset may contain personal information crawled along with the contents of various sources. Please filter such information during pre-processing before using the data for research or model training.
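A purely illustrative sketch of such a filter is shown below; the patterns are assumptions and not part of the official dataset tooling:

```python
import re

# Illustrative patterns only; adapt them to the personal information you need to remove.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?84|0)\d{8,10}\b")  # rough Vietnamese phone-number pattern

def scrub_personal_info(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```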
# SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts
This is the official repository for the ViHealthQA dataset from the paper [SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts](https://arxiv.org/pdf/2206.09600.pdf), which was accepted at the [KSEM-2022](https://ksem22.smart-conf.net/index.html).
# Citation Information
The provided dataset is to be used for research purposes only!
```
@InProceedings{nguyen2022viheathqa,
author="Nguyen, Nhung Thi-Hong
and Ha, Phuong Phan-Dieu
and Nguyen, Luan Thanh
and Van Nguyen, Kiet
and Nguyen, Ngan Luu-Thuy",
title="SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts",
booktitle="Knowledge Science, Engineering and Management",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="371--382",
isbn="978-3-031-10986-7"
}
```
# Abstract
Question answering (QA) systems have gained explosive attention in recent years. However, QA tasks in Vietnamese do not have many datasets. Significantly, there is mostly no dataset in the medical domain. Therefore, we built a Vietnamese Healthcare Question Answering dataset (ViHealthQA), including 10,015 question-answer passage pairs for this task, in which questions from health-interested users were asked on prestigious health websites and answers from highly qualified experts. This paper proposes a two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives ranking (MNR) loss combined with BM25. Then, we conduct diverse experiments with many bag-of-words models to assess our system’s performance. With the obtained results, this system achieves better performance than traditional methods.
# Dataset
The ViHealthQA dataset consists of 10,015 question-answer passage pairs. Note that the questions were asked by health-interested users on prestigious health websites and the answers come from highly qualified experts.
The dataset is divided into three parts as below:
1. Train set: 7.01K question-answer pairs
2. Valid set: 2.01K question-answer pairs
3. Test set: 993 question-answer pairs
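A minimal loading sketch with the Hugging Face `datasets` library (the split names are assumptions based on the sizes listed above):

```python
from datasets import load_dataset

# Split names ("train", "validation", "test") are assumed from the description above.
vihealthqa = load_dataset("tarudesu/ViHealthQA")
print(vihealthqa)
```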
# Contact
Please feel free to contact us by email [email protected] if you need any further information! | tarudesu/ViHealthQA | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:vi",
"medical",
"arxiv:2206.09600",
"region:us"
]
| 2023-11-25T01:47:37+00:00 | {"language": ["vi"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "Vietnamese Healthcare Question Answering Dataset", "tags": ["medical"]} | 2023-11-28T07:21:22+00:00 | [
"2206.09600"
]
| [
"vi"
]
| TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-Vietnamese #medical #arxiv-2206.09600 #region-us
| ## Disclaimer:
The dataset may contain personal information crawled along with the contents of various sources. Please make a filter in pre-processing data before starting your research training.
# SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts
This is the official repository for the ViHealthQA dataset from the paper SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts, which was accepted at the KSEM-2022.
The provided dataset is only used for research purposes!
# Abstract
Question answering (QA) systems have gained explosive attention in recent years. However, QA tasks in Vietnamese do not have many datasets. Significantly, there is mostly no dataset in the medical domain. Therefore, we built a Vietnamese Healthcare Question Answering dataset (ViHealthQA), including 10,015 question-answer passage pairs for this task, in which questions from health-interested users were asked on prestigious health websites and answers from highly qualified experts. This paper proposes a two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives ranking (MNR) loss combined with BM25. Then, we conduct diverse experiments with many bag-of-words models to assess our system’s performance. With the obtained results, this system achieves better performance than traditional methods.
# Dataset
The ViHealthQA dataset is consist of 10,015 question-answer passage pairs. Note that questions are from health-interested users asked on prestigious health websites and answers are from highly qualified experts.
The dataset is divided into three parts as below:
1. Train set: 7.01K question-answer pairs
2. Valid set: 2.01 question-answer pairs
3. Test set: 993 question-answer pairs
# Contact
Please feel free to contact us by email luannt@URL if you have any further information! | [
"## Disclaimer:\nThe dataset may contain personal information crawled along with the contents of various sources. Please make a filter in pre-processing data before starting your research training.",
"# SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts\nThis is the official repository for the ViHealthQA dataset from the paper SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts, which was accepted at the KSEM-2022.\n\n\nThe provided dataset is only used for research purposes!",
"# Abstract\n\nQuestion answering (QA) systems have gained explosive attention in recent years. However, QA tasks in Vietnamese do not have many datasets. Significantly, there is mostly no dataset in the medical domain. Therefore, we built a Vietnamese Healthcare Question Answering dataset (ViHealthQA), including 10,015 question-answer passage pairs for this task, in which questions from health-interested users were asked on prestigious health websites and answers from highly qualified experts. This paper proposes a two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives ranking (MNR) loss combined with BM25. Then, we conduct diverse experiments with many bag-of-words models to assess our system’s performance. With the obtained results, this system achieves better performance than traditional methods.",
"# Dataset\nThe ViHealthQA dataset is consist of 10,015 question-answer passage pairs. Note that questions are from health-interested users asked on prestigious health websites and answers are from highly qualified experts.\n\nThe dataset is divided into three parts as below:\n1. Train set: 7.01K question-answer pairs\n2. Valid set: 2.01 question-answer pairs\n3. Test set: 993 question-answer pairs",
"# Contact\nPlease feel free to contact us by email luannt@URL if you have any further information!"
]
| [
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-Vietnamese #medical #arxiv-2206.09600 #region-us \n",
"## Disclaimer:\nThe dataset may contain personal information crawled along with the contents of various sources. Please make a filter in pre-processing data before starting your research training.",
"# SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts\nThis is the official repository for the ViHealthQA dataset from the paper SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts, which was accepted at the KSEM-2022.\n\n\nThe provided dataset is only used for research purposes!",
"# Abstract\n\nQuestion answering (QA) systems have gained explosive attention in recent years. However, QA tasks in Vietnamese do not have many datasets. Significantly, there is mostly no dataset in the medical domain. Therefore, we built a Vietnamese Healthcare Question Answering dataset (ViHealthQA), including 10,015 question-answer passage pairs for this task, in which questions from health-interested users were asked on prestigious health websites and answers from highly qualified experts. This paper proposes a two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives ranking (MNR) loss combined with BM25. Then, we conduct diverse experiments with many bag-of-words models to assess our system’s performance. With the obtained results, this system achieves better performance than traditional methods.",
"# Dataset\nThe ViHealthQA dataset is consist of 10,015 question-answer passage pairs. Note that questions are from health-interested users asked on prestigious health websites and answers are from highly qualified experts.\n\nThe dataset is divided into three parts as below:\n1. Train set: 7.01K question-answer pairs\n2. Valid set: 2.01 question-answer pairs\n3. Test set: 993 question-answer pairs",
"# Contact\nPlease feel free to contact us by email luannt@URL if you have any further information!"
]
| [
48,
37,
94,
189,
98,
22
]
| [
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-Vietnamese #medical #arxiv-2206.09600 #region-us \n## Disclaimer:\nThe dataset may contain personal information crawled along with the contents of various sources. Please make a filter in pre-processing data before starting your research training.# SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts\nThis is the official repository for the ViHealthQA dataset from the paper SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts, which was accepted at the KSEM-2022.\n\n\nThe provided dataset is only used for research purposes!# Abstract\n\nQuestion answering (QA) systems have gained explosive attention in recent years. However, QA tasks in Vietnamese do not have many datasets. Significantly, there is mostly no dataset in the medical domain. Therefore, we built a Vietnamese Healthcare Question Answering dataset (ViHealthQA), including 10,015 question-answer passage pairs for this task, in which questions from health-interested users were asked on prestigious health websites and answers from highly qualified experts. This paper proposes a two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives ranking (MNR) loss combined with BM25. Then, we conduct diverse experiments with many bag-of-words models to assess our system’s performance. With the obtained results, this system achieves better performance than traditional methods.# Dataset\nThe ViHealthQA dataset is consist of 10,015 question-answer passage pairs. Note that questions are from health-interested users asked on prestigious health websites and answers are from highly qualified experts.\n\nThe dataset is divided into three parts as below:\n1. Train set: 7.01K question-answer pairs\n2. Valid set: 2.01 question-answer pairs\n3. Test set: 993 question-answer pairs# Contact\nPlease feel free to contact us by email luannt@URL if you have any further information!"
]
|
8d0a2fb720f87c29d8564e44ebdb3a24860d4a9f |
# Dataset of nagato (Azur Lane)
This is the dataset of nagato (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 520 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 584 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 520 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 520 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 406 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 584 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 584 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
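As an example, a single packaged archive can be fetched with `huggingface_hub` (a sketch; substitute any file name from the table above):

```python
from huggingface_hub import hf_hub_download

# Downloads one packaged archive from this dataset repository.
path = hf_hub_download(
    repo_id="AppleHarem/nagato_azurlane",
    filename="dataset-raw.zip",
    repo_type="dataset",
)
print(path)
```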
| AppleHarem/nagato_azurlane | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-25T01:59:38+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-25T02:00:03+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of nagato (Azur Lane)
=============================
This is the dataset of nagato (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
cf892b1f0e2916608f857ef9f031bf1296fa6828 |
# English-Igbo Parallel Corpus
## Description
This dataset is a comprehensive collection of parallel sentences in English and Igbo. It has been compiled from multiple sources to create a rich resource for machine translation, language research, and natural language processing tasks. The dataset is particularly valuable for those focusing on Igbo, a language spoken primarily in Nigeria, which is underrepresented in the field of computational linguistics.
## Composition
The dataset comprises the following components:
1. **iamwille/igbo-translation Dataset:** Originally sourced from [Hugging Face](https://huggingface.co/datasets/iamwille/igbo-translation), this subset includes professionally translated sentences covering various topics.
2. **Igbo-English Machine Translation Dataset:** This subset, also sourced from [Papers with Code](https://paperswithcode.com/dataset/igbonlp-datasets), provides additional translated sentences with a focus on common phrases and expressions.
3. **Custom Text File Translations:** A set of [English-Igbo](https://github.com/masakhane-io/masakhane-community/blob/master/list-of-datasets.md) sentence pairs extracted from a custom text file, offering a diverse range of everyday language usage.
4. **JW300 Text Data:** Sourced from the [JW300 corpus](https://github.com/Niger-Volta-LTI/igbo-text/tree/master/JW300), this part includes religious and educational text, contributing to the diversity of the dataset.
## Dataset Structure
### Data Splits
The dataset is divided into two splits:
- **Training Set:** Contains a majority of the sentences, intended for training machine translation models or other NLP tasks.
- **Test Set:** A smaller set of sentences for evaluating model performance.
### Data Fields
Each entry in the dataset consists of two fields:
- `English`: The sentence or phrase in English.
- `Igbo`: The corresponding translation in Igbo.
### Data Format
The dataset is available in CSV format, with each row representing a parallel sentence pair.
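A minimal loading sketch (how the CSV files are laid out inside the repository is an assumption here):

```python
from datasets import load_dataset

# Repository id taken from this card; the "train" split name follows the
# "Data Splits" section above and is assumed to exist in the hosted version.
corpus = load_dataset("ccibeekeoc42/english_to_igbo")
print(corpus["train"][0])  # expected fields: "English", "Igbo"
```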
## Usage
This dataset can be used for various tasks in NLP, including but not limited to:
- Machine Translation
- Cross-lingual Transfer Learning
- Linguistic Research
## Source and Acknowledgments
This dataset has been compiled from various sources. We acknowledge the contributions of:
- Hugging Face for hosting the initial datasets.
- The creators of the JW300 corpus.
- Individual contributors of the custom text translations.
## Licensing
This dataset is released under the MIT License. Please adhere to the licensing terms of the original sources when using this dataset.
## Contact
For any queries or contributions, please contact Chinemerem Ibe-Ekeocha at [email protected].
| ccibeekeoc42/english_to_igbo | [
"language:en",
"language:ig",
"license:mit",
"machine-translation",
"igbo",
"english",
"nlp",
"region:us"
]
| 2023-11-25T02:05:05+00:00 | {"language": ["en", "ig"], "license": "mit", "pretty_name": "My English to Igbo Dataset", "tags": ["machine-translation", "igbo", "english", "nlp"]} | 2023-11-25T05:36:07+00:00 | []
| [
"en",
"ig"
]
| TAGS
#language-English #language-Igbo #license-mit #machine-translation #igbo #english #nlp #region-us
|
# English-Igbo Parallel Corpus
## Description
This dataset is a comprehensive collection of parallel sentences in English and Igbo. It has been compiled from multiple sources to create a rich resource for machine translation, language research, and natural language processing tasks. The dataset is particularly valuable for those focusing on Igbo, a language spoken primarily in Nigeria, which is underrepresented in the field of computational linguistics.
## Composition
The dataset comprises the following components:
1. iamwille/igbo-translation Dataset: Originally sourced from Hugging Face, this subset includes professionally translated sentences covering various topics.
2. Igbo-English Machine Translation Dataset: This subset, also sourced from Papers with Code, provides additional translated sentences with a focus on common phrases and expressions.
3. Custom Text File Translations: A set of English-Igbo sentence pairs extracted from a custom text file, offering a diverse range of everyday language usage.
4. JW300 Text Data: Sourced from the JW300 corpus, this part includes religious and educational text, contributing to the diversity of the dataset.
## Dataset Structure
### Data Splits
The dataset is divided into two splits:
- Training Set: Contains a majority of the sentences, intended for training machine translation models or other NLP tasks.
- Test Set: A smaller set of sentences for evaluating model performance.
### Data Fields
Each entry in the dataset consists of two fields:
- 'English': The sentence or phrase in English.
- 'Igbo': The corresponding translation in Igbo.
### Data Format
The dataset is available in CSV format, with each row representing a parallel sentence pair.
## Usage
This dataset can be used for various tasks in NLP, including but not limited to:
- Machine Translation
- Cross-lingual Transfer Learning
- Linguistic Research
## Source and Acknowledgments
This dataset has been compiled from various sources. We acknowledge the contributions of:
- Hugging Face for hosting the initial datasets.
- The creators of the JW300 corpus.
- Individual contributors of the custom text translations.
## Licensing
This dataset is released under [specify the license, e.g., MIT License]. Please ensure to adhere to the licensing terms of the original sources when using this dataset.
## Contact
For any queries or contributions, please contact Chinemerem Ibe-Ekeocha at ccibeekeoc42@URL.
| [
"# English-Igbo Parallel Corpus",
"## Description\r\n\r\nThis dataset is a comprehensive collection of parallel sentences in English and Igbo. It has been compiled from multiple sources to create a rich resource for machine translation, language research, and natural language processing tasks. The dataset is particularly valuable for those focusing on Igbo, a language spoken primarily in Nigeria, which is underrepresented in the field of computational linguistics.",
"## Composition\r\n\r\nThe dataset comprises the following components:\r\n\r\n1. iamwille/igbo-translation Dataset: Originally sourced from Hugging Face, this subset includes professionally translated sentences covering various topics.\r\n\r\n2. Igbo-English Machine Translation Dataset: This subset, also sourced from Papers with Code, provides additional translated sentences with a focus on common phrases and expressions.\r\n\r\n3. Custom Text File Translations: A set of English-Igbo sentence pairs extracted from a custom text file, offering a diverse range of everyday language usage.\r\n\r\n4. JW300 Text Data: Sourced from the JW300 corpus, this part includes religious and educational text, contributing to the diversity of the dataset.",
"## Dataset Structure",
"### Data Splits\r\n\r\nThe dataset is divided into two splits:\r\n\r\n- Training Set: Contains a majority of the sentences, intended for training machine translation models or other NLP tasks.\r\n- Test Set: A smaller set of sentences for evaluating model performance.",
"### Data Fields\r\n\r\nEach entry in the dataset consists of two fields:\r\n\r\n- 'English': The sentence or phrase in English.\r\n- 'Igbo': The corresponding translation in Igbo.",
"### Data Format\r\n\r\nThe dataset is available in CSV format, with each row representing a parallel sentence pair.",
"## Usage\r\n\r\nThis dataset can be used for various tasks in NLP, including but not limited to:\r\n\r\n- Machine Translation\r\n- Cross-lingual Transfer Learning\r\n- Linguistic Research",
"## Source and Acknowledgments\r\n\r\nThis dataset has been compiled from various sources. We acknowledge the contributions of:\r\n\r\n- Hugging Face for hosting the initial datasets.\r\n- The creators of the JW300 corpus.\r\n- Individual contributors of the custom text translations.",
"## Licensing\r\n\r\nThis dataset is released under [specify the license, e.g., MIT License]. Please ensure to adhere to the licensing terms of the original sources when using this dataset.",
"## Contact\r\n\r\nFor any queries or contributions, please contact Chinemerem Ibe-Ekeocha at ccibeekeoc42@URL."
]
| [
"TAGS\n#language-English #language-Igbo #license-mit #machine-translation #igbo #english #nlp #region-us \n",
"# English-Igbo Parallel Corpus",
"## Description\r\n\r\nThis dataset is a comprehensive collection of parallel sentences in English and Igbo. It has been compiled from multiple sources to create a rich resource for machine translation, language research, and natural language processing tasks. The dataset is particularly valuable for those focusing on Igbo, a language spoken primarily in Nigeria, which is underrepresented in the field of computational linguistics.",
"## Composition\r\n\r\nThe dataset comprises the following components:\r\n\r\n1. iamwille/igbo-translation Dataset: Originally sourced from Hugging Face, this subset includes professionally translated sentences covering various topics.\r\n\r\n2. Igbo-English Machine Translation Dataset: This subset, also sourced from Papers with Code, provides additional translated sentences with a focus on common phrases and expressions.\r\n\r\n3. Custom Text File Translations: A set of English-Igbo sentence pairs extracted from a custom text file, offering a diverse range of everyday language usage.\r\n\r\n4. JW300 Text Data: Sourced from the JW300 corpus, this part includes religious and educational text, contributing to the diversity of the dataset.",
"## Dataset Structure",
"### Data Splits\r\n\r\nThe dataset is divided into two splits:\r\n\r\n- Training Set: Contains a majority of the sentences, intended for training machine translation models or other NLP tasks.\r\n- Test Set: A smaller set of sentences for evaluating model performance.",
"### Data Fields\r\n\r\nEach entry in the dataset consists of two fields:\r\n\r\n- 'English': The sentence or phrase in English.\r\n- 'Igbo': The corresponding translation in Igbo.",
"### Data Format\r\n\r\nThe dataset is available in CSV format, with each row representing a parallel sentence pair.",
"## Usage\r\n\r\nThis dataset can be used for various tasks in NLP, including but not limited to:\r\n\r\n- Machine Translation\r\n- Cross-lingual Transfer Learning\r\n- Linguistic Research",
"## Source and Acknowledgments\r\n\r\nThis dataset has been compiled from various sources. We acknowledge the contributions of:\r\n\r\n- Hugging Face for hosting the initial datasets.\r\n- The creators of the JW300 corpus.\r\n- Individual contributors of the custom text translations.",
"## Licensing\r\n\r\nThis dataset is released under [specify the license, e.g., MIT License]. Please ensure to adhere to the licensing terms of the original sources when using this dataset.",
"## Contact\r\n\r\nFor any queries or contributions, please contact Chinemerem Ibe-Ekeocha at ccibeekeoc42@URL."
]
| [
35,
8,
86,
161,
6,
58,
45,
25,
38,
60,
46,
32
]
| [
"passage: TAGS\n#language-English #language-Igbo #license-mit #machine-translation #igbo #english #nlp #region-us \n# English-Igbo Parallel Corpus## Description\r\n\r\nThis dataset is a comprehensive collection of parallel sentences in English and Igbo. It has been compiled from multiple sources to create a rich resource for machine translation, language research, and natural language processing tasks. The dataset is particularly valuable for those focusing on Igbo, a language spoken primarily in Nigeria, which is underrepresented in the field of computational linguistics.## Composition\r\n\r\nThe dataset comprises the following components:\r\n\r\n1. iamwille/igbo-translation Dataset: Originally sourced from Hugging Face, this subset includes professionally translated sentences covering various topics.\r\n\r\n2. Igbo-English Machine Translation Dataset: This subset, also sourced from Papers with Code, provides additional translated sentences with a focus on common phrases and expressions.\r\n\r\n3. Custom Text File Translations: A set of English-Igbo sentence pairs extracted from a custom text file, offering a diverse range of everyday language usage.\r\n\r\n4. JW300 Text Data: Sourced from the JW300 corpus, this part includes religious and educational text, contributing to the diversity of the dataset.## Dataset Structure### Data Splits\r\n\r\nThe dataset is divided into two splits:\r\n\r\n- Training Set: Contains a majority of the sentences, intended for training machine translation models or other NLP tasks.\r\n- Test Set: A smaller set of sentences for evaluating model performance.### Data Fields\r\n\r\nEach entry in the dataset consists of two fields:\r\n\r\n- 'English': The sentence or phrase in English.\r\n- 'Igbo': The corresponding translation in Igbo.### Data Format\r\n\r\nThe dataset is available in CSV format, with each row representing a parallel sentence pair.## Usage\r\n\r\nThis dataset can be used for various tasks in NLP, including but not limited to:\r\n\r\n- Machine Translation\r\n- Cross-lingual Transfer Learning\r\n- Linguistic Research"
]
|
f4cd054de228fc8b7c164e7bd8e5540a5f69e818 | # Dataset Card for "undl_zh2en_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_zh2en_translation | [
"region:us"
]
| 2023-11-25T03:16:58+00:00 | {"dataset_info": {"features": [{"name": "clean_zh", "sequence": "string"}, {"name": "clean_en", "sequence": "string"}, {"name": "record", "dtype": "string"}, {"name": "zh2en", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 13263355893, "num_examples": 165840}], "download_size": 6373670636, "dataset_size": 13263355893}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-25T04:01:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_zh2en_translation"
More Information needed | [
"# Dataset Card for \"undl_zh2en_translation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_zh2en_translation\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_zh2en_translation\"\n\nMore Information needed"
]
|
4dbbb8a345814e34a7f19dc89bad5ba028d87a01 | # Dataset Card for "sentence-alignment-merged-postcorrection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | buddhist-nlp/sentence-alignment-merged-postcorrection | [
"region:us"
]
| 2023-11-25T03:19:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2822540476, "num_examples": 820391}, {"name": "validation", "num_bytes": 2183956, "num_examples": 641}, {"name": "test", "num_bytes": 2364887, "num_examples": 710}], "download_size": 1740831749, "dataset_size": 2827089319}} | 2023-11-26T22:58:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sentence-alignment-merged-postcorrection"
More Information needed | [
"# Dataset Card for \"sentence-alignment-merged-postcorrection\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sentence-alignment-merged-postcorrection\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sentence-alignment-merged-postcorrection\"\n\nMore Information needed"
]
|
51fa01c7a8e53ef7399dde2caf494dc9a7314ccf |
# Dataset Card for Evaluation run of TheBloke/Orca-2-13B-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/Orca-2-13B-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/Orca-2-13B-GPTQ](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-25T03:42:21.410226](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ_public/blob/main/results_2023-11-25T03-42-21.410226.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5887851314518572,
"acc_stderr": 0.032958137391722146,
"acc_norm": 0.5969185976587905,
"acc_norm_stderr": 0.03368773395313244,
"mc1": 0.38555691554467564,
"mc1_stderr": 0.01703883901059167,
"mc2": 0.5514098320774886,
"mc2_stderr": 0.0160327733300155,
"em": 0.42606963087248323,
"em_stderr": 0.0050641847856105855,
"f1": 0.5302139261744996,
"f1_stderr": 0.004659796001509701
},
"harness|arc:challenge|25": {
"acc": 0.5614334470989761,
"acc_stderr": 0.01450068261821286,
"acc_norm": 0.5981228668941979,
"acc_norm_stderr": 0.014327268614578274
},
"harness|hellaswag|10": {
"acc": 0.6037641904003187,
"acc_stderr": 0.004881148866874181,
"acc_norm": 0.7911770563632743,
"acc_norm_stderr": 0.004056369096954941
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7302631578947368,
"acc_stderr": 0.03611780560284898,
"acc_norm": 0.7302631578947368,
"acc_norm_stderr": 0.03611780560284898
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6150943396226415,
"acc_stderr": 0.029946498567699948,
"acc_norm": 0.6150943396226415,
"acc_norm_stderr": 0.029946498567699948
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.040166600304512336,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.040166600304512336
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5375722543352601,
"acc_stderr": 0.03801685104524458,
"acc_norm": 0.5375722543352601,
"acc_norm_stderr": 0.03801685104524458
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105655,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105655
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.032529096196131965,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.032529096196131965
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.0433913832257986,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.0433913832257986
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.0248708152510571,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.0248708152510571
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3253968253968254,
"acc_stderr": 0.041905964388711366,
"acc_norm": 0.3253968253968254,
"acc_norm_stderr": 0.041905964388711366
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7161290322580646,
"acc_stderr": 0.02564938106302926,
"acc_norm": 0.7161290322580646,
"acc_norm_stderr": 0.02564938106302926
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4482758620689655,
"acc_stderr": 0.034991131376767445,
"acc_norm": 0.4482758620689655,
"acc_norm_stderr": 0.034991131376767445
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7171717171717171,
"acc_stderr": 0.032087795587867514,
"acc_norm": 0.7171717171717171,
"acc_norm_stderr": 0.032087795587867514
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.026499057701397443,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.026499057701397443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6025641025641025,
"acc_stderr": 0.024811920017903836,
"acc_norm": 0.6025641025641025,
"acc_norm_stderr": 0.024811920017903836
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.02831753349606648,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.02831753349606648
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6050420168067226,
"acc_stderr": 0.031753678460966245,
"acc_norm": 0.6050420168067226,
"acc_norm_stderr": 0.031753678460966245
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8036697247706422,
"acc_stderr": 0.017030719339154343,
"acc_norm": 0.8036697247706422,
"acc_norm_stderr": 0.017030719339154343
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849313,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849313
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.02616056824660146,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.02616056824660146
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.672645739910314,
"acc_stderr": 0.031493846709941306,
"acc_norm": 0.672645739910314,
"acc_norm_stderr": 0.031493846709941306
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302872,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302872
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.03487825168497892,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.03487825168497892
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04547960999764377,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04547960999764377
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.02280138253459753,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.02280138253459753
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.014866821664709595,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.014866821664709595
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6791907514450867,
"acc_stderr": 0.025131000233647897,
"acc_norm": 0.6791907514450867,
"acc_norm_stderr": 0.025131000233647897
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.20782122905027933,
"acc_stderr": 0.013570248325081347,
"acc_norm": 0.20782122905027933,
"acc_norm_stderr": 0.013570248325081347
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.02758281141515961,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.02758281141515961
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.662379421221865,
"acc_stderr": 0.026858825879488544,
"acc_norm": 0.662379421221865,
"acc_norm_stderr": 0.026858825879488544
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6697530864197531,
"acc_stderr": 0.026168298456732846,
"acc_norm": 0.6697530864197531,
"acc_norm_stderr": 0.026168298456732846
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.450354609929078,
"acc_stderr": 0.029680105565029036,
"acc_norm": 0.450354609929078,
"acc_norm_stderr": 0.029680105565029036
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4348109517601043,
"acc_stderr": 0.012661233805616302,
"acc_norm": 0.4348109517601043,
"acc_norm_stderr": 0.012661233805616302
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5625,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.019780465954777508,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.019780465954777508
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.0289205832206756,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.0289205832206756
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7761194029850746,
"acc_stderr": 0.029475250236017204,
"acc_norm": 0.7761194029850746,
"acc_norm_stderr": 0.029475250236017204
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03188578017686398,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03188578017686398
},
"harness|truthfulqa:mc|0": {
"mc1": 0.38555691554467564,
"mc1_stderr": 0.01703883901059167,
"mc2": 0.5514098320774886,
"mc2_stderr": 0.0160327733300155
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183525
},
"harness|drop|3": {
"em": 0.42606963087248323,
"em_stderr": 0.0050641847856105855,
"f1": 0.5302139261744996,
"f1_stderr": 0.004659796001509701
},
"harness|gsm8k|5": {
"acc": 0.155420773313116,
"acc_stderr": 0.009979689409499152
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ | [
"region:us"
]
| 2023-11-25T03:45:31+00:00 | {"pretty_name": "Evaluation run of TheBloke/Orca-2-13B-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/Orca-2-13B-GPTQ](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-25T03:42:21.410226](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ_public/blob/main/results_2023-11-25T03-42-21.410226.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5887851314518572,\n \"acc_stderr\": 0.032958137391722146,\n \"acc_norm\": 0.5969185976587905,\n \"acc_norm_stderr\": 0.03368773395313244,\n \"mc1\": 0.38555691554467564,\n \"mc1_stderr\": 0.01703883901059167,\n \"mc2\": 0.5514098320774886,\n \"mc2_stderr\": 0.0160327733300155,\n \"em\": 0.42606963087248323,\n \"em_stderr\": 0.0050641847856105855,\n \"f1\": 0.5302139261744996,\n \"f1_stderr\": 0.004659796001509701\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5614334470989761,\n \"acc_stderr\": 0.01450068261821286,\n \"acc_norm\": 0.5981228668941979,\n \"acc_norm_stderr\": 0.014327268614578274\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6037641904003187,\n \"acc_stderr\": 0.004881148866874181,\n \"acc_norm\": 0.7911770563632743,\n \"acc_norm_stderr\": 0.004056369096954941\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7302631578947368,\n \"acc_stderr\": 0.03611780560284898,\n \"acc_norm\": 0.7302631578947368,\n \"acc_norm_stderr\": 0.03611780560284898\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6150943396226415,\n \"acc_stderr\": 0.029946498567699948,\n \"acc_norm\": 0.6150943396226415,\n \"acc_norm_stderr\": 0.029946498567699948\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.040166600304512336,\n \"acc_norm\": 0.6388888888888888,\n 
\"acc_norm_stderr\": 0.040166600304512336\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5375722543352601,\n \"acc_stderr\": 0.03801685104524458,\n \"acc_norm\": 0.5375722543352601,\n \"acc_norm_stderr\": 0.03801685104524458\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105655,\n \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105655\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.548936170212766,\n \"acc_stderr\": 0.032529096196131965,\n \"acc_norm\": 0.548936170212766,\n \"acc_norm_stderr\": 0.032529096196131965\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n \"acc_stderr\": 0.0433913832257986,\n \"acc_norm\": 0.30701754385964913,\n \"acc_norm_stderr\": 0.0433913832257986\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.0248708152510571,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.0248708152510571\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3253968253968254,\n \"acc_stderr\": 0.041905964388711366,\n \"acc_norm\": 0.3253968253968254,\n \"acc_norm_stderr\": 0.041905964388711366\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7161290322580646,\n \"acc_stderr\": 0.02564938106302926,\n \"acc_norm\": 0.7161290322580646,\n \"acc_norm_stderr\": 0.02564938106302926\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4482758620689655,\n \"acc_stderr\": 0.034991131376767445,\n \"acc_norm\": 0.4482758620689655,\n \"acc_norm_stderr\": 0.034991131376767445\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7171717171717171,\n \"acc_stderr\": 0.032087795587867514,\n \"acc_norm\": 0.7171717171717171,\n \"acc_norm_stderr\": 0.032087795587867514\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.026499057701397443,\n \"acc_norm\": 0.8393782383419689,\n 
\"acc_norm_stderr\": 0.026499057701397443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6025641025641025,\n \"acc_stderr\": 0.024811920017903836,\n \"acc_norm\": 0.6025641025641025,\n \"acc_norm_stderr\": 0.024811920017903836\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606648,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606648\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6050420168067226,\n \"acc_stderr\": 0.031753678460966245,\n \"acc_norm\": 0.6050420168067226,\n \"acc_norm_stderr\": 0.031753678460966245\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8036697247706422,\n \"acc_stderr\": 0.017030719339154343,\n \"acc_norm\": 0.8036697247706422,\n \"acc_norm_stderr\": 0.017030719339154343\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849313,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849313\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.02616056824660146,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.02616056824660146\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n \"acc_stderr\": 0.031493846709941306,\n \"acc_norm\": 0.672645739910314,\n \"acc_norm_stderr\": 0.031493846709941306\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.743801652892562,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.04547960999764377,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.04547960999764377\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.02280138253459753,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.02280138253459753\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.014866821664709595,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.014866821664709595\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6791907514450867,\n \"acc_stderr\": 0.025131000233647897,\n \"acc_norm\": 0.6791907514450867,\n \"acc_norm_stderr\": 0.025131000233647897\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.20782122905027933,\n \"acc_stderr\": 0.013570248325081347,\n \"acc_norm\": 0.20782122905027933,\n \"acc_norm_stderr\": 0.013570248325081347\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.02758281141515961,\n \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.02758281141515961\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.662379421221865,\n \"acc_stderr\": 0.026858825879488544,\n \"acc_norm\": 0.662379421221865,\n \"acc_norm_stderr\": 0.026858825879488544\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6697530864197531,\n \"acc_stderr\": 0.026168298456732846,\n \"acc_norm\": 0.6697530864197531,\n \"acc_norm_stderr\": 0.026168298456732846\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4348109517601043,\n \"acc_stderr\": 0.012661233805616302,\n \"acc_norm\": 0.4348109517601043,\n \"acc_norm_stderr\": 0.012661233805616302\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5625,\n \"acc_stderr\": 0.030134614954403924,\n \"acc_norm\": 0.5625,\n \"acc_norm_stderr\": 0.030134614954403924\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.019780465954777508,\n \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.019780465954777508\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7761194029850746,\n \"acc_stderr\": 0.029475250236017204,\n \"acc_norm\": 0.7761194029850746,\n \"acc_norm_stderr\": 0.029475250236017204\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.38555691554467564,\n \"mc1_stderr\": 0.01703883901059167,\n \"mc2\": 0.5514098320774886,\n \"mc2_stderr\": 0.0160327733300155\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183525\n },\n \"harness|drop|3\": {\n \"em\": 0.42606963087248323,\n 
\"em_stderr\": 0.0050641847856105855,\n \"f1\": 0.5302139261744996,\n \"f1_stderr\": 0.004659796001509701\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.155420773313116,\n \"acc_stderr\": 0.009979689409499152\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/Orca-2-13B-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|arc:challenge|25_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|drop|3_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|gsm8k|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hellaswag|10_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T03-42-21.410226.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T03-42-21.410226.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T03-42-21.410226.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T03-42-21.410226.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", 
"data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": 
["**/details_harness|truthfulqa:mc|0_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["**/details_harness|winogrande|5_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-25T03-42-21.410226.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_25T03_42_21.410226", "path": ["results_2023-11-25T03-42-21.410226.parquet"]}, {"split": "latest", "path": ["results_2023-11-25T03-42-21.410226.parquet"]}]}]} | 2023-11-25T03:46:14+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/Orca-2-13B-GPTQ
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/Orca-2-13B-GPTQ on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
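Something like this (a minimal sketch; the repo id, configuration name, and split are taken from this card's configuration metadata, and the public details repo carries a `_public` suffix):

```python
from datasets import load_dataset

# Per-sample details for the 5-shot Winogrande task; "train" always points
# to the latest evaluation run.
data = load_dataset(
    "open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ_public",
    "harness_winogrande_5",
    split="train",
)
```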
## Latest results
These are the latest results from run 2023-11-25T03:42:21.410226 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
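The full per-task JSON for this run is reproduced earlier in this card; to pull only the aggregated numbers programmatically, the `results` configuration exposes a `latest` split (again a sketch; the config and split names come from this card's metadata):

```python
from datasets import load_dataset

# Aggregated metrics for the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_TheBloke__Orca-2-13B-GPTQ_public",
    "results",
    split="latest",
)
```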
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of TheBloke/Orca-2-13B-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/Orca-2-13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T03:42:21.410226(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/Orca-2-13B-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/Orca-2-13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T03:42:21.410226(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/Orca-2-13B-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/Orca-2-13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-25T03:42:21.410226(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
fea4df952b66020009cd600a535c85a3a82f0a6a |
# Mosaic format for combining all datasets to train a Malaysian LLM
This repository stores dataset shards in Mosaic (MDS) format.
1. prepared at https://github.com/malaysia-ai/dedup-text-dataset/blob/main/pretrain-llm/combine-all.ipynb
2. using tokenizer https://huggingface.co/malaysia-ai/bpe-tokenizer
3. 4096 context length.
## how-to
1. git clone,
```bash
git lfs clone https://huggingface.co/datasets/malaysia-ai/mosaic-combine-all
```
2. load it,
```python
from streaming import LocalDataset
import numpy as np
from streaming.base.format.mds.encodings import Encoding, _encodings
# Register a custom 'uint16' codec so the 16-bit token ids stored in the shards can be decoded.
class UInt16(Encoding):
def encode(self, obj) -> bytes:
return obj.tobytes()
def decode(self, data: bytes):
return np.frombuffer(data, np.uint16)
_encodings['uint16'] = UInt16
dataset = LocalDataset('mosaic-combine-all')
len(dataset)
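# A minimal sketch of inspecting one pre-tokenized sample; each decoded sequence is
# 4096 tokens long, per the note above. Column names depend on how the shards were
# written, so print the keys first rather than assuming a particular field name.
sample = dataset[0]
print(sample.keys())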
``` | malaysia-ai/mosaic-combine-all | [
"language:ms",
"region:us"
]
| 2023-11-25T03:53:14+00:00 | {"language": ["ms"]} | 2023-11-28T08:18:52+00:00 | []
| [
"ms"
]
| TAGS
#language-Malay (macrolanguage) #region-us
|
# Mosaic format for combine all dataset to train Malaysian LLM
This repository is to store dataset shards using mosaic format.
1. prepared at URL
2. using tokenizer URL
3. 4096 context length.
## how-to
1. git clone,
2. load it,
| [
"# Mosaic format for combine all dataset to train Malaysian LLM\n\nThis repository is to store dataset shards using mosaic format.\n\n1. prepared at URL\n2. using tokenizer URL\n3. 4096 context length.",
"## how-to\n\n1. git clone,\n\n\n\n2. load it,"
]
| [
"TAGS\n#language-Malay (macrolanguage) #region-us \n",
"# Mosaic format for combine all dataset to train Malaysian LLM\n\nThis repository is to store dataset shards using mosaic format.\n\n1. prepared at URL\n2. using tokenizer URL\n3. 4096 context length.",
"## how-to\n\n1. git clone,\n\n\n\n2. load it,"
]
| [
16,
47,
13
]
| [
"passage: TAGS\n#language-Malay (macrolanguage) #region-us \n# Mosaic format for combine all dataset to train Malaysian LLM\n\nThis repository is to store dataset shards using mosaic format.\n\n1. prepared at URL\n2. using tokenizer URL\n3. 4096 context length.## how-to\n\n1. git clone,\n\n\n\n2. load it,"
]
|
3550669c9885b8863d7d17e78245514f889021b7 |
# Bangumi Image Base of Made In Abyss
This is the image base of the bangumi Made in Abyss. We detected 35 characters and 3476 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
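If you want to perform that cleaning locally, the sketch below shows one way to fetch and unpack the full archive; it assumes the `all.zip` linked above sits at the root of the `BangumiBase/madeinabyss` dataset repository.

```python
from huggingface_hub import hf_hub_download
import zipfile

# Download the full archive from the dataset repository and unpack it for manual cleaning.
path = hf_hub_download("BangumiBase/madeinabyss", "all.zip", repo_type="dataset")
with zipfile.ZipFile(path) as zf:
    zf.extractall("madeinabyss")
```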
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 95 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 81 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 46 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 25 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 116 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 374 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 38 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 37 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 1042 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 77 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 17 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 34 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 8 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 64 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 15 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 5 | [Download](15/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 16 | 26 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 731 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 40 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 133 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 23 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 6 | [Download](21/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 22 | 64 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 28 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 21 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 16 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 14 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 9 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 15 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 9 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 12 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 19 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 45 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 5 | [Download](33/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 186 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/madeinabyss | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-25T05:19:02+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-25T08:04:16+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Made In Abyss
===================================
This is the image base of bangumi Made in Abyss, we detected 35 characters, 3476 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual. If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
5323dada77c6adf4e3be0bd0cb5656b5b3099ddc | # Dataset Card for "omcs_dataset_of_commonsense_facts"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. Official github page: https://github.com/commonsense/omcs | dutta18/omcs_dataset_of_commonsense_facts | [
"region:us"
]
| 2023-11-25T05:20:02+00:00 | {"dataset_info": {"features": [{"name": "fact", "dtype": "string"}, {"name": "count", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 96649051, "num_examples": 1578238}], "download_size": 59984051, "dataset_size": 96649051}} | 2023-11-25T06:15:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "omcs_dataset_of_commonsense_facts"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. Official github page: URL | [
"# Dataset Card for \"omcs_dataset_of_commonsense_facts\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. Official github page: URL"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"omcs_dataset_of_commonsense_facts\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. Official github page: URL"
]
| [
6,
118
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"omcs_dataset_of_commonsense_facts\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. Official github page: URL"
]
|
c506682b4807d9208a4e2fbd58fbb6a58a9bf4fa | # Dataset Card for "special_samsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pvisnrt/special_samsum | [
"region:us"
]
| 2023-11-25T05:24:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "summary", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "tag_ids", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 20587448, "num_examples": 14732}, {"name": "test", "num_bytes": 1153897, "num_examples": 819}, {"name": "validation", "num_bytes": 1126310, "num_examples": 818}], "download_size": 5893445, "dataset_size": 22867655}} | 2023-11-25T05:24:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "special_samsum"
More Information needed | [
"# Dataset Card for \"special_samsum\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"special_samsum\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"special_samsum\"\n\nMore Information needed"
]
|
0271d60b7dcf05a4d4e647dd233cd62ce299e6b8 | # Dataset Card for "GlitchBench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | glitchbench/GlitchBench | [
"region:us"
]
| 2023-11-25T05:24:26+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "string"}, {"name": "reddit", "dtype": "string"}, {"name": "glitch-type", "dtype": "string"}, {"name": "game", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "validation", "num_bytes": 686309290.0, "num_examples": 607}], "download_size": 686303027, "dataset_size": 686309290.0}} | 2023-11-25T05:25:03+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GlitchBench"
More Information needed | [
"# Dataset Card for \"GlitchBench\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GlitchBench\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GlitchBench\"\n\nMore Information needed"
]
|
ebf2eb5ce08761e765b9005e877a5d570295c3bd |
# Dataset Card for Evaluation run of uukuguy/Orca-2-7b-f16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/Orca-2-7b-f16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/Orca-2-7b-f16](https://huggingface.co/uukuguy/Orca-2-7b-f16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16_public",
"harness_winogrande_5",
split="train")
```
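The aggregated metrics described above live in the "results" configuration and can be loaded the same way; this is a minimal sketch, assuming the "train" split points to the latest run as noted above:

```python
from datasets import load_dataset

# Aggregated results of the run ("train" always points to the latest results).
results = load_dataset("open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16_public",
	"results",
	split="train")
print(results[0])
```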
## Latest results
These are the [latest results from run 2023-11-25T05:57:22.285671](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16_public/blob/main/results_2023-11-25T05-57-22.285671.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2657693278278556,
"acc_stderr": 0.03135662339817443,
"acc_norm": 0.2672828870617276,
"acc_norm_stderr": 0.032198017213766285,
"mc1": 0.2350061199510404,
"mc1_stderr": 0.0148430615077316,
"mc2": 0.4836424685770379,
"mc2_stderr": 0.017011052216455772,
"em": 0.0,
"em_stderr": 0.0,
"f1": 4.718959731543626e-05,
"f1_stderr": 1.3131442946208309e-05
},
"harness|arc:challenge|25": {
"acc": 0.23378839590443687,
"acc_stderr": 0.01236822537850714,
"acc_norm": 0.2960750853242321,
"acc_norm_stderr": 0.013340916085246263
},
"harness|hellaswag|10": {
"acc": 0.2548297151961761,
"acc_stderr": 0.0043487487305299355,
"acc_norm": 0.2562238597888867,
"acc_norm_stderr": 0.004356547185847041
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.035914440841969694,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.035914440841969694
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3026315789473684,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.3026315789473684,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.30566037735849055,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.30566037735849055,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.03745554791462457,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.03745554791462457
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.27167630057803466,
"acc_stderr": 0.03391750322321659,
"acc_norm": 0.27167630057803466,
"acc_norm_stderr": 0.03391750322321659
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.04280105837364395,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.04280105837364395
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768077,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768077
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.225531914893617,
"acc_stderr": 0.02732107841738754,
"acc_norm": 0.225531914893617,
"acc_norm_stderr": 0.02732107841738754
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.038351539543994194,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.038351539543994194
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2896551724137931,
"acc_stderr": 0.037800192304380156,
"acc_norm": 0.2896551724137931,
"acc_norm_stderr": 0.037800192304380156
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.22486772486772486,
"acc_stderr": 0.021502096078229147,
"acc_norm": 0.22486772486772486,
"acc_norm_stderr": 0.021502096078229147
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.04343525428949098,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.04343525428949098
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3161290322580645,
"acc_stderr": 0.02645087448904277,
"acc_norm": 0.3161290322580645,
"acc_norm_stderr": 0.02645087448904277
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.22167487684729065,
"acc_stderr": 0.029225575892489614,
"acc_norm": 0.22167487684729065,
"acc_norm_stderr": 0.029225575892489614
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2,
"acc_stderr": 0.031234752377721175,
"acc_norm": 0.2,
"acc_norm_stderr": 0.031234752377721175
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.0347327959083696,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.0347327959083696
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.24870466321243523,
"acc_stderr": 0.031195840877700293,
"acc_norm": 0.24870466321243523,
"acc_norm_stderr": 0.031195840877700293
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2948717948717949,
"acc_stderr": 0.023119362758232273,
"acc_norm": 0.2948717948717949,
"acc_norm_stderr": 0.023119362758232273
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.02592887613276611,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.02592887613276611
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.29831932773109243,
"acc_stderr": 0.029719142876342846,
"acc_norm": 0.29831932773109243,
"acc_norm_stderr": 0.029719142876342846
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.23178807947019867,
"acc_stderr": 0.034454062719870546,
"acc_norm": 0.23178807947019867,
"acc_norm_stderr": 0.034454062719870546
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.30091743119266057,
"acc_stderr": 0.019664751366802114,
"acc_norm": 0.30091743119266057,
"acc_norm_stderr": 0.019664751366802114
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.030546745264953178,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.030546745264953178
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.22058823529411764,
"acc_stderr": 0.02910225438967407,
"acc_norm": 0.22058823529411764,
"acc_norm_stderr": 0.02910225438967407
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2320675105485232,
"acc_stderr": 0.02747974455080852,
"acc_norm": 0.2320675105485232,
"acc_norm_stderr": 0.02747974455080852
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.15246636771300448,
"acc_stderr": 0.024126204813252863,
"acc_norm": 0.15246636771300448,
"acc_norm_stderr": 0.024126204813252863
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2824427480916031,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.2824427480916031,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2809917355371901,
"acc_stderr": 0.04103203830514511,
"acc_norm": 0.2809917355371901,
"acc_norm_stderr": 0.04103203830514511
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.19444444444444445,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.19444444444444445,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.25766871165644173,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.25766871165644173,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.25892857142857145,
"acc_stderr": 0.04157751539865629,
"acc_norm": 0.25892857142857145,
"acc_norm_stderr": 0.04157751539865629
},
"harness|hendrycksTest-management|5": {
"acc": 0.39805825242718446,
"acc_stderr": 0.04846748253977239,
"acc_norm": 0.39805825242718446,
"acc_norm_stderr": 0.04846748253977239
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.24786324786324787,
"acc_stderr": 0.0282863240755644,
"acc_norm": 0.24786324786324787,
"acc_norm_stderr": 0.0282863240755644
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.22988505747126436,
"acc_stderr": 0.01504630184669182,
"acc_norm": 0.22988505747126436,
"acc_norm_stderr": 0.01504630184669182
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.23121387283236994,
"acc_stderr": 0.022698657167855713,
"acc_norm": 0.23121387283236994,
"acc_norm_stderr": 0.022698657167855713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.22793296089385476,
"acc_stderr": 0.014030149950805097,
"acc_norm": 0.22793296089385476,
"acc_norm_stderr": 0.014030149950805097
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2908496732026144,
"acc_stderr": 0.02600480036395211,
"acc_norm": 0.2908496732026144,
"acc_norm_stderr": 0.02600480036395211
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.24115755627009647,
"acc_stderr": 0.024296594034763426,
"acc_norm": 0.24115755627009647,
"acc_norm_stderr": 0.024296594034763426
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.25617283950617287,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.25617283950617287,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2553191489361702,
"acc_stderr": 0.026011992930902006,
"acc_norm": 0.2553191489361702,
"acc_norm_stderr": 0.026011992930902006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2438070404172099,
"acc_stderr": 0.01096650797217848,
"acc_norm": 0.2438070404172099,
"acc_norm_stderr": 0.01096650797217848
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.27941176470588236,
"acc_stderr": 0.02725720260611494,
"acc_norm": 0.27941176470588236,
"acc_norm_stderr": 0.02725720260611494
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24673202614379086,
"acc_stderr": 0.0174408203674025,
"acc_norm": 0.24673202614379086,
"acc_norm_stderr": 0.0174408203674025
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.32653061224489793,
"acc_stderr": 0.030021056238440317,
"acc_norm": 0.32653061224489793,
"acc_norm_stderr": 0.030021056238440317
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.27860696517412936,
"acc_stderr": 0.031700561834973086,
"acc_norm": 0.27860696517412936,
"acc_norm_stderr": 0.031700561834973086
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-virology|5": {
"acc": 0.24096385542168675,
"acc_stderr": 0.03329394119073528,
"acc_norm": 0.24096385542168675,
"acc_norm_stderr": 0.03329394119073528
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.29239766081871343,
"acc_stderr": 0.034886477134579215,
"acc_norm": 0.29239766081871343,
"acc_norm_stderr": 0.034886477134579215
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2350061199510404,
"mc1_stderr": 0.0148430615077316,
"mc2": 0.4836424685770379,
"mc2_stderr": 0.017011052216455772
},
"harness|winogrande|5": {
"acc": 0.5059194948697711,
"acc_stderr": 0.014051500838485807
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 4.718959731543626e-05,
"f1_stderr": 1.3131442946208309e-05
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
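As a quick sanity check on the block above, the per-subject MMLU scores can be collapsed into a single average; a minimal sketch, assuming the JSON has been parsed into a Python dict named `results`:

```python
# Mean accuracy over the "harness|hendrycksTest-*" (MMLU) entries shown above.
mmlu_accs = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(sum(mmlu_accs) / len(mmlu_accs))
```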
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16 | [
"region:us"
]
| 2023-11-25T06:00:32+00:00 | {"pretty_name": "Evaluation run of uukuguy/Orca-2-7b-f16", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/Orca-2-7b-f16](https://huggingface.co/uukuguy/Orca-2-7b-f16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-25T05:57:22.285671](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16_public/blob/main/results_2023-11-25T05-57-22.285671.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2657693278278556,\n \"acc_stderr\": 0.03135662339817443,\n \"acc_norm\": 0.2672828870617276,\n \"acc_norm_stderr\": 0.032198017213766285,\n \"mc1\": 0.2350061199510404,\n \"mc1_stderr\": 0.0148430615077316,\n \"mc2\": 0.4836424685770379,\n \"mc2_stderr\": 0.017011052216455772,\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 4.718959731543626e-05,\n \"f1_stderr\": 1.3131442946208309e-05\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.23378839590443687,\n \"acc_stderr\": 0.01236822537850714,\n \"acc_norm\": 0.2960750853242321,\n \"acc_norm_stderr\": 0.013340916085246263\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2548297151961761,\n \"acc_stderr\": 0.0043487487305299355,\n \"acc_norm\": 0.2562238597888867,\n \"acc_norm_stderr\": 0.004356547185847041\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.035914440841969694,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.035914440841969694\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3026315789473684,\n \"acc_stderr\": 0.03738520676119669,\n \"acc_norm\": 0.3026315789473684,\n \"acc_norm_stderr\": 0.03738520676119669\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.30566037735849055,\n \"acc_stderr\": 0.028353298073322666,\n \"acc_norm\": 0.30566037735849055,\n \"acc_norm_stderr\": 0.028353298073322666\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.03745554791462457,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.03745554791462457\n 
},\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.27167630057803466,\n \"acc_stderr\": 0.03391750322321659,\n \"acc_norm\": 0.27167630057803466,\n \"acc_norm_stderr\": 0.03391750322321659\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.04280105837364395,\n \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.04280105837364395\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768077,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768077\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.225531914893617,\n \"acc_stderr\": 0.02732107841738754,\n \"acc_norm\": 0.225531914893617,\n \"acc_norm_stderr\": 0.02732107841738754\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.038351539543994194,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.038351539543994194\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2896551724137931,\n \"acc_stderr\": 0.037800192304380156,\n \"acc_norm\": 0.2896551724137931,\n \"acc_norm_stderr\": 0.037800192304380156\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.22486772486772486,\n \"acc_stderr\": 0.021502096078229147,\n \"acc_norm\": 0.22486772486772486,\n \"acc_norm_stderr\": 0.021502096078229147\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.04343525428949098,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.04343525428949098\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3161290322580645,\n \"acc_stderr\": 0.02645087448904277,\n \"acc_norm\": 0.3161290322580645,\n \"acc_norm_stderr\": 0.02645087448904277\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.22167487684729065,\n \"acc_stderr\": 0.029225575892489614,\n \"acc_norm\": 0.22167487684729065,\n \"acc_norm_stderr\": 0.029225575892489614\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.031234752377721175,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.031234752377721175\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.0347327959083696,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.0347327959083696\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.24870466321243523,\n \"acc_stderr\": 0.031195840877700293,\n \"acc_norm\": 0.24870466321243523,\n \"acc_norm_stderr\": 
0.031195840877700293\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2948717948717949,\n \"acc_stderr\": 0.023119362758232273,\n \"acc_norm\": 0.2948717948717949,\n \"acc_norm_stderr\": 0.023119362758232273\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.23703703703703705,\n \"acc_stderr\": 0.02592887613276611,\n \"acc_norm\": 0.23703703703703705,\n \"acc_norm_stderr\": 0.02592887613276611\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.29831932773109243,\n \"acc_stderr\": 0.029719142876342846,\n \"acc_norm\": 0.29831932773109243,\n \"acc_norm_stderr\": 0.029719142876342846\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.23178807947019867,\n \"acc_stderr\": 0.034454062719870546,\n \"acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.034454062719870546\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.30091743119266057,\n \"acc_stderr\": 0.019664751366802114,\n \"acc_norm\": 0.30091743119266057,\n \"acc_norm_stderr\": 0.019664751366802114\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.030546745264953178,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.030546745264953178\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.22058823529411764,\n \"acc_stderr\": 0.02910225438967407,\n \"acc_norm\": 0.22058823529411764,\n \"acc_norm_stderr\": 0.02910225438967407\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2320675105485232,\n \"acc_stderr\": 0.02747974455080852,\n \"acc_norm\": 0.2320675105485232,\n \"acc_norm_stderr\": 0.02747974455080852\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.15246636771300448,\n \"acc_stderr\": 0.024126204813252863,\n \"acc_norm\": 0.15246636771300448,\n \"acc_norm_stderr\": 0.024126204813252863\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2824427480916031,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.2824427480916031,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2809917355371901,\n \"acc_stderr\": 0.04103203830514511,\n \"acc_norm\": 0.2809917355371901,\n \"acc_norm_stderr\": 0.04103203830514511\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.19444444444444445,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.19444444444444445,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.25766871165644173,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.25766871165644173,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.25892857142857145,\n \"acc_stderr\": 0.04157751539865629,\n \"acc_norm\": 0.25892857142857145,\n \"acc_norm_stderr\": 0.04157751539865629\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.39805825242718446,\n \"acc_stderr\": 0.04846748253977239,\n \"acc_norm\": 0.39805825242718446,\n \"acc_norm_stderr\": 0.04846748253977239\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.24786324786324787,\n \"acc_stderr\": 0.0282863240755644,\n \"acc_norm\": 0.24786324786324787,\n \"acc_norm_stderr\": 0.0282863240755644\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816507\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.22988505747126436,\n \"acc_stderr\": 0.01504630184669182,\n \"acc_norm\": 0.22988505747126436,\n \"acc_norm_stderr\": 0.01504630184669182\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.23121387283236994,\n \"acc_stderr\": 0.022698657167855713,\n \"acc_norm\": 0.23121387283236994,\n \"acc_norm_stderr\": 0.022698657167855713\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.22793296089385476,\n \"acc_stderr\": 0.014030149950805097,\n \"acc_norm\": 0.22793296089385476,\n \"acc_norm_stderr\": 0.014030149950805097\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2908496732026144,\n \"acc_stderr\": 0.02600480036395211,\n \"acc_norm\": 0.2908496732026144,\n \"acc_norm_stderr\": 0.02600480036395211\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.24115755627009647,\n \"acc_stderr\": 0.024296594034763426,\n \"acc_norm\": 0.24115755627009647,\n \"acc_norm_stderr\": 0.024296594034763426\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.024288533637726095,\n \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.024288533637726095\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2553191489361702,\n \"acc_stderr\": 0.026011992930902006,\n \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.026011992930902006\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2438070404172099,\n \"acc_stderr\": 0.01096650797217848,\n \"acc_norm\": 0.2438070404172099,\n \"acc_norm_stderr\": 0.01096650797217848\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.27941176470588236,\n \"acc_stderr\": 0.02725720260611494,\n \"acc_norm\": 0.27941176470588236,\n \"acc_norm_stderr\": 0.02725720260611494\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.24673202614379086,\n \"acc_stderr\": 0.0174408203674025,\n \"acc_norm\": 0.24673202614379086,\n \"acc_norm_stderr\": 0.0174408203674025\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.32653061224489793,\n \"acc_stderr\": 0.030021056238440317,\n \"acc_norm\": 0.32653061224489793,\n \"acc_norm_stderr\": 0.030021056238440317\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.27860696517412936,\n \"acc_stderr\": 0.031700561834973086,\n \"acc_norm\": 0.27860696517412936,\n \"acc_norm_stderr\": 0.031700561834973086\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.24096385542168675,\n \"acc_stderr\": 0.03329394119073528,\n \"acc_norm\": 0.24096385542168675,\n \"acc_norm_stderr\": 0.03329394119073528\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.29239766081871343,\n \"acc_stderr\": 0.034886477134579215,\n \"acc_norm\": 0.29239766081871343,\n \"acc_norm_stderr\": 0.034886477134579215\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2350061199510404,\n \"mc1_stderr\": 0.0148430615077316,\n \"mc2\": 0.4836424685770379,\n \"mc2_stderr\": 0.017011052216455772\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5059194948697711,\n \"acc_stderr\": 0.014051500838485807\n },\n \"harness|drop|3\": 
{\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 4.718959731543626e-05,\n \"f1_stderr\": 1.3131442946208309e-05\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/Orca-2-7b-f16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|arc:challenge|25_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|drop|3_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|gsm8k|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hellaswag|10_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T05-57-22.285671.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T05-57-22.285671.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T05-57-22.285671.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T05-57-22.285671.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", 
"data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": 
["**/details_harness|truthfulqa:mc|0_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["**/details_harness|winogrande|5_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-25T05-57-22.285671.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_25T05_57_22.285671", "path": ["results_2023-11-25T05-57-22.285671.parquet"]}, {"split": "latest", "path": ["results_2023-11-25T05-57-22.285671.parquet"]}]}]} | 2023-11-25T06:01:17+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/Orca-2-7b-f16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/Orca-2-7b-f16 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
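A minimal loading sketch (the repository id below follows the usual naming of leaderboard details datasets and is an assumption, as is the choice of configuration; any of the config names listed in this card can be used):

```python
from datasets import load_dataset

# Load the per-sample details of one task from the latest run.
# "harness_winogrande_5" is one of the configurations listed in this card;
# the repo id is an assumed example of the leaderboard's naming scheme.
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__Orca-2-7b-f16",
    "harness_winogrande_5",
    split="latest",
)
print(data[0])
```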
## Latest results
These are the latest results from run 2023-11-25T05:57:22.285671 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of uukuguy/Orca-2-7b-f16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/Orca-2-7b-f16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T05:57:22.285671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/Orca-2-7b-f16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/Orca-2-7b-f16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T05:57:22.285671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/Orca-2-7b-f16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/Orca-2-7b-f16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-25T05:57:22.285671(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
169ea69d02b933f79716bbf273d35dd8cc3e9752 | # Dataset Card for "omcs_50k_small"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. This is a small set of 50k facts from omcs to test your model's ability. | dutta18/omcs_50k_small | [
"region:us"
]
| 2023-11-25T06:06:35+00:00 | {"dataset_info": {"features": [{"name": "count", "dtype": "int64"}, {"name": "fact", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3233742, "num_examples": 50000}], "download_size": 1982700, "dataset_size": 3233742}} | 2023-11-25T06:14:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "omcs_50k_small"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. This is a small set of 50k facts from omcs to test your model's ability. | [
"# Dataset Card for \"omcs_50k_small\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. This is a small set of 50k facts from omcs to test your models ability"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"omcs_50k_small\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. This is a small set of 50k facts from omcs to test your models ability"
]
| [
6,
124
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"omcs_50k_small\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows. This is a small set of 50k facts from omcs to test your models ability"
]
|
9ff6aa709feaacd1770043c2c583350213ce803b |
# Dataset of shimakaze (Azur Lane)
This is the dataset of shimakaze (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 555 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 602 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 555 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 555 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 474 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 602 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 602 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/shimakaze_azurlane | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-25T06:34:37+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-25T06:34:54+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of shimakaze (Azur Lane)
================================
This is the dataset of shimakaze (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
edba439c736de713f2154ee15ba436412e9711a1 | # Dataset Card for "omcs_50k_with_FAISS"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows.
The embeddings used for FAISS indexing are provided in the dataset as the 'embeddings' column.
To implement FAISS indexing:
dataset.add_faiss_index(column='embeddings')
The above code needs to be executed. Then FAISS indexing can be verified. | dutta18/omcs_50k_with_FAISS | [
"region:us"
]
| 2023-11-25T07:27:50+00:00 | {"dataset_info": {"features": [{"name": "count", "dtype": "int64"}, {"name": "fact", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 157033742, "num_examples": 50000}], "download_size": 186812200, "dataset_size": 157033742}} | 2023-11-25T09:55:32+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "omcs_50k_with_FAISS"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows.
The embeddings used for FAISS indexing are provided in the dataset as the 'embeddings' column.
To implement FAISS indexing:
dataset.add_faiss_index(column='embeddings')
The above code needs to be executed. Then FAISS indexing can be verified (a usage sketch is included at the end of this entry). | [
"# Dataset Card for \"omcs_50k_with_FAISS\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows.\n\nThe embedding for implementing FAISS indexing is given in the dataset as the 'embedding' column.\n\nTo implement FAISS indexing:\n\ndataset.add_faiss_index(column='embeddings')\n\nThe above code needed to be executed. Then FAISS indexing can be verified."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"omcs_50k_with_FAISS\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows.\n\nThe embedding for implementing FAISS indexing is given in the dataset as the 'embedding' column.\n\nTo implement FAISS indexing:\n\ndataset.add_faiss_index(column='embeddings')\n\nThe above code needed to be executed. Then FAISS indexing can be verified."
]
| [
6,
183
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"omcs_50k_with_FAISS\"\n\nWhen people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows.\n\nThe embedding for implementing FAISS indexing is given in the dataset as the 'embedding' column.\n\nTo implement FAISS indexing:\n\ndataset.add_faiss_index(column='embeddings')\n\nThe above code needed to be executed. Then FAISS indexing can be verified."
]
|
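For reference, a minimal end-to-end sketch of the FAISS workflow described in this entry. The query encoder below is a hypothetical choice; whichever model is used must produce vectors with the same dimensionality as the stored 'embeddings' column.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Load the facts together with their precomputed embeddings, then build the index.
dataset = load_dataset("dutta18/omcs_50k_with_FAISS", split="train")
dataset.add_faiss_index(column="embeddings")

# Encode a query (assumption: the stored vectors came from a compatible encoder).
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
query_vec = np.asarray(encoder.encode("Why do people carry umbrellas?"), dtype=np.float32)

# Retrieve the nearest common-sense facts.
scores, retrieved = dataset.get_nearest_examples("embeddings", query_vec, k=5)
print(retrieved["fact"])
```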
fcfcbb7cb9263b490a2a886e041750be1514b1a5 | # Dataset Card for "MC-ViMath"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | longhoang06/MC-ViMath | [
"region:us"
]
| 2023-11-25T07:40:36+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6086303, "num_examples": 9328}], "download_size": 3016997, "dataset_size": 6086303}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-25T07:40:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MC-ViMath"
More Information needed | [
"# Dataset Card for \"MC-ViMath\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MC-ViMath\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MC-ViMath\"\n\nMore Information needed"
]
|
c47673c2255dbb2b551657323b66329266503d0e |
# Dataset of yoshimi (Blue Archive)
This is the dataset of yoshimi (Blue Archive), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 564 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 660 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 564 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 564 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 529 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 660 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 660 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/yoshimi_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-25T07:53:46+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-25T07:54:02+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of yoshimi (Blue Archive)
=================================
This is the dataset of yoshimi (Blue Archive), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
9d7f4524d9c3e9c94101ab787130e9ac743970f8 |
# Dataset of tsurugi (Blue Archive)
This is the dataset of tsurugi (Blue Archive), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 531 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 667 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 531 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 531 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 485 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 667 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 667 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/tsurugi_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
]
| 2023-11-25T08:08:12+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-11-25T08:08:32+00:00 | []
| []
| TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of tsurugi (Blue Archive)
=================================
This is the dataset of tsurugi (Blue Archive), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).(LittleAppleWebUI)
| []
| [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
]
|
bb4ed3d76fc8ebc04182e99bdd64c10d88d9e61f | This is a Wikipedia passages dataset for ODQA retrievers.
Each passage has 256~ tokens, split by the gpt-4 tokenizer using tiktoken.
Token count
```ts
{'~128': 1415068, '128~256': 1290011,
'256~512': 18756476, '512~1024': 667,
'1024~2048': 12, '2048~4096': 0, '4096~8192': 0,
'8192~16384': 0, '16384~32768': 0, '32768~65536': 0,
'65536~128000': 0, '128000~': 0}
```
Text count
```ts
{'~512': 1556876,'512~1024': 6074975, '1024~2048': 13830329,
'2048~4096': 49, '4096~8192': 2, '8192~16384': 3, '16384~32768': 0,
'32768~65536': 0, '65536~': 0}
```
Token percent
```ts
{'~128': '6.59%', '128~256': '6.01%', '256~512': '87.39%',
'512~1024': '0.00%', '1024~2048': '0.00%', '2048~4096': '0.00%',
'4096~8192': '0.00%', '8192~16384': '0.00%', '16384~32768': '0.00%',
'32768~65536': '0.00%', '65536~128000': '0.00%', '128000~': '0.00%'}
```
Text percent
```ts
{'~512': '7.25%', '512~1024': '28.31%', '1024~2048': '64.44%',
'2048~4096': '0.00%', '4096~8192': '0.00%', '8192~16384': '0.00%',
'16384~32768': '0.00%', '32768~65536': '0.00%', '65536~': '0.00%'}
```
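As a sanity check, here is a minimal sketch of how the token-count buckets above can be recomputed with tiktoken (streaming is used so the full dump does not have to be downloaded; only the smaller buckets from the table are reproduced):

```python
import tiktoken
from datasets import load_dataset

enc = tiktoken.encoding_for_model("gpt-4")
bounds = [128, 256, 512, 1024]  # bucket edges from the table above
labels = ["~128", "128~256", "256~512", "512~1024", "1024~"]
counts = {label: 0 for label in labels}

# Stream the passages and bucket each one by its gpt-4 token count.
stream = load_dataset("seonglae/wikipedia-256", "gpt-4", split="train", streaming=True)
for row in stream:
    # disallowed_special=() avoids errors if a passage happens to contain
    # text that looks like a special token.
    n = len(enc.encode(row["text"], disallowed_special=()))
    counts[labels[sum(n >= b for b in bounds)]] += 1

print(counts)
```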
| seonglae/wikipedia-256 | [
"task_categories:question-answering",
"language:en",
"wikipedia",
"region:us"
]
| 2023-11-25T08:10:11+00:00 | {"language": ["en"], "task_categories": ["question-answering"], "dataset_info": {"config_name": "gpt-4", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24166736905, "num_examples": 21462234}], "download_size": 12274801108, "dataset_size": 24166736905}, "configs": [{"config_name": "gpt-4", "data_files": [{"split": "train", "path": "gpt-4/train-*"}]}], "tags": ["wikipedia"]} | 2023-11-26T15:41:22+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #language-English #wikipedia #region-us
| This is a Wikipedia passages dataset for ODQA retrievers.
Each passage has 256~ tokens, split by the gpt-4 tokenizer using tiktoken.
Token count
Text count
Token percent
Text percent
| []
| [
"TAGS\n#task_categories-question-answering #language-English #wikipedia #region-us \n"
]
| [
24
]
| [
"passage: TAGS\n#task_categories-question-answering #language-English #wikipedia #region-us \n"
]
|
2f028fc664ce99bd19ae0c88b9a3033f1c861f65 |
# English Hinglish
English to Hinglish Dataset processed from [findnitai/english-to-hinglish](https://huggingface.co/datasets/findnitai/english-to-hinglish).
Sources:
1. Hinglish TOP Dataset
2. CMU English Dog
3. HinGE
4. PHINC | rvv-karma/English-Hinglish | [
"task_categories:translation",
"task_categories:text-generation",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"language:en",
"language:hi",
"license:apache-2.0",
"region:us"
]
| 2023-11-25T09:13:41+00:00 | {"language": ["en", "hi"], "license": "apache-2.0", "multilinguality": ["multilingual", "translation"], "size_categories": ["10K<n<100K"], "task_categories": ["translation", "text-generation"], "pretty_name": "English Hinglish", "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "hi_en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12698467, "num_examples": 132371}, {"name": "test", "num_bytes": 5431064, "num_examples": 56731}], "download_size": 11695921, "dataset_size": 18129531}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-25T10:14:40+00:00 | []
| [
"en",
"hi"
]
| TAGS
#task_categories-translation #task_categories-text-generation #multilinguality-multilingual #multilinguality-translation #size_categories-10K<n<100K #language-English #language-Hindi #license-apache-2.0 #region-us
|
# English Hinglish
English to Hinglish Dataset processed from findnitai/english-to-hinglish.
Sources:
1. Hinglish TOP Dataset
2. CMU English Dog
3. HinGE
4. PHINC | [
"# English Hinglish\n\nEnglish to Hinglish Dataset processed from findnitai/english-to-hinglish.\n\nSources:\n1. Hinglish TOP Dataset\n2. CMU English Dog\n3. HinGE\n4. PHINC"
]
| [
"TAGS\n#task_categories-translation #task_categories-text-generation #multilinguality-multilingual #multilinguality-translation #size_categories-10K<n<100K #language-English #language-Hindi #license-apache-2.0 #region-us \n",
"# English Hinglish\n\nEnglish to Hinglish Dataset processed from findnitai/english-to-hinglish.\n\nSources:\n1. Hinglish TOP Dataset\n2. CMU English Dog\n3. HinGE\n4. PHINC"
]
| [
69,
49
]
| [
"passage: TAGS\n#task_categories-translation #task_categories-text-generation #multilinguality-multilingual #multilinguality-translation #size_categories-10K<n<100K #language-English #language-Hindi #license-apache-2.0 #region-us \n# English Hinglish\n\nEnglish to Hinglish Dataset processed from findnitai/english-to-hinglish.\n\nSources:\n1. Hinglish TOP Dataset\n2. CMU English Dog\n3. HinGE\n4. PHINC"
]
|
a3dd06923c13536b1e2ea05d3724062ff6bdf2e6 |
# Bangumi Image Base of Your Lie In April
This is the image base of bangumi Your Lie in April, we detected 26 characters, 2374 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 609 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 135 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 82 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 45 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 64 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 25 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 89 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 32 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 108 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 118 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 15 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 30 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 86 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 28 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 38 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 27 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 75 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 86 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 83 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 112 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 60 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 13 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 7 | [Download](22/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 23 | 6 | [Download](23/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 24 | 7 | [Download](24/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 394 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/yourlieinapril | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-25T09:23:04+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-25T11:18:34+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Your Lie In April
=======================================
This is the image base of bangumi Your Lie in April, we detected 26 characters, 2374 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
38d7974f1b0fc406c7a11dde4f0659e91fcbd12e |
# Bangumi Image Base of Natsume's Book Of Friends
This is the image base of bangumi Natsume's Book of Friends, we detected 60 characters, 6311 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 2720 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 274 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 199 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 233 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 102 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 52 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 89 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 110 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 373 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 74 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 58 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 48 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 150 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 39 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 31 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 89 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 37 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 82 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 87 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 163 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 123 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 43 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 84 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 33 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 16 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 18 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 33 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 23 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 20 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 21 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 34 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 26 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 20 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 22 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 20 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 10 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 27 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 9 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 16 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 104 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 22 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 61 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 11 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 26 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 42 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 8 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 9 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 21 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 8 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 17 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 17 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 10 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 28 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 15 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 102 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 19 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 15 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 8 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 9 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 151 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/natsumesbookoffriends | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-25T09:23:26+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-25T13:44:22+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Natsume's Book Of Friends
===============================================
This is the image base of bangumi Natsume's Book of Friends, we detected 60 characters, 6311 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
a8e12409f699ff77d66380cba5bbb583e7ae8455 | # Dataset Card for "123"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | manojpatil/123 | [
"region:us"
]
| 2023-11-25T09:48:09+00:00 | {"dataset_info": {"features": [{"name": "r", "dtype": "int64"}, {"name": "theta", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 173, "num_examples": 7}], "download_size": 1415, "dataset_size": 173}} | 2023-11-25T09:59:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "123"
More Information needed | [
"# Dataset Card for \"123\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"123\"\n\nMore Information needed"
]
| [
6,
11
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"123\"\n\nMore Information needed"
]
|
ce2d06fdee1f7cbd9cb37d889f0b13d9fcf7f6d4 |
# Bangumi Image Base of Danshi Koukousei No Nichijou
This is the image base of bangumi Danshi Koukousei no Nichijou, we detected 25 characters, 1831 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 320 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 127 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 364 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 29 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 75 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 106 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 20 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 54 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 61 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 69 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 21 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 21 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 54 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 9 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 46 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 229 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 29 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 36 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 56 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 7 | [Download](19/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 20 | 12 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 28 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 7 | [Download](22/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 23 | 7 | [Download](23/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 44 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/danshikoukouseinonichijou | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-25T10:04:03+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-25T11:12:24+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Danshi Koukousei No Nichijou
==================================================
This is the image base of bangumi Danshi Koukousei no Nichijou, we detected 25 characters, 1831 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy. If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
5ab10562888b4dd9735e434496f7e6112859329d |
# English Hinglish (TOP Dataset)
This dataset is generated from [Hinglish-TOP Dataset](https://github.com/google-research-datasets/hinglish-top-dataset).
Data distribution:
1. Train
a. Human Generated - 6513
b. Synthetically generated - 170083
2. Validation
a. Human Generated - 1390
b. Synthetically generated - 0
3. Test
a. Human Generated - 6513
b. Synthetically generated - 0
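For reference, a minimal loading sketch with the `datasets` library might look as follows; the split and column names used here are taken from this card's metadata rather than from the description above:

```python
from datasets import load_dataset

# load the train split; the card also exposes "val" and "test" splits
dataset = load_dataset("rvv-karma/English-Hinglish-TOP", split="train")

# columns per the card metadata: en, hi_en, en_parse, hi_en_parse, domain, generated_by
sample = dataset[0]
print(sample["en"], "->", sample["hi_en"])
```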
| rvv-karma/English-Hinglish-TOP | [
"task_categories:translation",
"task_categories:text-generation",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"language:en",
"language:hi",
"license:apache-2.0",
"region:us"
]
| 2023-11-25T10:12:31+00:00 | {"language": ["en", "hi"], "license": "apache-2.0", "multilinguality": ["multilingual", "translation"], "size_categories": ["10K<n<100K"], "task_categories": ["translation", "text-generation"], "pretty_name": "English Hinglish", "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "hi_en", "dtype": "string"}, {"name": "en_parse", "dtype": "string"}, {"name": "hi_en_parse", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "generated_by", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56585917, "num_examples": 176596}, {"name": "val", "num_bytes": 423297, "num_examples": 1390}, {"name": "test", "num_bytes": 2056405, "num_examples": 6513}], "download_size": 26490229, "dataset_size": 59065619}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-26T17:18:53+00:00 | []
| [
"en",
"hi"
]
| TAGS
#task_categories-translation #task_categories-text-generation #multilinguality-multilingual #multilinguality-translation #size_categories-10K<n<100K #language-English #language-Hindi #license-apache-2.0 #region-us
|
# English Hinglish (TOP Dataset)
This dataset is generated from Hinglish-TOP Dataset.
Data distribution:
1. Train
a. Human Generated - 6513
b. Synthetically generated - 170083
2. Validation
a. Human Generated - 1390
b. Synthetically generated - 0
3. Test
a. Human Generated - 6513
b. Synthetically generated - 0
| [
"# English Hinglish (TOP Dataset)\n\nThis dataset is generated from Hinglish-TOP Dataset.\n\nData distribution:\n1. Train \n a. Human Generated - 6513 \n b. Synthetically generated - 170083 \n2. Validation \n a. Human Generated - 1390 \n b. Synthetically generated - 0 \n3. Test \n a. Human Generated - 6513 \n b. Synthetically generated - 0"
]
| [
"TAGS\n#task_categories-translation #task_categories-text-generation #multilinguality-multilingual #multilinguality-translation #size_categories-10K<n<100K #language-English #language-Hindi #license-apache-2.0 #region-us \n",
"# English Hinglish (TOP Dataset)\n\nThis dataset is generated from Hinglish-TOP Dataset.\n\nData distribution:\n1. Train \n a. Human Generated - 6513 \n b. Synthetically generated - 170083 \n2. Validation \n a. Human Generated - 1390 \n b. Synthetically generated - 0 \n3. Test \n a. Human Generated - 6513 \n b. Synthetically generated - 0"
]
| [
69,
91
]
| [
"passage: TAGS\n#task_categories-translation #task_categories-text-generation #multilinguality-multilingual #multilinguality-translation #size_categories-10K<n<100K #language-English #language-Hindi #license-apache-2.0 #region-us \n# English Hinglish (TOP Dataset)\n\nThis dataset is generated from Hinglish-TOP Dataset.\n\nData distribution:\n1. Train \n a. Human Generated - 6513 \n b. Synthetically generated - 170083 \n2. Validation \n a. Human Generated - 1390 \n b. Synthetically generated - 0 \n3. Test \n a. Human Generated - 6513 \n b. Synthetically generated - 0"
]
|
7f06d9b2bd9313d5c326a49e9876dc4bfd59a4bb | # Dataset Card for "metamath-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aops02/metamath-prompt | [
"region:us"
]
| 2023-11-25T10:21:24+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2461676, "num_examples": 1399}], "download_size": 0, "dataset_size": 2461676}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T20:47:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "metamath-prompt"
More Information needed | [
"# Dataset Card for \"metamath-prompt\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"metamath-prompt\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"metamath-prompt\"\n\nMore Information needed"
]
|
2384ce648a09feb87b69e409e8f253cf0ccf5895 | # Dataset Card for "undl_zh2en_aligned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/undl_zh2en_aligned | [
"region:us"
]
| 2023-11-25T10:38:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "record", "dtype": "string"}, {"name": "clean_para_index_set_pair", "dtype": "string"}, {"name": "src", "dtype": "string"}, {"name": "dst", "dtype": "string"}, {"name": "src_text", "dtype": "string"}, {"name": "dst_text", "dtype": "string"}, {"name": "src_rate", "dtype": "float64"}, {"name": "dst_rate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 8884444751, "num_examples": 15331650}], "download_size": 2443622169, "dataset_size": 8884444751}} | 2023-11-25T11:39:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "undl_zh2en_aligned"
More Information needed | [
"# Dataset Card for \"undl_zh2en_aligned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_zh2en_aligned\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_zh2en_aligned\"\n\nMore Information needed"
]
|
214452e089abc76d0dbfcce83d7f942f46dc46f0 | # Dataset Card for "rework_undl_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bot-yaya/rework_undl_text | [
"region:us"
]
| 2023-11-25T10:39:24+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "ar", "dtype": "string"}, {"name": "zh", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "fr", "dtype": "string"}, {"name": "ru", "dtype": "string"}, {"name": "es", "dtype": "string"}, {"name": "de", "dtype": "string"}, {"name": "record", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48622457871, "num_examples": 165840}], "download_size": 3906189450, "dataset_size": 48622457871}} | 2023-11-25T16:29:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rework_undl_text"
More Information needed | [
"# Dataset Card for \"rework_undl_text\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rework_undl_text\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"rework_undl_text\"\n\nMore Information needed"
]
|
aef543bd1daefd48b2e2f914fd10ecb847446920 |
# Bangumi Image Base of Nana
This is the image base of bangumi NANA, we detected 38 characters, 4462 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 102 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 885 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 60 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 72 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 33 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 19 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 36 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 979 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 105 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 390 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 25 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 60 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 143 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 122 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 76 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 25 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 20 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 50 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 416 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 18 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 83 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 31 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 16 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 29 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 58 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 52 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 39 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 40 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 189 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 38 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 34 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 35 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 60 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 7 | [Download](33/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 34 | 18 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 13 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 6 | [Download](36/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 78 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/nana | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-25T10:45:08+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-25T13:26:05+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Nana
==========================
This is the image base of bangumi NANA, we detected 38 characters, 4462 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy. If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
9952469be8bef43a715683721d743ad2e29f5683 | # Dataset Card for "gpt2-winogrande_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/gpt2-winogrande_base | [
"region:us"
]
| 2023-11-25T11:07:33+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "request", "dtype": "string"}, {"name": "input_perplexity", "dtype": "float64"}, {"name": "input_likelihood", "dtype": "float64"}, {"name": "output_perplexity", "dtype": "float64"}, {"name": "output_likelihood", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 357278, "num_examples": 1267}], "download_size": 162691, "dataset_size": 357278}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-25T11:07:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gpt2-winogrande_base"
More Information needed | [
"# Dataset Card for \"gpt2-winogrande_base\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gpt2-winogrande_base\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gpt2-winogrande_base\"\n\nMore Information needed"
]
|
aa29d310b3846d2920155af0eea93b58bb87ece8 | # Dataset Card for "winogrande_inverted_option"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/winogrande_inverted_option | [
"region:us"
]
| 2023-11-25T11:18:36+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "request", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 316710, "num_examples": 1267}], "download_size": 123029, "dataset_size": 316710}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-25T11:18:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "winogrande_inverted_option"
More Information needed | [
"# Dataset Card for \"winogrande_inverted_option\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"winogrande_inverted_option\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"winogrande_inverted_option\"\n\nMore Information needed"
]
|
60b411525461e920a04c8bc465c431e08b985c6e | # Dataset Card for "gpt2-winogrande_inverted_option"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/gpt2-winogrande_inverted_option | [
"region:us"
]
| 2023-11-25T11:19:23+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "request", "dtype": "string"}, {"name": "input_perplexity", "dtype": "float64"}, {"name": "input_likelihood", "dtype": "float64"}, {"name": "output_perplexity", "dtype": "float64"}, {"name": "output_likelihood", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 357254, "num_examples": 1267}], "download_size": 162698, "dataset_size": 357254}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-25T11:19:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gpt2-winogrande_inverted_option"
More Information needed | [
"# Dataset Card for \"gpt2-winogrande_inverted_option\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gpt2-winogrande_inverted_option\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gpt2-winogrande_inverted_option\"\n\nMore Information needed"
]
|
9682a81bd21e51212ee9227a2b484d30e887fe0a | # Dataset Card for "phi-winogrande_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/phi-winogrande_base | [
"region:us"
]
| 2023-11-25T11:46:11+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "request", "dtype": "string"}, {"name": "input_perplexity", "dtype": "float64"}, {"name": "input_likelihood", "dtype": "float64"}, {"name": "output_perplexity", "dtype": "float64"}, {"name": "output_likelihood", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 357278, "num_examples": 1267}], "download_size": 162624, "dataset_size": 357278}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-25T11:46:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "phi-winogrande_base"
More Information needed | [
"# Dataset Card for \"phi-winogrande_base\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"phi-winogrande_base\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"phi-winogrande_base\"\n\nMore Information needed"
]
|
ee8291ab7539676766639596d677c469619bf3a5 | # Dataset Card for "phi-winogrande_inverted_option"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/phi-winogrande_inverted_option | [
"region:us"
]
| 2023-11-25T11:57:38+00:00 | {"dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "request", "dtype": "string"}, {"name": "input_perplexity", "dtype": "float64"}, {"name": "input_likelihood", "dtype": "float64"}, {"name": "output_perplexity", "dtype": "float64"}, {"name": "output_likelihood", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 35815, "num_examples": 127}], "download_size": 21040, "dataset_size": 35815}, {"config_name": "shard_0_0_10", "features": [{"name": "id", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "request", "dtype": "string"}, {"name": "input_perplexity", "dtype": "float64"}, {"name": "input_likelihood", "dtype": "float64"}, {"name": "output_perplexity", "dtype": "float64"}, {"name": "output_likelihood", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 35815, "num_examples": 127}], "download_size": 21040, "dataset_size": 35815}], "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}, {"config_name": "shard_0_0_10", "data_files": [{"split": "validation", "path": "shard_0_0_10/validation-*"}]}]} | 2023-11-25T15:26:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "phi-winogrande_inverted_option"
More Information needed | [
"# Dataset Card for \"phi-winogrande_inverted_option\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"phi-winogrande_inverted_option\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"phi-winogrande_inverted_option\"\n\nMore Information needed"
]
|
496e38f114252e487c83566b8603aaaab009e103 | # Dataset Card for "AIPD_nlp_sentence_dataset_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | patent/AIPD_nlp_sentence_dataset_v2 | [
"region:us"
]
| 2023-11-25T12:36:31+00:00 | {"dataset_info": {"features": [{"name": "patent_num", "dtype": "int64"}, {"name": "claim_num1", "dtype": "int64"}, {"name": "claim_num2", "dtype": "int64"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1141724170.7014475, "num_examples": 453043}, {"name": "test", "num_bytes": 63431500.71087167, "num_examples": 25170}, {"name": "valid", "num_bytes": 63428980.58768093, "num_examples": 25169}], "download_size": 481158714, "dataset_size": 1268584652.0}} | 2023-11-25T12:37:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "AIPD_nlp_sentence_dataset_v2"
More Information needed | [
"# Dataset Card for \"AIPD_nlp_sentence_dataset_v2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"AIPD_nlp_sentence_dataset_v2\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"AIPD_nlp_sentence_dataset_v2\"\n\nMore Information needed"
]
|
ed3d69abb765fd304ccdcc646ec6ca3c49740d15 | # Dataset Card for "english_dialects"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Crowdsourced high-quality UK and Ireland English Dialect speech data set.](https://www.openslr.org/83/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Open-source Multi-speaker Corpora of the English Accents in the British Isles](https://aclanthology.org/2020.lrec-1.804/)
### Dataset Summary
This dataset consists of 31 hours of transcribed high-quality audio of English sentences recorded by 120 volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The speakers self-identified as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.
The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage.
The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words.
Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the [CSTR VCTK corpus](https://huggingface.co/datasets/vctk) and the Speech Accent Archive to allow for easy comparison of personal and regional accents.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/83) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can be used to train a model for Text-To-Speech (TTS).
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
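As a side note, the word error rate mentioned above can be computed with, for instance, the `evaluate` library; the prediction and reference strings in this sketch are placeholders (the reference reuses the example transcription shown further down this card), not the output of any particular model:

```python
import evaluate

# minimal WER sketch with placeholder transcriptions
wer_metric = evaluate.load("wer")
predictions = ["it is thirty degrees with drizzle in exeter"]   # hypothetical ASR output
references = ["it is thirteen degrees with drizzle in exeter"]  # ground-truth transcription
print(wer_metric.compute(predictions=predictions, references=references))  # 1 substitution / 8 words = 0.125
```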
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Irish male config, simply specify the corresponding language config name (i.e., "irish_male" for Irish male speakers):
```python
from datasets import load_dataset
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train", streaming=True)
print(next(iter(dataset)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

# download the config locally, then wrap it in a PyTorch DataLoader
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file called `audio` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'line_id': 'BI0057', 'audio': {'path': 'irm_02484_00388340153.wav', 'array': array([-1.22070312e-04, -1.52587891e-04, -1.22070312e-04, ...,
1.52587891e-04, 9.15527344e-05, 1.83105469e-04]), 'sampling_rate': 48000}, 'text': 'It is thirteen degrees with drizzle in Exeter', 'speaker_id': 2484}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short access sketch is given right after this list.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- line_id: unique id of the transcription. The same line id can be found for multiple speakers.
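To make the recommended access pattern for the `audio` field concrete, a small sketch (reusing the "irish_male" config from the examples above) could look like this:

```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train")

# query the sample index first, so only this one audio file is decoded and resampled
sample = dataset[0]
audio = sample["audio"]  # dict with "path", "array" and "sampling_rate"
print(sample["text"], sample["speaker_id"], sample["line_id"])
print(audio["sampling_rate"], len(audio["array"]))

# avoid dataset["audio"][0]: it would decode every audio file in the split first
```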
### Data Statistics

## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
License: ([CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en))
### Citation Information
```
@inproceedings{demirsahin-etal-2020-open,
title = "Open-source Multi-speaker Corpora of the {E}nglish Accents in the {B}ritish Isles",
author = "Demirsahin, Isin and
Kjartansson, Oddur and
Gutkin, Alexander and
Rivera, Clara",
editor = "Calzolari, Nicoletta and
B{\'e}chet, Fr{\'e}d{\'e}ric and
Blache, Philippe and
Choukri, Khalid and
Cieri, Christopher and
Declerck, Thierry and
Goggi, Sara and
Isahara, Hitoshi and
Maegaard, Bente and
Mariani, Joseph and
Mazo, H{\'e}l{\`e}ne and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.804",
pages = "6532--6541",
abstract = "This paper presents a dataset of transcribed high-quality audio of English sentences recorded by volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage. The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. The resulting corpora include over 31 hours of recordings from 120 volunteers who self-identify as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
| ylacombe/english_dialects | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-11-25T12:40:07+00:00 | {"language": ["en"], "license": "cc-by-sa-4.0", "task_categories": ["text-to-speech", "text-to-audio"], "pretty_name": "Google English Dialects", "dataset_info": [{"config_name": "irish_male", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 247383069, "num_examples": 450}], "download_size": 202720287, "dataset_size": 247383069}, {"config_name": "midlands_female", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 162542037, "num_examples": 246}], "download_size": 132978651, "dataset_size": 162542037}, {"config_name": "midlands_male", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 253069802, "num_examples": 450}], "download_size": 206197835, "dataset_size": 253069802}, {"config_name": "northern_female", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 473568497, "num_examples": 750}], "download_size": 394563149, "dataset_size": 473568497}, {"config_name": "northern_male", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1248889021.568, "num_examples": 2097}], "download_size": 1018089994, "dataset_size": 1248889021.568}, {"config_name": "scottish_female", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 547825387, "num_examples": 894}], "download_size": 444335278, "dataset_size": 547825387}, {"config_name": "scottish_male", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 957274572.368, "num_examples": 1649}], "download_size": 771585437, "dataset_size": 957274572.368}, {"config_name": "southern_female", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2500285879.784, "num_examples": 4161}], "download_size": 2043363777, "dataset_size": 2500285879.784}, {"config_name": "southern_male", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2566139827.568, "num_examples": 4331}], "download_size": 2105363890, "dataset_size": 2566139827.568}, {"config_name": "welsh_female", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 852961200.976, "num_examples": 1199}], 
"download_size": 737774228, "dataset_size": 852961200.976}, {"config_name": "welsh_male", "features": [{"name": "line_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1026953293.4, "num_examples": 1650}], "download_size": 926205900, "dataset_size": 1026953293.4}], "configs": [{"config_name": "irish_male", "data_files": [{"split": "train", "path": "irish_male/train-*"}]}, {"config_name": "midlands_female", "data_files": [{"split": "train", "path": "midlands_female/train-*"}]}, {"config_name": "midlands_male", "data_files": [{"split": "train", "path": "midlands_male/train-*"}]}, {"config_name": "northern_female", "data_files": [{"split": "train", "path": "northern_female/train-*"}]}, {"config_name": "northern_male", "data_files": [{"split": "train", "path": "northern_male/train-*"}]}, {"config_name": "scottish_female", "data_files": [{"split": "train", "path": "scottish_female/train-*"}]}, {"config_name": "scottish_male", "data_files": [{"split": "train", "path": "scottish_male/train-*"}]}, {"config_name": "southern_female", "data_files": [{"split": "train", "path": "southern_female/train-*"}]}, {"config_name": "southern_male", "data_files": [{"split": "train", "path": "southern_male/train-*"}]}, {"config_name": "welsh_female", "data_files": [{"split": "train", "path": "welsh_female/train-*"}]}, {"config_name": "welsh_male", "data_files": [{"split": "train", "path": "welsh_male/train-*"}]}]} | 2023-11-27T10:32:58+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-to-speech #task_categories-text-to-audio #language-English #license-cc-by-sa-4.0 #region-us
| # Dataset Card for "english_dialects"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- How to use
- Dataset Structure
- Data Instances
- Data Fields
- Data Statistics
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Crowdsourced high-quality UK and Ireland English Dialect speech data set.
- Repository: Google Language Resources and Tools
- Paper: Open-source Multi-speaker Corpora of the English Accents in the British Isles
### Dataset Summary
This dataset consists of 31 hours of transcribed high-quality audio of English sentences recorded by 120 volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The speakers self-identified as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.
The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage.
The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words.
Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents.
The data archives were restructured from the original ones from OpenSLR to make it easier to stream.
### Supported Tasks
- 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).
- 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, to download the Irish male config, simply specify the corresponding language config name (i.e., "irish_male" for Irish male speakers):
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
#### *Bonus*
You can create a PyTorch dataloader directly with your own datasets (local/streamed).
Local:
Streaming:
To find out more about loading and preparing audio datasets, head over to URL
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- line_id: unique id of the transcription. The same line id can be found for multiple speakers.
### Data Statistics
!image/png
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
License: (CC BY-SA 4.0 DEED)
### Contributions
Thanks to @ylacombe for adding this dataset.
| [
"# Dataset Card for \"english_dialects\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Statistics\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Crowdsourced high-quality UK and Ireland English Dialect speech data set.\n- Repository: Google Language Resources and Tools\n- Paper: Open-source Multi-speaker Corpora of the English Accents in the British Isles",
"### Dataset Summary\n\nThis dataset consists of 31 hours of transcribed high-quality audio of English sentences recorded by 120 volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The speakers self-identified as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.\n\nThe recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage.\nThe scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. \nOverlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. \n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"### Supported Tasks\n\n- 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n- 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).",
"### How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Irish male config, simply specify the corresponding language config name (i.e., \"irish_male\" for Irish male speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"#### *Bonus*\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\nLocal:\n\n\n\nStreaming:\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.",
"### Data Fields\n\n- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n- text: the transcription of the audio file.\n\n- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.\n\n- line_id: unique id of the transcription. The same line id can be found for multiple speakers.",
"### Data Statistics\n\n\n!image/png",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nLicense: (CC BY-SA 4.0 DEED)",
"### Contributions\n\nThanks to @ylacombe for adding this dataset."
]
| [
"TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #language-English #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for \"english_dialects\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Statistics\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Crowdsourced high-quality UK and Ireland English Dialect speech data set.\n- Repository: Google Language Resources and Tools\n- Paper: Open-source Multi-speaker Corpora of the English Accents in the British Isles",
"### Dataset Summary\n\nThis dataset consists of 31 hours of transcribed high-quality audio of English sentences recorded by 120 volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The speakers self-identified as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.\n\nThe recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage.\nThe scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. \nOverlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. \n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"### Supported Tasks\n\n- 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n- 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).",
"### How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Irish male config, simply specify the corresponding language config name (i.e., \"irish_male\" for Irish male speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"#### *Bonus*\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\nLocal:\n\n\n\nStreaming:\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.",
"### Data Fields\n\n- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n- text: the transcription of the audio file.\n\n- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.\n\n- line_id: unique id of the transcription. The same line id can be found for multiple speakers.",
"### Data Statistics\n\n\n!image/png",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nLicense: (CC BY-SA 4.0 DEED)",
"### Contributions\n\nThanks to @ylacombe for adding this dataset."
]
| [
47,
13,
117,
58,
239,
125,
172,
52,
6,
51,
232,
9,
5,
7,
4,
10,
10,
5,
5,
9,
40,
8,
7,
8,
7,
5,
6,
17,
17
]
| [
"passage: TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #language-English #license-cc-by-sa-4.0 #region-us \n# Dataset Card for \"english_dialects\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Statistics\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Crowdsourced high-quality UK and Ireland English Dialect speech data set.\n- Repository: Google Language Resources and Tools\n- Paper: Open-source Multi-speaker Corpora of the English Accents in the British Isles### Dataset Summary\n\nThis dataset consists of 31 hours of transcribed high-quality audio of English sentences recorded by 120 volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The speakers self-identified as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.\n\nThe recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage.\nThe scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. \nOverlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. \n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"passage: ### Supported Tasks\n\n- 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n- 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).### How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Irish male config, simply specify the corresponding language config name (i.e., \"irish_male\" for Irish male speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.#### *Bonus*\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\nLocal:\n\n\n\nStreaming:\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL## Dataset Structure### Data Instances\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided."
]
|
ebc2d745a2a2331b2c1458cce00dd0482eb82bbf | # Dataset Card for Tamil Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Crowdsourced high-quality Tamil multi-speaker speech data set.](https://www.openslr.org/65/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems](https://aclanthology.org/2020.lrec-1.804/)
### Dataset Summary
This dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/65/) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can be used to train a model for Text-To-Speech (TTS).
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
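For a rough illustration of that metric, WER can be computed with the Hugging Face `evaluate` library; the snippet below is only a sketch (the strings are placeholders, not real model output):

```python
import evaluate  # requires the `evaluate` and `jiwer` packages

wer_metric = evaluate.load("wer")

# Placeholders; in practice `predictions` would come from an ASR model
# transcribing the audio files and `references` from the `text` column.
references = ["the reference transcription"]
predictions = ["the predicted transcription"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2%}")
```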
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the female config, simply specify the corresponding language config name (i.e., "female" for female speakers):
```python
from datasets import load_dataset
dataset =load_dataset("ylacombe/google-tamil", "female", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
dataset =load_dataset("ylacombe/google-tamil", "female", split="train", streaming=True)
print(next(iter(dataset)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = load_dataset("ylacombe/google-tamil", "female", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset =load_dataset("ylacombe/google-tamil", "female", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file called `audio` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'audio': {'path': 'taf_02345_00348037167.wav', 'array': array([-9.15527344e-05, -9.15527344e-05, -1.22070312e-04, ...,
-3.05175781e-05, 0.00000000e+00, 3.05175781e-05]), 'sampling_rate': 48000}, 'text': 'ஆஸ்த்ரேலியப் பெண்ணுக்கு முப்பத்தி மூன்று ஆண்டுகளுக்குப் பின்னர் இந்தியா இழப்பீடு வழங்கியது', 'speaker_id': 2345}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
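To make the note about the `audio` field above concrete, here is a minimal sketch of the preferred access pattern (assuming the `female` config loaded as in the examples above):

```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/google-tamil", "female", split="train")

# Query the sample index first, then the "audio" column: only this one file
# is decoded and resampled.
sample = dataset[0]
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]
transcription = sample["text"]

# dataset["audio"][0] would instead decode every audio file in the split
# before returning the first one, which is much slower.
```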
### Data Statistics
| | Total duration (h) | Average duration (s) | # speakers | # sentences | # total words | # unique words | # total syllables | # unique syllables | # total phonemes | # unique phonemes |
|--------|--------------------|----------------------|------------|-------------|---------------|----------------|-------------------|--------------------|------------------|-------------------|
| Female | 4.01 | 6.18 | 25 | 2,335 | 15,880 | 6,620 | 56,607 | 1,696 | 126,659 | 37 |
| Male | 3.07 | 5.66 | 25 | 1,956 | 13,545 | 6,159 | 48,049 | 1,642 | 107,570 | 37 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
License: ([CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en))
### Citation Information
```
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = "{979-10-95546-34-4},
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset. | ylacombe/google-tamil | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:ta",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-11-25T12:59:49+00:00 | {"language": ["ta"], "license": "cc-by-sa-4.0", "task_categories": ["text-to-speech", "text-to-audio"], "pretty_name": "Tamil Speech", "dataset_info": [{"config_name": "female", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1364555763.88, "num_examples": 2335}], "download_size": 1006094564, "dataset_size": 1364555763.88}, {"config_name": "male", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1064641765.528, "num_examples": 1956}], "download_size": 781072069, "dataset_size": 1064641765.528}], "configs": [{"config_name": "female", "data_files": [{"split": "train", "path": "female/train-*"}]}, {"config_name": "male", "data_files": [{"split": "train", "path": "male/train-*"}]}]} | 2023-11-27T11:37:22+00:00 | []
| [
"ta"
]
| TAGS
#task_categories-text-to-speech #task_categories-text-to-audio #language-Tamil #license-cc-by-sa-4.0 #region-us
| Dataset Card for Tamil Speech
=============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ How to use
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Statistics
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: Crowdsourced high-quality Tamil multi-speaker speech data set.
* Repository: Google Language Resources and Tools
* Paper: Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems
### Dataset Summary
This dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.
The data archives were restructured from the original ones from OpenSLR to make it easier to stream.
### Supported Tasks
* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).
* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\_dataset' function.
For example, to download the female config, simply specify the corresponding language config name (i.e., "female" for female speakers):
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
#### *Bonus*
You can create a PyTorch dataloader directly with your own datasets (local/streamed).
Local:
Streaming:
To find out more about loading and preparing audio datasets, head over to URL
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.
### Data Fields
* audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
* text: the transcription of the audio file.
* speaker\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
### Data Statistics
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
License: (CC BY-SA 4.0 DEED)
### Contributions
Thanks to @ylacombe for adding this dataset.
| [
"### Dataset Summary\n\n\nThis dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.\n\n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"### Supported Tasks\n\n\n* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).",
"### How to use\n\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\\_dataset' function.\n\n\nFor example, to download the female config, simply specify the corresponding language config name (i.e., \"female\" for female speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"#### *Bonus*\n\n\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\n\nLocal:\n\n\nStreaming:\n\n\nTo find out more about loading and preparing audio datasets, head over to URL\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* speaker\\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.",
"### Data Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nLicense: (CC BY-SA 4.0 DEED)",
"### Contributions\n\n\nThanks to @ylacombe for adding this dataset."
]
| [
"TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #language-Tamil #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nThis dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.\n\n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"### Supported Tasks\n\n\n* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).",
"### How to use\n\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\\_dataset' function.\n\n\nFor example, to download the female config, simply specify the corresponding language config name (i.e., \"female\" for female speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"#### *Bonus*\n\n\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\n\nLocal:\n\n\nStreaming:\n\n\nTo find out more about loading and preparing audio datasets, head over to URL\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* speaker\\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.",
"### Data Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nLicense: (CC BY-SA 4.0 DEED)",
"### Contributions\n\n\nThanks to @ylacombe for adding this dataset."
]
| [
47,
65,
125,
170,
59,
51,
210,
11,
7,
4,
10,
10,
5,
5,
9,
50,
7,
8,
14,
6,
17,
17
]
| [
"passage: TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #language-Tamil #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nThis dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.\n\n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.### Supported Tasks\n\n\n* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).### How to use\n\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\\_dataset' function.\n\n\nFor example, to download the female config, simply specify the corresponding language config name (i.e., \"female\" for female speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.#### *Bonus*\n\n\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\n\nLocal:\n\n\nStreaming:\n\n\nTo find out more about loading and preparing audio datasets, head over to URL\n\n\nDataset Structure\n-----------------"
]
|
082619e99a37715ab8db45b0bc8188ba98a80130 | # Dataset Card for Chilean Spanish Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Crowdsourced high-quality Chilean Spanish speech data set.](https://www.openslr.org/71/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech](https://aclanthology.org/2020.lrec-1.801/)
### Dataset Summary
This dataset consists of 7 hours of transcribed high-quality audio of Chilean Spanish sentences recorded by 31 volunteers. The dataset is intended for speech technologies.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/71/) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can be used to train a model for Text-To-Speech (TTS).
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the female config, simply specify the corresponding language config name (i.e., "female" for female speakers):
```python
from datasets import load_dataset
dataset =load_dataset("ylacombe/google-chilean-spanish", "female", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
dataset =load_dataset("ylacombe/google-chilean-spanish", "female", split="train", streaming=True)
print(next(iter(dataset)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = load_dataset("ylacombe/google-chilean-spanish", "female", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset =load_dataset("ylacombe/google-chilean-spanish", "female", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file called `audio` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'audio': {'path': 'clf_09334_01278378087.wav', 'array': array([-9.15527344e-05, -4.57763672e-04, -4.88281250e-04, ...,
1.86157227e-03, 2.10571289e-03, 2.31933594e-03]), 'sampling_rate': 48000}, 'text': 'La vigencia de tu tarjeta es de ocho meses', 'speaker_id': 9334}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
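If a downstream model expects a different sampling rate than the original 48 kHz, the `audio` column can be cast so that decoding resamples on the fly; a minimal sketch (the 16 kHz target is only an example, not a property of this dataset):

```python
from datasets import load_dataset, Audio

dataset = load_dataset("ylacombe/google-chilean-spanish", "female", split="train")

# Resample from 48 kHz to 16 kHz at decoding time.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

sample = dataset[0]
print(sample["audio"]["sampling_rate"])  # 16000
```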
### Data Statistics
| | Total duration (h) | # speakers | # sentences | # total words | # unique words |
|--------|--------------------|------------|-------------|---------------|----------------|
| Female | 2.84 | 13 | 1738 | 16591 | 3279 |
| Male | 4.31 | 18 | 2636 | 25168 | 4171 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
License: ([CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en))
### Citation Information
```
@inproceedings{guevara-rukoz-etal-2020-crowdsourcing,
title = {{Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech}},
author = {Guevara-Rukoz, Adriana and Demirsahin, Isin and He, Fei and Chu, Shan-Hui Cathy and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Gutkin, Alexander and Butryna, Alena and Kjartansson, Oddur},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
year = {2020},
month = may,
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.801},
pages = {6504--6513},
ISBN = {979-10-95546-34-4},
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset. | ylacombe/google-chilean-spanish | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:es",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-11-25T13:05:49+00:00 | {"language": ["es"], "license": "cc-by-sa-4.0", "task_categories": ["text-to-speech", "text-to-audio"], "pretty_name": "Chilean Spanish Speech", "dataset_info": [{"config_name": "female", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 974926631.856, "num_examples": 1738}], "download_size": 762982190, "dataset_size": 974926631.856}, {"config_name": "male", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1472568181.048, "num_examples": 2636}], "download_size": 1133624286, "dataset_size": 1472568181.048}], "configs": [{"config_name": "female", "data_files": [{"split": "train", "path": "female/train-*"}]}, {"config_name": "male", "data_files": [{"split": "train", "path": "male/train-*"}]}]} | 2023-11-27T11:42:55+00:00 | []
| [
"es"
]
| TAGS
#task_categories-text-to-speech #task_categories-text-to-audio #language-Spanish #license-cc-by-sa-4.0 #region-us
| Dataset Card for Chilean Spanish Speech
========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ How to use
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Statistics
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: Crowdsourced high-quality Chilean Spanish speech data set.
* Repository: Google Language Resources and Tools
* Paper: Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech
### Dataset Summary
This dataset consists of 7 hours of transcribed high-quality audio of Chilean Spanish sentences recorded by 31 volunteers. The dataset is intended for speech technologies.
The data archives were restructured from the original ones from OpenSLR to make it easier to stream.
### Supported Tasks
* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).
* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\_dataset' function.
For example, to download the female config, simply specify the corresponding language config name (i.e., "female" for female speakers):
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
#### *Bonus*
You can create a PyTorch dataloader directly with your own datasets (local/streamed).
Local:
Streaming:
To find out more about loading and preparing audio datasets, head over to URL
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.
### Data Fields
* audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
* text: the transcription of the audio file.
* speaker\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
### Data Statistics
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
License: (CC BY-SA 4.0 DEED)
### Contributions
Thanks to @ylacombe for adding this dataset.
| [
"### Dataset Summary\n\n\nThis dataset consists of 7 hours of transcribed high-quality audio of Chilean Spanish sentences recorded by 31 volunteers. The dataset is intended for speech technologies.\n\n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"### Supported Tasks\n\n\n* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).",
"### How to use\n\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\\_dataset' function.\n\n\nFor example, to download the female config, simply specify the corresponding language config name (i.e., \"female\" for female speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"#### *Bonus*\n\n\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\n\nLocal:\n\n\nStreaming:\n\n\nTo find out more about loading and preparing audio datasets, head over to URL\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* speaker\\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.",
"### Data Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nLicense: (CC BY-SA 4.0 DEED)",
"### Contributions\n\n\nThanks to @ylacombe for adding this dataset."
]
| [
"TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #language-Spanish #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nThis dataset consists of 7 hours of transcribed high-quality audio of Chilean Spanish sentences recorded by 31 volunteers. The dataset is intended for speech technologies.\n\n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.",
"### Supported Tasks\n\n\n* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).",
"### How to use\n\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\\_dataset' function.\n\n\nFor example, to download the female config, simply specify the corresponding language config name (i.e., \"female\" for female speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"#### *Bonus*\n\n\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\n\nLocal:\n\n\nStreaming:\n\n\nTo find out more about loading and preparing audio datasets, head over to URL\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the path to the audio file called 'audio' and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* speaker\\_id: unique id of the speaker. The same speaker id can be found for multiple data samples.",
"### Data Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nLicense: (CC BY-SA 4.0 DEED)",
"### Contributions\n\n\nThanks to @ylacombe for adding this dataset."
]
| [
48,
67,
125,
170,
59,
51,
210,
11,
7,
4,
10,
10,
5,
5,
9,
50,
7,
8,
14,
6,
17,
17
]
| [
"passage: TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #language-Spanish #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nThis dataset consists of 7 hours of transcribed high-quality audio of Chilean Spanish sentences recorded by 31 volunteers. The dataset is intended for speech technologies.\n\n\nThe data archives were restructured from the original ones from OpenSLR to make it easier to stream.### Supported Tasks\n\n\n* 'text-to-speech', 'text-to-audio': The dataset can be used to train a model for Text-To-Speech (TTS).\n* 'automatic-speech-recognition', 'speaker-identification': The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).### How to use\n\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load\\_dataset' function.\n\n\nFor example, to download the female config, simply specify the corresponding language config name (i.e., \"female\" for female speakers):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load\\_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.#### *Bonus*\n\n\nYou can create a PyTorch dataloader directly with your own datasets (local/streamed).\n\n\nLocal:\n\n\nStreaming:\n\n\nTo find out more about loading and preparing audio datasets, head over to URL\n\n\nDataset Structure\n-----------------"
]
|
3dd3486622bdd9a284c08f17ffd70f0a368f068f | # | casualdatauser/neet-dataset-mini | [
"language:en",
"license:mit",
"region:us"
]
| 2023-11-25T13:32:17+00:00 | {"language": ["en"], "license": "mit", "pretty_name": "Mini NEET Dataset"} | 2023-11-25T13:34:52+00:00 | []
| [
"en"
]
| TAGS
#language-English #license-mit #region-us
| # | []
| [
"TAGS\n#language-English #license-mit #region-us \n"
]
| [
15
]
| [
"passage: TAGS\n#language-English #license-mit #region-us \n"
]
|
72d11790cfcefbdb879e2b065d3ed9664927777d | # Dataset Card for "fake_review_hedi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | astrosbd/fake_review_hedi | [
"region:us"
]
| 2023-11-25T13:57:45+00:00 | {"dataset_info": {"features": [{"name": "cat", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "review", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15867393, "num_examples": 40432}], "download_size": 8285372, "dataset_size": 15867393}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-25T13:57:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fake_review_hedi"
More Information needed | [
"# Dataset Card for \"fake_review_hedi\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fake_review_hedi\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fake_review_hedi\"\n\nMore Information needed"
]
|
d8f4a5a0a3fb50f7e73764fe27bdc281e074a014 |
# Dataset Card for Evaluation run of NurtureAI/Orca-2-13B-16k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NurtureAI/Orca-2-13B-16k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NurtureAI/Orca-2-13B-16k](https://huggingface.co/NurtureAI/Orca-2-13B-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
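To see which of those configurations are available before loading one, a minimal sketch (using the same public repository name as the loading example below):

```python
from datasets import get_dataset_config_names

# One configuration per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names("open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k_public")
print(len(configs), configs[:5])
```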
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-25T14:56:50.761859](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k_public/blob/main/results_2023-11-25T14-56-50.761859.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4096720745858261,
"acc_stderr": 0.034203032603114795,
"acc_norm": 0.41715801816297365,
"acc_norm_stderr": 0.03505952667633131,
"mc1": 0.29253365973072215,
"mc1_stderr": 0.015925597445286165,
"mc2": 0.45298090995110557,
"mc2_stderr": 0.015831655887070334,
"em": 0.2791526845637584,
"em_stderr": 0.004593906993460012,
"f1": 0.3252799916107391,
"f1_stderr": 0.004576434040922838
},
"harness|arc:challenge|25": {
"acc": 0.48464163822525597,
"acc_stderr": 0.014604496129394911,
"acc_norm": 0.5366894197952219,
"acc_norm_stderr": 0.01457200052775699
},
"harness|hellaswag|10": {
"acc": 0.5056761601274646,
"acc_stderr": 0.004989459871609183,
"acc_norm": 0.6947819159529974,
"acc_norm_stderr": 0.004595586027583791
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.04171654161354543,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.04171654161354543
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4868421052631579,
"acc_stderr": 0.04067533136309174,
"acc_norm": 0.4868421052631579,
"acc_norm_stderr": 0.04067533136309174
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4528301886792453,
"acc_stderr": 0.03063562795796182,
"acc_norm": 0.4528301886792453,
"acc_norm_stderr": 0.03063562795796182
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4097222222222222,
"acc_stderr": 0.04112490974670787,
"acc_norm": 0.4097222222222222,
"acc_norm_stderr": 0.04112490974670787
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117317,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117317
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3815028901734104,
"acc_stderr": 0.037038511930995215,
"acc_norm": 0.3815028901734104,
"acc_norm_stderr": 0.037038511930995215
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.04488482852329017,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.04488482852329017
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.34893617021276596,
"acc_stderr": 0.03115852213135778,
"acc_norm": 0.34893617021276596,
"acc_norm_stderr": 0.03115852213135778
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.0433913832257986,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.0433913832257986
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.3931034482758621,
"acc_stderr": 0.040703290137070705,
"acc_norm": 0.3931034482758621,
"acc_norm_stderr": 0.040703290137070705
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2830687830687831,
"acc_stderr": 0.023201392938194974,
"acc_norm": 0.2830687830687831,
"acc_norm_stderr": 0.023201392938194974
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.03670066451047181,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.03670066451047181
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.41935483870967744,
"acc_stderr": 0.028071588901091845,
"acc_norm": 0.41935483870967744,
"acc_norm_stderr": 0.028071588901091845
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.031270907132976984,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.031270907132976984
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.593939393939394,
"acc_stderr": 0.03834816355401181,
"acc_norm": 0.593939393939394,
"acc_norm_stderr": 0.03834816355401181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.48484848484848486,
"acc_stderr": 0.0356071651653106,
"acc_norm": 0.48484848484848486,
"acc_norm_stderr": 0.0356071651653106
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.538860103626943,
"acc_stderr": 0.035975244117345775,
"acc_norm": 0.538860103626943,
"acc_norm_stderr": 0.035975244117345775
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3153846153846154,
"acc_stderr": 0.02355964698318994,
"acc_norm": 0.3153846153846154,
"acc_norm_stderr": 0.02355964698318994
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.025348097468097856,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.025348097468097856
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.37815126050420167,
"acc_stderr": 0.03149930577784906,
"acc_norm": 0.37815126050420167,
"acc_norm_stderr": 0.03149930577784906
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.03631329803969653,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.03631329803969653
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5082568807339449,
"acc_stderr": 0.021434399918214338,
"acc_norm": 0.5082568807339449,
"acc_norm_stderr": 0.021434399918214338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.030225226160012383,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.030225226160012383
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.034849415144292316,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.034849415144292316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6329113924050633,
"acc_stderr": 0.031376240725616185,
"acc_norm": 0.6329113924050633,
"acc_norm_stderr": 0.031376240725616185
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.47085201793721976,
"acc_stderr": 0.03350073248773403,
"acc_norm": 0.47085201793721976,
"acc_norm_stderr": 0.03350073248773403
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.4580152671755725,
"acc_stderr": 0.04369802690578757,
"acc_norm": 0.4580152671755725,
"acc_norm_stderr": 0.04369802690578757
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5619834710743802,
"acc_stderr": 0.04529146804435792,
"acc_norm": 0.5619834710743802,
"acc_norm_stderr": 0.04529146804435792
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.04826217294139892,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.04826217294139892
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.36809815950920244,
"acc_stderr": 0.03789213935838396,
"acc_norm": 0.36809815950920244,
"acc_norm_stderr": 0.03789213935838396
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285715,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285715
},
"harness|hendrycksTest-management|5": {
"acc": 0.4174757281553398,
"acc_stderr": 0.04882840548212238,
"acc_norm": 0.4174757281553398,
"acc_norm_stderr": 0.04882840548212238
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6282051282051282,
"acc_stderr": 0.031660988918880785,
"acc_norm": 0.6282051282051282,
"acc_norm_stderr": 0.031660988918880785
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.4878671775223499,
"acc_stderr": 0.01787469866749134,
"acc_norm": 0.4878671775223499,
"acc_norm_stderr": 0.01787469866749134
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4653179190751445,
"acc_stderr": 0.026854257928258893,
"acc_norm": 0.4653179190751445,
"acc_norm_stderr": 0.026854257928258893
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.30502793296089387,
"acc_stderr": 0.015398723510916715,
"acc_norm": 0.30502793296089387,
"acc_norm_stderr": 0.015398723510916715
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.3954248366013072,
"acc_stderr": 0.027996723180631455,
"acc_norm": 0.3954248366013072,
"acc_norm_stderr": 0.027996723180631455
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.40514469453376206,
"acc_stderr": 0.02788238379132595,
"acc_norm": 0.40514469453376206,
"acc_norm_stderr": 0.02788238379132595
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.027648477877413327,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.027648477877413327
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3120567375886525,
"acc_stderr": 0.02764012054516993,
"acc_norm": 0.3120567375886525,
"acc_norm_stderr": 0.02764012054516993
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3455019556714472,
"acc_stderr": 0.012145303004087206,
"acc_norm": 0.3455019556714472,
"acc_norm_stderr": 0.012145303004087206
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.3713235294117647,
"acc_stderr": 0.02934980313976587,
"acc_norm": 0.3713235294117647,
"acc_norm_stderr": 0.02934980313976587
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.41830065359477125,
"acc_stderr": 0.01995597514583554,
"acc_norm": 0.41830065359477125,
"acc_norm_stderr": 0.01995597514583554
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4727272727272727,
"acc_stderr": 0.04782001791380063,
"acc_norm": 0.4727272727272727,
"acc_norm_stderr": 0.04782001791380063
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5591836734693878,
"acc_stderr": 0.03178419114175363,
"acc_norm": 0.5591836734693878,
"acc_norm_stderr": 0.03178419114175363
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.527363184079602,
"acc_stderr": 0.035302355173346824,
"acc_norm": 0.527363184079602,
"acc_norm_stderr": 0.035302355173346824
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-virology|5": {
"acc": 0.40963855421686746,
"acc_stderr": 0.03828401115079022,
"acc_norm": 0.40963855421686746,
"acc_norm_stderr": 0.03828401115079022
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.038057975055904594,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.038057975055904594
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29253365973072215,
"mc1_stderr": 0.015925597445286165,
"mc2": 0.45298090995110557,
"mc2_stderr": 0.015831655887070334
},
"harness|winogrande|5": {
"acc": 0.6006314127861089,
"acc_stderr": 0.013764933546717614
},
"harness|drop|3": {
"em": 0.2791526845637584,
"em_stderr": 0.004593906993460012,
"f1": 0.3252799916107391,
"f1_stderr": 0.004576434040922838
},
"harness|gsm8k|5": {
"acc": 0.01819560272934041,
"acc_stderr": 0.0036816118940738727
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k | [
"region:us"
]
| 2023-11-25T14:59:55+00:00 | {"pretty_name": "Evaluation run of NurtureAI/Orca-2-13B-16k", "dataset_summary": "Dataset automatically created during the evaluation run of model [NurtureAI/Orca-2-13B-16k](https://huggingface.co/NurtureAI/Orca-2-13B-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-25T14:56:50.761859](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k_public/blob/main/results_2023-11-25T14-56-50.761859.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4096720745858261,\n \"acc_stderr\": 0.034203032603114795,\n \"acc_norm\": 0.41715801816297365,\n \"acc_norm_stderr\": 0.03505952667633131,\n \"mc1\": 0.29253365973072215,\n \"mc1_stderr\": 0.015925597445286165,\n \"mc2\": 0.45298090995110557,\n \"mc2_stderr\": 0.015831655887070334,\n \"em\": 0.2791526845637584,\n \"em_stderr\": 0.004593906993460012,\n \"f1\": 0.3252799916107391,\n \"f1_stderr\": 0.004576434040922838\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.48464163822525597,\n \"acc_stderr\": 0.014604496129394911,\n \"acc_norm\": 0.5366894197952219,\n \"acc_norm_stderr\": 0.01457200052775699\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5056761601274646,\n \"acc_stderr\": 0.004989459871609183,\n \"acc_norm\": 0.6947819159529974,\n \"acc_norm_stderr\": 0.004595586027583791\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.04171654161354543,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.04171654161354543\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4868421052631579,\n \"acc_stderr\": 0.04067533136309174,\n \"acc_norm\": 0.4868421052631579,\n \"acc_norm_stderr\": 0.04067533136309174\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.4528301886792453,\n \"acc_stderr\": 0.03063562795796182,\n \"acc_norm\": 0.4528301886792453,\n \"acc_norm_stderr\": 0.03063562795796182\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4097222222222222,\n \"acc_stderr\": 0.04112490974670787,\n \"acc_norm\": 0.4097222222222222,\n 
\"acc_norm_stderr\": 0.04112490974670787\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117317,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117317\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3815028901734104,\n \"acc_stderr\": 0.037038511930995215,\n \"acc_norm\": 0.3815028901734104,\n \"acc_norm_stderr\": 0.037038511930995215\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.34893617021276596,\n \"acc_stderr\": 0.03115852213135778,\n \"acc_norm\": 0.34893617021276596,\n \"acc_norm_stderr\": 0.03115852213135778\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n \"acc_stderr\": 0.0433913832257986,\n \"acc_norm\": 0.30701754385964913,\n \"acc_norm_stderr\": 0.0433913832257986\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.3931034482758621,\n \"acc_stderr\": 0.040703290137070705,\n \"acc_norm\": 0.3931034482758621,\n \"acc_norm_stderr\": 0.040703290137070705\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2830687830687831,\n \"acc_stderr\": 0.023201392938194974,\n \"acc_norm\": 0.2830687830687831,\n \"acc_norm_stderr\": 0.023201392938194974\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.21428571428571427,\n \"acc_stderr\": 0.03670066451047181,\n \"acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.03670066451047181\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.41935483870967744,\n \"acc_stderr\": 0.028071588901091845,\n \"acc_norm\": 0.41935483870967744,\n \"acc_norm_stderr\": 0.028071588901091845\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132976984,\n \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132976984\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.593939393939394,\n \"acc_stderr\": 0.03834816355401181,\n \"acc_norm\": 0.593939393939394,\n \"acc_norm_stderr\": 0.03834816355401181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.48484848484848486,\n \"acc_stderr\": 0.0356071651653106,\n \"acc_norm\": 0.48484848484848486,\n \"acc_norm_stderr\": 0.0356071651653106\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.538860103626943,\n \"acc_stderr\": 
0.035975244117345775,\n \"acc_norm\": 0.538860103626943,\n \"acc_norm_stderr\": 0.035975244117345775\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.3153846153846154,\n \"acc_stderr\": 0.02355964698318994,\n \"acc_norm\": 0.3153846153846154,\n \"acc_norm_stderr\": 0.02355964698318994\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.025348097468097856,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.025348097468097856\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.37815126050420167,\n \"acc_stderr\": 0.03149930577784906,\n \"acc_norm\": 0.37815126050420167,\n \"acc_norm_stderr\": 0.03149930577784906\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.271523178807947,\n \"acc_stderr\": 0.03631329803969653,\n \"acc_norm\": 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969653\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5082568807339449,\n \"acc_stderr\": 0.021434399918214338,\n \"acc_norm\": 0.5082568807339449,\n \"acc_norm_stderr\": 0.021434399918214338\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.26851851851851855,\n \"acc_stderr\": 0.030225226160012383,\n \"acc_norm\": 0.26851851851851855,\n \"acc_norm_stderr\": 0.030225226160012383\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.034849415144292316,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.034849415144292316\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6329113924050633,\n \"acc_stderr\": 0.031376240725616185,\n \"acc_norm\": 0.6329113924050633,\n \"acc_norm_stderr\": 0.031376240725616185\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.47085201793721976,\n \"acc_stderr\": 0.03350073248773403,\n \"acc_norm\": 0.47085201793721976,\n \"acc_norm_stderr\": 0.03350073248773403\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.4580152671755725,\n \"acc_stderr\": 0.04369802690578757,\n \"acc_norm\": 0.4580152671755725,\n \"acc_norm_stderr\": 0.04369802690578757\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5619834710743802,\n \"acc_stderr\": 0.04529146804435792,\n \"acc_norm\": 0.5619834710743802,\n \"acc_norm_stderr\": 0.04529146804435792\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.04826217294139892,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.04826217294139892\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.36809815950920244,\n \"acc_stderr\": 0.03789213935838396,\n \"acc_norm\": 0.36809815950920244,\n \"acc_norm_stderr\": 0.03789213935838396\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n \"acc_stderr\": 0.04464285714285715,\n \"acc_norm\": 0.33035714285714285,\n \"acc_norm_stderr\": 0.04464285714285715\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.4174757281553398,\n \"acc_stderr\": 0.04882840548212238,\n \"acc_norm\": 0.4174757281553398,\n \"acc_norm_stderr\": 0.04882840548212238\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6282051282051282,\n \"acc_stderr\": 0.031660988918880785,\n \"acc_norm\": 0.6282051282051282,\n \"acc_norm_stderr\": 0.031660988918880785\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n 
\"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.4878671775223499,\n \"acc_stderr\": 0.01787469866749134,\n \"acc_norm\": 0.4878671775223499,\n \"acc_norm_stderr\": 0.01787469866749134\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4653179190751445,\n \"acc_stderr\": 0.026854257928258893,\n \"acc_norm\": 0.4653179190751445,\n \"acc_norm_stderr\": 0.026854257928258893\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30502793296089387,\n \"acc_stderr\": 0.015398723510916715,\n \"acc_norm\": 0.30502793296089387,\n \"acc_norm_stderr\": 0.015398723510916715\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.3954248366013072,\n \"acc_stderr\": 0.027996723180631455,\n \"acc_norm\": 0.3954248366013072,\n \"acc_norm_stderr\": 0.027996723180631455\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.40514469453376206,\n \"acc_stderr\": 0.02788238379132595,\n \"acc_norm\": 0.40514469453376206,\n \"acc_norm_stderr\": 0.02788238379132595\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.027648477877413327,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.027648477877413327\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3120567375886525,\n \"acc_stderr\": 0.02764012054516993,\n \"acc_norm\": 0.3120567375886525,\n \"acc_norm_stderr\": 0.02764012054516993\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3455019556714472,\n \"acc_stderr\": 0.012145303004087206,\n \"acc_norm\": 0.3455019556714472,\n \"acc_norm_stderr\": 0.012145303004087206\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3713235294117647,\n \"acc_stderr\": 0.02934980313976587,\n \"acc_norm\": 0.3713235294117647,\n \"acc_norm_stderr\": 0.02934980313976587\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.41830065359477125,\n \"acc_stderr\": 0.01995597514583554,\n \"acc_norm\": 0.41830065359477125,\n \"acc_norm_stderr\": 0.01995597514583554\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4727272727272727,\n \"acc_stderr\": 0.04782001791380063,\n \"acc_norm\": 0.4727272727272727,\n \"acc_norm_stderr\": 0.04782001791380063\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5591836734693878,\n \"acc_stderr\": 0.03178419114175363,\n \"acc_norm\": 0.5591836734693878,\n \"acc_norm_stderr\": 0.03178419114175363\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.527363184079602,\n \"acc_stderr\": 0.035302355173346824,\n \"acc_norm\": 0.527363184079602,\n \"acc_norm_stderr\": 0.035302355173346824\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.40963855421686746,\n \"acc_stderr\": 0.03828401115079022,\n \"acc_norm\": 0.40963855421686746,\n \"acc_norm_stderr\": 0.03828401115079022\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.038057975055904594,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.038057975055904594\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29253365973072215,\n \"mc1_stderr\": 0.015925597445286165,\n \"mc2\": 0.45298090995110557,\n \"mc2_stderr\": 0.015831655887070334\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6006314127861089,\n \"acc_stderr\": 0.013764933546717614\n 
},\n \"harness|drop|3\": {\n \"em\": 0.2791526845637584,\n \"em_stderr\": 0.004593906993460012,\n \"f1\": 0.3252799916107391,\n \"f1_stderr\": 0.004576434040922838\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01819560272934041,\n \"acc_stderr\": 0.0036816118940738727\n }\n}\n```", "repo_url": "https://huggingface.co/NurtureAI/Orca-2-13B-16k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|arc:challenge|25_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|drop|3_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|gsm8k|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hellaswag|10_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T14-56-50.761859.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T14-56-50.761859.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T14-56-50.761859.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T14-56-50.761859.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T14-56-50.761859.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["**/details_harness|winogrande|5_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-25T14-56-50.761859.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_25T14_56_50.761859", "path": ["results_2023-11-25T14-56-50.761859.parquet"]}, {"split": "latest", "path": ["results_2023-11-25T14-56-50.761859.parquet"]}]}]} | 2023-11-25T15:00:43+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of NurtureAI/Orca-2-13B-16k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NurtureAI/Orca-2-13B-16k on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
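A minimal loading example (this is the snippet given in the dataset metadata; `harness_winogrande_5` is just one of the 64 configurations and can be replaced by any other task name):

```python
from datasets import load_dataset

# Load the per-sample details of one task configuration for this evaluation run
data = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k_public",
    "harness_winogrande_5",
    split="train",
)
```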
## Latest results
These are the latest results from run 2023-11-25T14:56:50.761859 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of NurtureAI/Orca-2-13B-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Orca-2-13B-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T14:56:50.761859(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NurtureAI/Orca-2-13B-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Orca-2-13B-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T14:56:50.761859(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NurtureAI/Orca-2-13B-16k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Orca-2-13B-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-25T14:56:50.761859(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
50fd5ee546a571cb455ffb8e93fd22906797505d |
# Ordalie - French STS Benchmark
- 30k sentence pairs
- Score either 0 or 1 | OrdalieTech/Ordalie-FR-STS-benchmark | [
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"language:fr",
"license:apache-2.0",
"region:us"
]
| 2023-11-25T15:01:49+00:00 | {"language": ["fr"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["feature-extraction"], "pretty_name": "ordalie-fr-sts-benchmark", "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 14934570, "num_examples": 10000}], "download_size": 9328832, "dataset_size": 14934570}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-11-27T17:32:55+00:00 | []
| [
"fr"
]
| TAGS
#task_categories-feature-extraction #size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us
|
# Ordalie - French STS Benchmark
- 30k sentence pairs
- Score either 0 or 1 | [
"# Ordalie - French STS Benchmark\n\n- 30k sentence pairs\n- Score either 0 or 1"
]
| [
"TAGS\n#task_categories-feature-extraction #size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us \n",
"# Ordalie - French STS Benchmark\n\n- 30k sentence pairs\n- Score either 0 or 1"
]
| [
44,
23
]
| [
"passage: TAGS\n#task_categories-feature-extraction #size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us \n# Ordalie - French STS Benchmark\n\n- 30k sentence pairs\n- Score either 0 or 1"
]
|
a7b6d0115fdd9386b4a78d382a60af4e0238abed |
This is a Wikipedia passages dataset for ODQA retrievers.
Each passage has ~256 tokens, split with the GPT-4 tokenizer using tiktoken.
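A minimal sketch of how such per-passage token counts can be reproduced (assumption: the original splitting script is not included here; only the tokenizer choice is stated above):
```python
import tiktoken

# GPT-4 resolves to the cl100k_base encoding.
enc = tiktoken.encoding_for_model("gpt-4")

def count_tokens(text: str) -> int:
    """Number of GPT-4 tokens in `text`, as used for the histograms below."""
    return len(enc.encode(text))

print(count_tokens("Wikipedia passages are chunked to roughly 256 tokens each."))
```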
Token count
```ts
{'~128': 1415068, '128~256': 1290011,
'256~512': 18756476, '512~1024': 667,
'1024~2048': 12, '2048~4096': 0, '4096~8192': 0,
'8192~16384': 0, '16384~32768': 0, '32768~65536': 0,
'65536~128000': 0, '128000~': 0}
```
Text count
```ts
{'~512': 1556876,'512~1024': 6074975, '1024~2048': 13830329,
'2048~4096': 49, '4096~8192': 2, '8192~16384': 3, '16384~32768': 0,
'32768~65536': 0, '65536~': 0}
```
Token percent
```ts
{'~128': '6.59%', '128~256': '6.01%', '256~512': '87.39%',
'512~1024': '0.00%', '1024~2048': '0.00%', '2048~4096': '0.00%',
'4096~8192': '0.00%', '8192~16384': '0.00%', '16384~32768': '0.00%',
'32768~65536': '0.00%', '65536~128000': '0.00%', '128000~': '0.00%'}
```
Text percent
```ts
{'~512': '7.25%', '512~1024': '28.31%', '1024~2048': '64.44%',
'2048~4096': '0.00%', '4096~8192': '0.00%', '8192~16384': '0.00%',
'16384~32768': '0.00%', '32768~65536': '0.00%', '65536~': '0.00%'}
``` | seonglae/wikipedia-256-token | [
"region:us"
]
| 2023-11-25T15:09:11+00:00 | {"dataset_info": {"config_name": "gpt-4", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "token_length", "dtype": "int64"}, {"name": "text_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23230980331, "num_examples": 21462234}], "download_size": 12219882718, "dataset_size": 23230980331}, "configs": [{"config_name": "gpt-4", "data_files": [{"split": "train", "path": "gpt-4/train-*"}]}]} | 2023-11-26T15:57:51+00:00 | []
| []
| TAGS
#region-us
|
This is a Wikipedia passages dataset for ODQA retrievers.
Each passage has ~256 tokens, split with the GPT-4 tokenizer using tiktoken.
Token count
Text count
Token percent
Text percent
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
dd4582f1904d726cf8b5ba9d29759d83eb704e78 | # speechocean762: A non-native English corpus for pronunciation scoring task
## Introduction
Pronunciation scoring is a crucial technology in computer-assisted language learning (CALL) systems. The pronunciation quality scores might be given at phoneme-level, word-level, and sentence-level for a typical pronunciation scoring task.
This corpus aims to provide a free public dataset for the pronunciation scoring task.
Key features:
* It is available for free download for both commercial and non-commercial purposes.
* The speaker variety encompasses young children and adults.
* The manual annotations are in multiple aspects at sentence-level, word-level and phoneme-level.
This corpus consists of 5000 English sentences. All the speakers are non-native, and their mother tongue is Mandarin. Half of the speakers are children, and the others are adults. Information on age and gender is provided.
Five experts made the scores. To avoid subjective bias, each expert scores independently under the same metric.
## Uses
```python
>>> from datasets import load_dataset
>>> test_set = load_dataset("mispeech/speechocean762", split="test")
>>> len(test_set)
2500
>>> next(iter(test_set))
{'accuracy': 9,
'completeness': 10.0,
'fluency': 9,
'prosodic': 9,
'text': 'MARK IS GOING TO SEE ELEPHANT',
'total': 9,
'words': [{'accuracy': 10,
'phones': ['M', 'AA0', 'R', 'K'],
'phones-accuracy': [2.0, 2.0, 1.8, 2.0],
'stress': 10,
'text': 'MARK',
'total': 10,
'mispronunciations': []},
{'accuracy': 10,
'phones': ['IH0', 'Z'],
'phones-accuracy': [2.0, 1.8],
'stress': 10,
'text': 'IS',
'total': 10,
'mispronunciations': []},
{'accuracy': 10,
'phones': ['G', 'OW0', 'IH0', 'NG'],
'phones-accuracy': [2.0, 2.0, 2.0, 2.0],
'stress': 10,
'text': 'GOING',
'total': 10,
'mispronunciations': []},
{'accuracy': 10,
'phones': ['T', 'UW0'],
'phones-accuracy': [2.0, 2.0],
'stress': 10,
'text': 'TO',
'total': 10,
'mispronunciations': []},
{'accuracy': 10,
'phones': ['S', 'IY0'],
'phones-accuracy': [2.0, 2.0],
'stress': 10,
'text': 'SEE',
'total': 10,
'mispronunciations': []},
{'accuracy': 10,
'phones': ['EH1', 'L', 'IH0', 'F', 'AH0', 'N', 'T'],
'phones-accuracy': [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0],
'stress': 10,
'text': 'ELEPHANT',
'total': 10,
'mispronunciations': []}],
'speaker': '0003',
'gender': 'm',
'age': 6,
'audio': {'path': '000030012.wav',
'array': array([-0.00119019, -0.00500488, -0.00283813, ..., 0.00274658,
0. , 0.00125122]),
'sampling_rate': 16000}}
```
## The scoring metric
The experts score at three levels: phoneme-level, word-level, and sentence-level.
### Sentence level
Score the accuracy, fluency, completeness and prosodic at the sentence level.
#### Accuracy
Score range: 0 - 10
* 9-10: The overall pronunciation of the sentence is excellent, with accurate phonology and no obvious pronunciation mistakes
* 7-8: The overall pronunciation of the sentence is good, with a few pronunciation mistakes
* 5-6: The overall pronunciation of the sentence is understandable, with many pronunciation mistakes and accent, but it does not affect the understanding of basic meanings
* 3-4: Poor, clumsy and rigid pronunciation of the sentence as a whole, with serious pronunciation mistakes
* 0-2: Extremely poor pronunciation and only one or two words are recognizable
#### Completeness
Score range: 0.0 - 1.0
The percentage of the words with good pronunciation.
#### Fluency
Score range: 0 - 10
* 8-10: Fluent without noticeable pauses or stammering
* 6-7: Fluent in general, with a few pauses, repetition, and stammering
* 4-5: The speech is somewhat disfluent, with many pauses, repetition, and stammering
* 0-3: Intermittent, very disfluent speech, with lots of pauses, repetition, and stammering
#### Prosodic
Score range: 0 - 10
* 9-10: Correct intonation at a stable speaking speed, speak with cadence, and can speak like a native
* 7-8: Nearly correct intonation at a stable speaking speed, nearly smooth and coherent, but with little stammering and few pauses
* 5-6: Unstable speech speed, many stammering and pauses with a poor sense of rhythm
* 3-4: Unstable speech speed, speak too fast or too slow, without the sense of rhythm
* 0-2: Poor intonation and lots of stammering and pauses, unable to read a complete sentence
### Word level
Score the accuracy and stress of each word's pronunciation.
#### Accuracy
Score range: 0 - 10
* 10: The pronunciation of the word is perfect
* 7-9: Most phones in this word are pronounced correctly but have accents
* 4-6: Less than 30% of phones in this word are wrongly pronounced
* 2-3: More than 30% of phones in this word are wrongly pronounced. In another case, the word is mispronounced as some other word. For example, the student mispronounced the word "bag" as "bike"
* 1: The pronunciation is hard to distinguish
* 0: no voice
#### Stress
Score range: {5, 10}
* 10: The stress is correct, or this is a mono-syllable word
* 5: The stress is wrong
### Phoneme level
Score the pronunciation goodness of each phoneme within the words.
Score range: 0-2
* 2: pronunciation is correct
* 1: pronunciation is right but has a heavy accent
* 0: pronunciation is incorrect or missed
For phones with an accuracy score lower than 0.5, an extra "mispronunciations" entry indicates the phoneme that was most likely pronounced in place of the canonical one.
An example:
```json
{
"text": "LISA",
"accuracy": 5,
"phones": ["L", "IY1", "S", "AH0"],
"phones-accuracy": [0.4, 2, 2, 1.2],
"mispronunciations": [
{
"canonical-phone": "L",
"index": 0,
"pronounced-phone": "D"
}
],
"stress": 10,
"total": 6
}
```
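A rough sketch of how these annotations can be consumed (field names are taken from the example in the Uses section above):
```python
from datasets import load_dataset

test_set = load_dataset("mispeech/speechocean762", split="test")
test_set = test_set.remove_columns("audio")  # skip audio decoding for this pass

# Collect every annotated substitution: (word, canonical phone, phone actually pronounced)
substitutions = []
for sample in test_set:
    for word in sample["words"]:
        for m in word["mispronunciations"]:
            substitutions.append(
                (word["text"], m["canonical-phone"], m["pronounced-phone"])
            )

print(len(substitutions), "annotated mispronunciations")
print(substitutions[:5])
```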
## Citation
Please cite our paper if you find this work useful:
```bibtext
@inproceedings{speechocean762,
title={speechocean762: An Open-Source Non-native English Speech Corpus For Pronunciation Assessment},
booktitle={Proc. Interspeech 2021},
year=2021,
author={Junbo Zhang, Zhiwen Zhang, Yongqing Wang, Zhiyong Yan, Qiong Song, Yukai Huang, Ke Li, Daniel Povey, Yujun Wang}
}
```
| mispeech/speechocean762 | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"pronunciation-scoring",
"region:us"
]
| 2023-11-25T15:50:48+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "speechocean762", "tags": ["pronunciation-scoring"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "total", "dtype": "int64"}, {"name": "words", "list": [{"name": "accuracy", "dtype": "int64"}, {"name": "phones", "sequence": "string"}, {"name": "phones-accuracy", "sequence": "float64"}, {"name": "stress", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "total", "dtype": "int64"}, {"name": "mispronunciations", "list": [{"name": "canonical-phone", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "pronounced-phone", "dtype": "string"}]}]}, {"name": "speaker", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "age", "dtype": "int64"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 291617098.0, "num_examples": 2500}, {"name": "test", "num_bytes": 289610485.0, "num_examples": 2500}], "download_size": 611820406, "dataset_size": 581227583.0}} | 2024-01-17T03:52:45+00:00 | []
| [
"en"
]
| TAGS
#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #pronunciation-scoring #region-us
| # speechocean762: A non-native English corpus for pronunciation scoring task
## Introduction
Pronunciation scoring is a crucial technology in computer-assisted language learning (CALL) systems. The pronunciation quality scores might be given at phoneme-level, word-level, and sentence-level for a typical pronunciation scoring task.
This corpus aims to provide a free public dataset for the pronunciation scoring task.
Key features:
* It is available for free download for both commercial and non-commercial purposes.
* The speaker variety encompasses young children and adults.
* The manual annotations are in multiple aspects at sentence-level, word-level and phoneme-level.
This corpus consists of 5000 English sentences. All the speakers are non-native, and their mother tongue is Mandarin. Half of the speakers are children, and the others are adults. Information on age and gender is provided.
Five experts made the scores. To avoid subjective bias, each expert scores independently under the same metric.
## Uses
## The scoring metric
The experts score at three levels: phoneme-level, word-level, and sentence-level.
### Sentence level
Score the accuracy, fluency, completeness and prosodic at the sentence level.
#### Accuracy
Score range: 0 - 10
* 9-10: The overall pronunciation of the sentence is excellent, with accurate phonology and no obvious pronunciation mistakes
* 7-8: The overall pronunciation of the sentence is good, with a few pronunciation mistakes
* 5-6: The overall pronunciation of the sentence is understandable, with many pronunciation mistakes and accent, but it does not affect the understanding of basic meanings
* 3-4: Poor, clumsy and rigid pronunciation of the sentence as a whole, with serious pronunciation mistakes
* 0-2: Extremely poor pronunciation and only one or two words are recognizable
#### Completeness
Score range: 0.0 - 1.0
The percentage of the words with good pronunciation.
#### Fluency
Score range: 0 - 10
* 8-10: Fluent without noticeable pauses or stammering
* 6-7: Fluent in general, with a few pauses, repetition, and stammering
* 4-5: The speech is somewhat disfluent, with many pauses, repetition, and stammering
* 0-3: Intermittent, very disfluent speech, with lots of pauses, repetition, and stammering
#### Prosodic
Score range: 0 - 10
* 9-10: Correct intonation at a stable speaking speed, speak with cadence, and can speak like a native
* 7-8: Nearly correct intonation at a stable speaking speed, nearly smooth and coherent, but with little stammering and few pauses
* 5-6: Unstable speech speed, many stammering and pauses with a poor sense of rhythm
* 3-4: Unstable speech speed, speak too fast or too slow, without the sense of rhythm
* 0-2: Poor intonation and lots of stammering and pauses, unable to read a complete sentence
### Word level
Score the accuracy and stress of each word's pronunciation.
#### Accuracy
Score range: 0 - 10
* 10: The pronunciation of the word is perfect
* 7-9: Most phones in this word are pronounced correctly but have accents
* 4-6: Less than 30% of phones in this word are wrongly pronounced
* 2-3: More than 30% of phones in this word are wrongly pronounced. In another case, the word is mispronounced as some other word. For example, the student mispronounced the word "bag" as "bike"
* 1: The pronunciation is hard to distinguish
* 0: no voice
#### Stress
Score range: {5, 10}
* 10: The stress is correct, or this is a mono-syllable word
* 5: The stress is wrong
### Phoneme level
Score the pronunciation goodness of each phoneme within the words.
Score range: 0-2
* 2: pronunciation is correct
* 1: pronunciation is right but has a heavy accent
* 0: pronunciation is incorrect or missed
For phones with an accuracy score lower than 0.5, an extra "mispronunciations" entry indicates the phoneme that was most likely pronounced in place of the canonical one.
An example:
Please cite our paper if you find this work useful:
| [
"# speechocean762: A non-native English corpus for pronunciation scoring task",
"## Introduction\nPronunciation scoring is a crucial technology in computer-assisted language learning (CALL) systems. The pronunciation quality scores might be given at phoneme-level, word-level, and sentence-level for a typical pronunciation scoring task.\n\nThis corpus aims to provide a free public dataset for the pronunciation scoring task.\nKey features:\n* It is available for free download for both commercial and non-commercial purposes.\n* The speaker variety encompasses young children and adults.\n* The manual annotations are in multiple aspects at sentence-level, word-level and phoneme-level.\n\nThis corpus consists of 5000 English sentences. All the speakers are non-native, and their mother tongue is Mandarin. Half of the speakers are Children, and the others are adults. The information of age and gender are provided.\n\nFive experts made the scores. To avoid subjective bias, each expert scores independently under the same metric.",
"## Uses",
"## The scoring metric\nThe experts score at three levels: phoneme-level, word-level, and sentence-level.",
"### Sentence level\nScore the accuracy, fluency, completeness and prosodic at the sentence level.",
"#### Accuracy\nScore range: 0 - 10\n* 9-10: The overall pronunciation of the sentence is excellent, with accurate phonology and no obvious pronunciation mistakes\n* 7-8: The overall pronunciation of the sentence is good, with a few pronunciation mistakes\n* 5-6: The overall pronunciation of the sentence is understandable, with many pronunciation mistakes and accent, but it does not affect the understanding of basic meanings\n* 3-4: Poor, clumsy and rigid pronunciation of the sentence as a whole, with serious pronunciation mistakes\n* 0-2: Extremely poor pronunciation and only one or two words are recognizable",
"#### Completeness\nScore range: 0.0 - 1.0\nThe percentage of the words with good pronunciation.",
"#### Fluency\nScore range: 0 - 10\n* 8-10: Fluent without noticeable pauses or stammering\n* 6-7: Fluent in general, with a few pauses, repetition, and stammering\n* 4-5: the speech is a little influent, with many pauses, repetition, and stammering\n* 0-3: intermittent, very influent speech, with lots of pauses, repetition, and stammering",
"#### Prosodic\nScore range: 0 - 10\n* 9-10: Correct intonation at a stable speaking speed, speak with cadence, and can speak like a native\n* 7-8: Nearly correct intonation at a stable speaking speed, nearly smooth and coherent, but with little stammering and few pauses\n* 5-6: Unstable speech speed, many stammering and pauses with a poor sense of rhythm\n* 3-4: Unstable speech speed, speak too fast or too slow, without the sense of rhythm\n* 0-2: Poor intonation and lots of stammering and pauses, unable to read a complete sentence",
"### Word level\nScore the accuracy and stress of each word's pronunciation.",
"#### Accuracy\nScore range: 0 - 10\n* 10: The pronunciation of the word is perfect\n* 7-9: Most phones in this word are pronounced correctly but have accents\n* 4-6: Less than 30% of phones in this word are wrongly pronounced\n* 2-3: More than 30% of phones in this word are wrongly pronounced. In another case, the word is mispronounced as some other word. For example, the student mispronounced the word \"bag\" as \"bike\"\n* 1: The pronunciation is hard to distinguish\n* 0: no voice",
"#### Stress\nScore range: {5, 10}\n* 10: The stress is correct, or this is a mono-syllable word\n* 5: The stress is wrong",
"### Phoneme level\nScore the pronunciation goodness of each phoneme within the words.\n\nScore range: 0-2\n* 2: pronunciation is correct\n* 1: pronunciation is right but has a heavy accent\n* 0: pronunciation is incorrect or missed\n\nFor the phones with an accuracy score lower than 0.5, an extra \"mispronunciations\" indicates which is the most likely phoneme that the current phone was actually pronounced.\nAn example:\n\n\n\nPlease cite our paper if you find this work useful:"
]
| [
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #pronunciation-scoring #region-us \n",
"# speechocean762: A non-native English corpus for pronunciation scoring task",
"## Introduction\nPronunciation scoring is a crucial technology in computer-assisted language learning (CALL) systems. The pronunciation quality scores might be given at phoneme-level, word-level, and sentence-level for a typical pronunciation scoring task.\n\nThis corpus aims to provide a free public dataset for the pronunciation scoring task.\nKey features:\n* It is available for free download for both commercial and non-commercial purposes.\n* The speaker variety encompasses young children and adults.\n* The manual annotations are in multiple aspects at sentence-level, word-level and phoneme-level.\n\nThis corpus consists of 5000 English sentences. All the speakers are non-native, and their mother tongue is Mandarin. Half of the speakers are Children, and the others are adults. The information of age and gender are provided.\n\nFive experts made the scores. To avoid subjective bias, each expert scores independently under the same metric.",
"## Uses",
"## The scoring metric\nThe experts score at three levels: phoneme-level, word-level, and sentence-level.",
"### Sentence level\nScore the accuracy, fluency, completeness and prosodic at the sentence level.",
"#### Accuracy\nScore range: 0 - 10\n* 9-10: The overall pronunciation of the sentence is excellent, with accurate phonology and no obvious pronunciation mistakes\n* 7-8: The overall pronunciation of the sentence is good, with a few pronunciation mistakes\n* 5-6: The overall pronunciation of the sentence is understandable, with many pronunciation mistakes and accent, but it does not affect the understanding of basic meanings\n* 3-4: Poor, clumsy and rigid pronunciation of the sentence as a whole, with serious pronunciation mistakes\n* 0-2: Extremely poor pronunciation and only one or two words are recognizable",
"#### Completeness\nScore range: 0.0 - 1.0\nThe percentage of the words with good pronunciation.",
"#### Fluency\nScore range: 0 - 10\n* 8-10: Fluent without noticeable pauses or stammering\n* 6-7: Fluent in general, with a few pauses, repetition, and stammering\n* 4-5: the speech is a little influent, with many pauses, repetition, and stammering\n* 0-3: intermittent, very influent speech, with lots of pauses, repetition, and stammering",
"#### Prosodic\nScore range: 0 - 10\n* 9-10: Correct intonation at a stable speaking speed, speak with cadence, and can speak like a native\n* 7-8: Nearly correct intonation at a stable speaking speed, nearly smooth and coherent, but with little stammering and few pauses\n* 5-6: Unstable speech speed, many stammering and pauses with a poor sense of rhythm\n* 3-4: Unstable speech speed, speak too fast or too slow, without the sense of rhythm\n* 0-2: Poor intonation and lots of stammering and pauses, unable to read a complete sentence",
"### Word level\nScore the accuracy and stress of each word's pronunciation.",
"#### Accuracy\nScore range: 0 - 10\n* 10: The pronunciation of the word is perfect\n* 7-9: Most phones in this word are pronounced correctly but have accents\n* 4-6: Less than 30% of phones in this word are wrongly pronounced\n* 2-3: More than 30% of phones in this word are wrongly pronounced. In another case, the word is mispronounced as some other word. For example, the student mispronounced the word \"bag\" as \"bike\"\n* 1: The pronunciation is hard to distinguish\n* 0: no voice",
"#### Stress\nScore range: {5, 10}\n* 10: The stress is correct, or this is a mono-syllable word\n* 5: The stress is wrong",
"### Phoneme level\nScore the pronunciation goodness of each phoneme within the words.\n\nScore range: 0-2\n* 2: pronunciation is correct\n* 1: pronunciation is right but has a heavy accent\n* 0: pronunciation is incorrect or missed\n\nFor the phones with an accuracy score lower than 0.5, an extra \"mispronunciations\" indicates which is the most likely phoneme that the current phone was actually pronounced.\nAn example:\n\n\n\nPlease cite our paper if you find this work useful:"
]
| [
54,
19,
213,
3,
27,
26,
133,
20,
94,
137,
19,
129,
36,
105
]
| [
"passage: TAGS\n#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #pronunciation-scoring #region-us \n# speechocean762: A non-native English corpus for pronunciation scoring task## Introduction\nPronunciation scoring is a crucial technology in computer-assisted language learning (CALL) systems. The pronunciation quality scores might be given at phoneme-level, word-level, and sentence-level for a typical pronunciation scoring task.\n\nThis corpus aims to provide a free public dataset for the pronunciation scoring task.\nKey features:\n* It is available for free download for both commercial and non-commercial purposes.\n* The speaker variety encompasses young children and adults.\n* The manual annotations are in multiple aspects at sentence-level, word-level and phoneme-level.\n\nThis corpus consists of 5000 English sentences. All the speakers are non-native, and their mother tongue is Mandarin. Half of the speakers are Children, and the others are adults. The information of age and gender are provided.\n\nFive experts made the scores. To avoid subjective bias, each expert scores independently under the same metric.## Uses## The scoring metric\nThe experts score at three levels: phoneme-level, word-level, and sentence-level.### Sentence level\nScore the accuracy, fluency, completeness and prosodic at the sentence level.#### Accuracy\nScore range: 0 - 10\n* 9-10: The overall pronunciation of the sentence is excellent, with accurate phonology and no obvious pronunciation mistakes\n* 7-8: The overall pronunciation of the sentence is good, with a few pronunciation mistakes\n* 5-6: The overall pronunciation of the sentence is understandable, with many pronunciation mistakes and accent, but it does not affect the understanding of basic meanings\n* 3-4: Poor, clumsy and rigid pronunciation of the sentence as a whole, with serious pronunciation mistakes\n* 0-2: Extremely poor pronunciation and only one or two words are recognizable#### Completeness\nScore range: 0.0 - 1.0\nThe percentage of the words with good pronunciation."
]
|
4dd27ec5437106ac72a6304a5203e2827403b4dc | # This dataset was moved to a new repo ✈️
The data and its repository have relocated to [Silly-Machine/TuPyE-Dataset](https://huggingface.co/datasets/Silly-Machine/TuPyE-Dataset) – they needed a change of scenery! Feel free to explore our other organizational projects while you're there. | FpOliveira/TuPi-Portuguese-Hate-Speech-Dataset | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:10K<n<100K",
"source_datasets:crowdsourced",
"language:pt",
"license:mit",
"hate-speech-detection",
"brazilian-portuguese",
"doi:10.57967/hf/1419",
"region:us"
]
| 2023-11-25T16:40:48+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["pt"], "license": "mit", "size_categories": ["10K<n<100K"], "source_datasets": ["crowdsourced"], "task_categories": ["text-classification"], "pretty_name": "TuPiHateSpeech", "language_bcp47": ["pt-BR"], "tags": ["hate-speech-detection", "brazilian-portuguese"]} | 2023-12-28T19:34:19+00:00 | []
| [
"pt"
]
| TAGS
#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-10K<n<100K #source_datasets-crowdsourced #language-Portuguese #license-mit #hate-speech-detection #brazilian-portuguese #doi-10.57967/hf/1419 #region-us
| # This dataset was moved to a new repo ️
The data and its repository have relocated to Silly-Machine/TuPyE-Dataset – they needed a change of scenery! Feel free to explore our other organizational projects while you're there. | [
"# This dataset was moved to a new repo ️\nThe data and its repository have relocated to Silly-Machine/TuPyE-Dataset – they needed a change of scenery! Feel free to explore our other organizational projects while you're there."
]
| [
"TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-10K<n<100K #source_datasets-crowdsourced #language-Portuguese #license-mit #hate-speech-detection #brazilian-portuguese #doi-10.57967/hf/1419 #region-us \n",
"# This dataset was moved to a new repo ️\nThe data and its repository have relocated to Silly-Machine/TuPyE-Dataset – they needed a change of scenery! Feel free to explore our other organizational projects while you're there."
]
| [
104,
61
]
| [
"passage: TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-10K<n<100K #source_datasets-crowdsourced #language-Portuguese #license-mit #hate-speech-detection #brazilian-portuguese #doi-10.57967/hf/1419 #region-us \n# This dataset was moved to a new repo ️\nThe data and its repository have relocated to Silly-Machine/TuPyE-Dataset – they needed a change of scenery! Feel free to explore our other organizational projects while you're there."
]
|
12ae9e0781a235f0f744c8b66595ef51008a7422 | # Dataset Card for "MesogenesTask1Parquet_ALL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | InfernoDeep/MesogenesTask1Parquet_ALL | [
"region:us"
]
| 2023-11-25T16:52:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": {"sequence": "int8"}}], "splits": [{"name": "train", "num_bytes": 5903889992, "num_examples": 12551}, {"name": "test", "num_bytes": 959599680, "num_examples": 2040}, {"name": "validation", "num_bytes": 1537241056, "num_examples": 3268}], "download_size": 4679673366, "dataset_size": 8400730728}} | 2023-11-25T16:55:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MesogenesTask1Parquet_ALL"
More Information needed | [
"# Dataset Card for \"MesogenesTask1Parquet_ALL\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MesogenesTask1Parquet_ALL\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MesogenesTask1Parquet_ALL\"\n\nMore Information needed"
]
|
e75ecdc9d5aabf096880b732d60a73ca09d41c31 | # Dataset Card for "sdu_es_train_topics_LDA_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomashs/sdu_es_train_topics_LDA_2 | [
"region:us"
]
| 2023-11-25T16:52:48+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "acronym", "dtype": "string"}, {"name": "ID", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "text_prep", "dtype": "string"}, {"name": "topic_vector_LDA", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 7314738, "num_examples": 6267}], "download_size": 2704460, "dataset_size": 7314738}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-25T16:52:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sdu_es_train_topics_LDA_2"
More Information needed | [
"# Dataset Card for \"sdu_es_train_topics_LDA_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sdu_es_train_topics_LDA_2\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sdu_es_train_topics_LDA_2\"\n\nMore Information needed"
]
|
5fbcbfd2e72f317bdaf1301c69291a99f83d015f | # Dataset Card for "sdu_es_dev_topics_LDA_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomashs/sdu_es_dev_topics_LDA_2 | [
"region:us"
]
| 2023-11-25T16:52:52+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "acronym", "dtype": "string"}, {"name": "ID", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "text_prep", "dtype": "string"}, {"name": "topic_vector_LDA", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 948444, "num_examples": 818}], "download_size": 343413, "dataset_size": 948444}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-25T16:52:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sdu_es_dev_topics_LDA_2"
More Information needed | [
"# Dataset Card for \"sdu_es_dev_topics_LDA_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sdu_es_dev_topics_LDA_2\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sdu_es_dev_topics_LDA_2\"\n\nMore Information needed"
]
|
d8ced93c7c77b85900872fd9816f78957a94c9a4 | # Dataset Card for "byt-mal-minpro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bgspaditya/byt-mal-minpro | [
"region:us"
]
| 2023-11-25T17:12:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "type_code", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 43302335.10276401, "num_examples": 520952}, {"name": "val", "num_bytes": 5412791.887845501, "num_examples": 65119}, {"name": "test", "num_bytes": 5412875.009390486, "num_examples": 65120}], "download_size": 32733332, "dataset_size": 54128002.0}} | 2023-11-25T18:08:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "byt-mal-minpro"
More Information needed | [
"# Dataset Card for \"byt-mal-minpro\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"byt-mal-minpro\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"byt-mal-minpro\"\n\nMore Information needed"
]
|
ed9d414ead893b67ff15f5ce06eb3208aa965168 |
# SignalTrain LA2A Dataset (v.1.1)
> Downloadable from https://zenodo.org/records/3824876
20 GB of audio in & audio out for a LA-2A compressor unit, conditioned on knob variations.
LA-2A Compressor data to accompany the paper "SignalTrain: Profiling Audio Compressors with Deep Neural Networks," 147th Audio Engineering Society Convention (AES), 2019. https://arxiv.org/abs/1905.11928
Accompanying computer code: https://github.com/drscotthawley/signaltrain
A collection of recorded data from an analog Teletronix LA-2A opto-electronic compressor, for various settings of the Peak Reduction knob. Other knobs were kept constant.
Audio samples present in these files are either 'randomly generated', or downloaded audio clips with Creative Commons licenses, or are the property of Scott Hawley, freely distributed as part of this dataset.
Data taken by Ben Colburn, supervised by Scott Hawley
## Revisions in v.1.1 of dataset:
Made the following corrections to discrepancies in original dataset:
Only one of file: 235, 236
$ rm Train/target_235_LA2A_2c__0__70.wav
$ rm Val/input_236_.wav
In wrong directory: 245
$ mv Train/input_245_.wav Val/
Mismatched length and time alignment: 148, 148, 149, 150, 152
All had targets delayed by 8583 samples relative to the inputs, and were shorter.
Truncated beginning of inputs to make them the same as targets. Used new script check_dataset.py to fix & overwrite earlier version:
$ signaltrain/utils/check_dataset.py --fix SignalTrain_LA2A_Dataset/
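For orientation, a minimal sketch for pairing inputs with their processed targets; the file-name pattern and the meaning of the trailing numbers are inferred from the examples above, so treat them as assumptions:
```python
import re
from pathlib import Path

def pair_files(split_dir: str):
    """Yield (input_wav, target_wav, peak_reduction) tuples for one split."""
    split = Path(split_dir)
    # e.g. target_235_LA2A_2c__0__70.wav -> id 235; last number assumed to be Peak Reduction
    pattern = re.compile(r"target_(\d+)_LA2A_.*__(\d+)__(\d+)\.wav")
    for target in sorted(split.glob("target_*.wav")):
        m = pattern.match(target.name)
        if m is None:
            continue
        file_id, _switch, peak_reduction = m.groups()
        input_wav = split / f"input_{file_id}_.wav"
        if input_wav.exists():
            yield input_wav, target, int(peak_reduction)

for inp, tgt, pr in pair_files("SignalTrain_LA2A_Dataset/Train"):
    print(inp.name, "->", tgt.name, "peak reduction:", pr)
    break
```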
## Papers dataset was used in:
"Efficient neural networks for real-time analog audio effect modeling" by C. Steinmetz & J. Reiss, 2021. https://arxiv.org/abs/2102.06200
“Exploring quality and generalizability in parameterized neural audio effects," by W. Mitchell and S. H. Hawley, 149th Audio Engineering Society Convention (AES), 2020. https://arxiv.org/abs/2006.05584
"SignalTrain: Profiling Audio Compressors with Deep Neural Networks," 147th Audio Engineering Society Convention (AES), 2019. https://arxiv.org/abs/1905.11928 | drscotthawley/SignalTrain-LA2A | [
"license:cc-by-4.0",
"arxiv:1905.11928",
"arxiv:2102.06200",
"arxiv:2006.05584",
"region:us"
]
| 2023-11-25T17:48:25+00:00 | {"license": "cc-by-4.0"} | 2023-11-25T17:59:40+00:00 | [
"1905.11928",
"2102.06200",
"2006.05584"
]
| []
| TAGS
#license-cc-by-4.0 #arxiv-1905.11928 #arxiv-2102.06200 #arxiv-2006.05584 #region-us
|
# SignalTrain LA2A Dataset (v.1.1)
> Downloadable from URL
20 GB of audio in & audio out for a LA-2A compressor unit, conditioned on knob variations.
LA-2A Compressor data to accompany the paper "SignalTrain: Profiling Audio Compressors with Deep Neural Networks," 147th Audio Engineering Society Convention (AES), 2019. URL
Accompanying computer code: URL
A collection of recorded data from an analog Teletronix LA-2A opto-electronic compressor, for various settings of the Peak Reduction knob. Other knobs were kept constant.
Audio samples present in these files are either 'randomly generated', or downloaded audio clips with Creative Commons licenses, or are the property of Scott Hawley, freely distributed as part of this dataset.
Data taken by Ben Colburn, supervised by Scott Hawley
## Revisions in v.1.1 of dataset:
Made the following corrections to discrepancies in original dataset:
Only one of file: 235, 236
$ rm Train/target_235_LA2A_2c__0__70.wav
$ rm Val/input_236_.wav
In wrong directory: 245
$ mv Train/input_245_.wav Val/
Mismatched length and time alignment: 148, 148, 149, 150, 152
All had targets delayed by 8583 samples relative to the inputs, and were shorter.
Truncated beginning of inputs to make them the same as targets. Used new script check_dataset.py to fix & overwrite earlier version:
$ signaltrain/utils/check_dataset.py --fix SignalTrain_LA2A_Dataset/
## Papers dataset was used in:
"Efficient neural networks for real-time analog audio effect modeling" by C. Steinmetz & J. Reiss, 2021. URL
“Exploring quality and generalizability in parameterized neural audio effects," by W. Mitchell and S. H. Hawley, 149th Audio Engineering Society Convention (AES), 2020. URL
"SignalTrain: Profiling Audio Compressors with Deep Neural Networks," 147th Audio Engineering Society Convention (AES), 2019. URL | [
"# SignalTrain LA2A Dataset (v.1.1)\n\n> Downloadable from URL \n\n20 GB of audio in & audio out for a LA-2A compressor unit, conditioned on knob variations.\n\nLA-2A Compressor data to accompany the paper \"SignalTrain: Profiling Audio Compressors with Deep Neural Networks,\" 147th Audio Engineering Society Convention (AES), 2019. URL \n\n\nAccompanying computer code: URL\n\nA collection of recorded data from an analog Teletronix LA-2A opto-electronic compressor, for various settings of the Peak Reduction knob. Other knobs were kept constant. \n\nAudio samples present in these files are either 'randomly generated', or downloaded audio clips with Create Commons licenses, or are property of Scott Hawley freely distributed as part of this dataset. \n\nData taken by Ben Colburn, supervised by Scott Hawley",
"## Revisions in v.1.1 of dataset: \n\nMade the following corrections to discrepancies in original dataset:\n\nOnly one of file: 235, 236\n\n$ rm Train/target_235_LA2A_2c__0__70.wav\n\n$ rm Val/input_236_.wav\n\nIn wrong directory: 245\n\n$ mv Train/input_245_.wav Val/\n\nMismatched length and time alignment: 148, 148, 149, 150, 152\n\nAll were had targets delayed by 8583 samples relative to inputs, and were shorter.\n\nTruncated beginning of inputs to make them the same as targets. Used new script check_dataset.py to fix & overwrite earlier version:\n\n$ signaltrain/utils/check_dataset.py --fix SignalTrain_LA2A_Dataset/",
"## Papers dataset was used in:\n\n\"Efficient neural networks for real-time analog audio effect modeling\" by C. Steinmetz & J. Reiss, 2021. URL\n\n“Exploring quality and generalizability in parameterized neural audio effects,\" by W. Mitchell and S. H. Hawley, 149th Audio Engineering Society Convention (AES), 2020. URL\n\n\"SignalTrain: Profiling Audio Compressors with Deep Neural Networks,\" 147th Audio Engineering Society Convention (AES), 2019. URL"
]
| [
"TAGS\n#license-cc-by-4.0 #arxiv-1905.11928 #arxiv-2102.06200 #arxiv-2006.05584 #region-us \n",
"# SignalTrain LA2A Dataset (v.1.1)\n\n> Downloadable from URL \n\n20 GB of audio in & audio out for a LA-2A compressor unit, conditioned on knob variations.\n\nLA-2A Compressor data to accompany the paper \"SignalTrain: Profiling Audio Compressors with Deep Neural Networks,\" 147th Audio Engineering Society Convention (AES), 2019. URL \n\n\nAccompanying computer code: URL\n\nA collection of recorded data from an analog Teletronix LA-2A opto-electronic compressor, for various settings of the Peak Reduction knob. Other knobs were kept constant. \n\nAudio samples present in these files are either 'randomly generated', or downloaded audio clips with Create Commons licenses, or are property of Scott Hawley freely distributed as part of this dataset. \n\nData taken by Ben Colburn, supervised by Scott Hawley",
"## Revisions in v.1.1 of dataset: \n\nMade the following corrections to discrepancies in original dataset:\n\nOnly one of file: 235, 236\n\n$ rm Train/target_235_LA2A_2c__0__70.wav\n\n$ rm Val/input_236_.wav\n\nIn wrong directory: 245\n\n$ mv Train/input_245_.wav Val/\n\nMismatched length and time alignment: 148, 148, 149, 150, 152\n\nAll were had targets delayed by 8583 samples relative to inputs, and were shorter.\n\nTruncated beginning of inputs to make them the same as targets. Used new script check_dataset.py to fix & overwrite earlier version:\n\n$ signaltrain/utils/check_dataset.py --fix SignalTrain_LA2A_Dataset/",
"## Papers dataset was used in:\n\n\"Efficient neural networks for real-time analog audio effect modeling\" by C. Steinmetz & J. Reiss, 2021. URL\n\n“Exploring quality and generalizability in parameterized neural audio effects,\" by W. Mitchell and S. H. Hawley, 149th Audio Engineering Society Convention (AES), 2020. URL\n\n\"SignalTrain: Profiling Audio Compressors with Deep Neural Networks,\" 147th Audio Engineering Society Convention (AES), 2019. URL"
]
| [
41,
200,
196,
116
]
| [
"passage: TAGS\n#license-cc-by-4.0 #arxiv-1905.11928 #arxiv-2102.06200 #arxiv-2006.05584 #region-us \n# SignalTrain LA2A Dataset (v.1.1)\n\n> Downloadable from URL \n\n20 GB of audio in & audio out for a LA-2A compressor unit, conditioned on knob variations.\n\nLA-2A Compressor data to accompany the paper \"SignalTrain: Profiling Audio Compressors with Deep Neural Networks,\" 147th Audio Engineering Society Convention (AES), 2019. URL \n\n\nAccompanying computer code: URL\n\nA collection of recorded data from an analog Teletronix LA-2A opto-electronic compressor, for various settings of the Peak Reduction knob. Other knobs were kept constant. \n\nAudio samples present in these files are either 'randomly generated', or downloaded audio clips with Create Commons licenses, or are property of Scott Hawley freely distributed as part of this dataset. \n\nData taken by Ben Colburn, supervised by Scott Hawley## Revisions in v.1.1 of dataset: \n\nMade the following corrections to discrepancies in original dataset:\n\nOnly one of file: 235, 236\n\n$ rm Train/target_235_LA2A_2c__0__70.wav\n\n$ rm Val/input_236_.wav\n\nIn wrong directory: 245\n\n$ mv Train/input_245_.wav Val/\n\nMismatched length and time alignment: 148, 148, 149, 150, 152\n\nAll were had targets delayed by 8583 samples relative to inputs, and were shorter.\n\nTruncated beginning of inputs to make them the same as targets. Used new script check_dataset.py to fix & overwrite earlier version:\n\n$ signaltrain/utils/check_dataset.py --fix SignalTrain_LA2A_Dataset/"
]
|
9fdb29105cedd4eac99e2b7c43fbbeb0911e15ea |
## HOW TO WRANGLE THIS DATASET INTO DPO & CHATML FORMAT
```
from datasets import load_dataset

def return_prompt_and_responses(samples) -> dict[str, list[str]]:
    # Wrap each column in the ChatML role tags expected downstream.
    return {
        "prompt": [
            "<|im_start|>user\n" + i + "<|im_end|>\n"
            for i in samples["PROMPT"]
        ],
        "chosen": [
            "<|im_start|>assistant\n" + j + "<|im_end|>"
            for j in samples["CHOSEN"]
        ],
        "rejected": [
            "<|im_start|>assistant\n" + k + "<|im_end|>"
            for k in samples["REJECTED"]
        ],
    }

dataset = load_dataset("Ichsan2895/DPO_ID-Wiki_10kTesting")
original_columns = dataset["train"].column_names
dataset = dataset.map(
    return_prompt_and_responses,
    batched=True,
    remove_columns=original_columns,
)
```
## HOW TO USE DPO
```
from trl import DPOTrainer

# `model`, `model_ref`, `tokenizer`, and `training_args` are assumed to be
# defined earlier in your SFT/DPO pipeline.
dpo_trainer = DPOTrainer(
    model,                           # base model from SFT pipeline
    model_ref,                       # typically a copy of the SFT trained base model
    beta=0.1,                        # temperature hyperparameter of DPO
    train_dataset=dataset["train"],  # dataset prepared above
    tokenizer=tokenizer,             # tokenizer
    args=training_args,              # training arguments e.g. batch size, lr, etc.
)
```
## CITATION
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
@misc{vonwerra2022trl,
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang},
title = {TRL: Transformer Reinforcement Learning},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` | Ichsan2895/DPO_ID-Wiki_10kTesting | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2023-11-25T17:55:48+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2023-11-25T18:19:29+00:00 | []
| []
| TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
## HOW TO WRANGLE THIS DATASET INTO DPO & CHATML FORMAT
## HOW TO USE DPO
## CITATION
| [
"## HOW TO WRANGLING THIS DATASET TO DPO & CHATML FORMAT",
"## HOW TO USE DPO",
"## CITATION"
]
| [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"## HOW TO WRANGLING THIS DATASET TO DPO & CHATML FORMAT",
"## HOW TO USE DPO",
"## CITATION"
]
| [
19,
22,
8,
4
]
| [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## HOW TO WRANGLING THIS DATASET TO DPO & CHATML FORMAT## HOW TO USE DPO## CITATION"
]
|
d087cec7c56df6c9c18c0fb28f9709847700ecef |
This repo contains the data for the C-VQA-Real dataset. For the complete data and for evaluating your model on our dataset, please refer to https://github.com/Letian2003/C-VQA.
| tennant/C-VQA | [
"region:us"
]
| 2023-11-25T18:31:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "C-VQA-Real_questions.csv"}]}]} | 2023-11-27T16:25:56+00:00 | []
| []
| TAGS
#region-us
|
This repo contains the data for the C-VQA-Real dataset. For the complete data and for evaluating your model on our dataset, please refer to URL
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
6b8a0b9b9d78c52c2f7b46ce59c5e8485daa152d |
# 🚫🤖 Language Model Offensive Text Exploration Dataset
## 🌐 Introduction
This dataset is created based on selected prompts from Table 9 and Table 10 of [Ethan Perez et al.'s paper](https://arxiv.org/abs/2202.03286) "Red Teaming Language Models with Language Models". It is designed to explore the propensity of language models to generate offensive text.
## 📋 Dataset Composition
- **Table 9-Based Prompts**: These prompts are derived from a 280B parameter language model's test cases, focusing on understanding the model's behavior in generating offensive content.
- **Table 10-Based Prompts**: Sourced from test cases created by a 7B parameter Gopher LM and the BAD dataset, these prompts help compare and contrast different models' tendencies to produce offensive language.
## 🎯 Objective
The aim is to examine how language models respond to various prompts that have the potential to elicit offensive text. This exploration seeks to identify and understand the triggers and patterns in language model responses that lead to the generation of such content.
## 🔍 Methodology
By riffing on the selected prompts from the paper, I aim to test these against a language model to observe and analyze the generated responses. This method provides insights into how certain prompts influence the language model's output, particularly in terms of offensiveness.
## 🌍 Usage and Contribution
This dataset can be used by researchers and developers to test their own language models for offensive text generation. The findings from such tests can contribute to improving the ethical and responsible development of AI technologies.
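A rough usage sketch (the `Prompt` column name comes from this dataset's schema; the model below is only a placeholder for whichever LM you want to probe):
```python
from datasets import load_dataset
from transformers import pipeline

prompts = load_dataset("harpreetsahota/elicit-offensive-language-prompts", split="train")

# Any causal LM can be probed; "gpt2" is just a stand-in here.
generator = pipeline("text-generation", model="gpt2")

for row in prompts.select(range(3)):
    out = generator(row["Prompt"], max_new_tokens=40, do_sample=True)
    print(out[0]["generated_text"])
```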
## 🎖️ Goal
The ultimate goal is to enhance our understanding of offensive text generation in language models and to contribute to the development of more nuanced and socially responsible AI systems.
| harpreetsahota/elicit-offensive-language-prompts | [
"arxiv:2202.03286",
"region:us"
]
| 2023-11-25T19:19:34+00:00 | {"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4621, "num_examples": 73}], "download_size": 3313, "dataset_size": 4621}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-29T19:33:16+00:00 | [
"2202.03286"
]
| []
| TAGS
#arxiv-2202.03286 #region-us
|
# Language Model Offensive Text Exploration Dataset
## Introduction
This dataset is created based on selected prompts from Table 9 and Table 10 of Ethan Perez et al.'s paper "Red Teaming Language Models with Language Models". It is designed to explore the propensity of language models to generate offensive text.
## Dataset Composition
- Table 9-Based Prompts: These prompts are derived from a 280B parameter language model's test cases, focusing on understanding the model's behavior in generating offensive content.
- Table 10-Based Prompts: Sourced from test cases created by a 7B parameter Gopher LM and the BAD dataset, these prompts help compare and contrast different models' tendencies to produce offensive language.
## Objective
The aim is to examine how language models respond to various prompts that have the potential to elicit offensive text. This exploration seeks to identify and understand the triggers and patterns in language model responses that lead to the generation of such content.
## Methodology
By riffing on the selected prompts from the paper, I aim to test these against a language model to observe and analyze the generated responses. This method provides insights into how certain prompts influence the language model's output, particularly in terms of offensiveness.
## Usage and Contribution
This dataset can be used by researchers and developers to test their own language models for offensive text generation. The findings from such tests can contribute to improving the ethical and responsible development of AI technologies.
## ️ Goal
The ultimate goal is to enhance our understanding of offensive text generation in language models and to contribute to the development of more nuanced and socially responsible AI systems.
| [
"# Language Model Offensive Text Exploration Dataset",
"## Introduction \nThis dataset is created based on selected prompts from Table 9 and Table 10 of Ethan Perez et al.'s paper \"Red Teaming Language Models with Language Models\". It is designed to explore the propensity of language models to generate offensive text.",
"## Dataset Composition \n- Table 9-Based Prompts: These prompts are derived from a 280B parameter language model's test cases, focusing on understanding the model's behavior in generating offensive content.\n- Table 10-Based Prompts: Sourced from test cases created by a 7B parameter Gopher LM and the BAD dataset, these prompts help compare and contrast different models' tendencies to produce offensive language.",
"## Objective \nThe aim is to examine how language models respond to various prompts that have the potential to elicit offensive text. This exploration seeks to identify and understand the triggers and patterns in language model responses that lead to the generation of such content.",
"## Methodology \nBy riffing on the selected prompts from the paper, I aim to test these against a language model to observe and analyze the generated responses. This method provides insights into how certain prompts influence the language model's output, particularly in terms of offensiveness.",
"## Usage and Contribution \nThis dataset can be used by researchers and developers to test their own language models for offensive text generation. The findings from such tests can contribute to improving the ethical and responsible development of AI technologies.",
"## ️ Goal \nThe ultimate goal is to enhance our understanding of offensive text generation in language models and to contribute to the development of more nuanced and socially responsible AI systems."
]
| [
"TAGS\n#arxiv-2202.03286 #region-us \n",
"# Language Model Offensive Text Exploration Dataset",
"## Introduction \nThis dataset is created based on selected prompts from Table 9 and Table 10 of Ethan Perez et al.'s paper \"Red Teaming Language Models with Language Models\". It is designed to explore the propensity of language models to generate offensive text.",
"## Dataset Composition \n- Table 9-Based Prompts: These prompts are derived from a 280B parameter language model's test cases, focusing on understanding the model's behavior in generating offensive content.\n- Table 10-Based Prompts: Sourced from test cases created by a 7B parameter Gopher LM and the BAD dataset, these prompts help compare and contrast different models' tendencies to produce offensive language.",
"## Objective \nThe aim is to examine how language models respond to various prompts that have the potential to elicit offensive text. This exploration seeks to identify and understand the triggers and patterns in language model responses that lead to the generation of such content.",
"## Methodology \nBy riffing on the selected prompts from the paper, I aim to test these against a language model to observe and analyze the generated responses. This method provides insights into how certain prompts influence the language model's output, particularly in terms of offensiveness.",
"## Usage and Contribution \nThis dataset can be used by researchers and developers to test their own language models for offensive text generation. The findings from such tests can contribute to improving the ethical and responsible development of AI technologies.",
"## ️ Goal \nThe ultimate goal is to enhance our understanding of offensive text generation in language models and to contribute to the development of more nuanced and socially responsible AI systems."
]
| [
15,
11,
60,
97,
57,
64,
51,
40
]
| [
"passage: TAGS\n#arxiv-2202.03286 #region-us \n# Language Model Offensive Text Exploration Dataset## Introduction \nThis dataset is created based on selected prompts from Table 9 and Table 10 of Ethan Perez et al.'s paper \"Red Teaming Language Models with Language Models\". It is designed to explore the propensity of language models to generate offensive text.## Dataset Composition \n- Table 9-Based Prompts: These prompts are derived from a 280B parameter language model's test cases, focusing on understanding the model's behavior in generating offensive content.\n- Table 10-Based Prompts: Sourced from test cases created by a 7B parameter Gopher LM and the BAD dataset, these prompts help compare and contrast different models' tendencies to produce offensive language.## Objective \nThe aim is to examine how language models respond to various prompts that have the potential to elicit offensive text. This exploration seeks to identify and understand the triggers and patterns in language model responses that lead to the generation of such content.## Methodology \nBy riffing on the selected prompts from the paper, I aim to test these against a language model to observe and analyze the generated responses. This method provides insights into how certain prompts influence the language model's output, particularly in terms of offensiveness.## Usage and Contribution \nThis dataset can be used by researchers and developers to test their own language models for offensive text generation. The findings from such tests can contribute to improving the ethical and responsible development of AI technologies.## ️ Goal \nThe ultimate goal is to enhance our understanding of offensive text generation in language models and to contribute to the development of more nuanced and socially responsible AI systems."
]
|
44b6391c630ef1f86299d3b4dcca492ea43b9b8c |
# 🕵️♂️🤖 Language Model Bias Exploration
## 🌐 Introduction
In this dataset, I've adopted the approach from ["Red Teaming Language Models with Language Models"](https://arxiv.org/abs/2202.03286) by Ethan Perez et al., focusing on exploring and understanding distributional bias in language models (LMs).
## 🎯 Purpose of the Prompts
The prompts in this repository, riffs on the prompts presented in Table 12 and Table 13 of Perez et al.'s paper, serve a crucial role. They are designed to elicit responses from LMs that reveal how different groups are represented and discussed. These prompts help in identifying distributional biases - biases in the frequency and context in which LMs portray certain groups, which might be negative or stereotypical.
## 📊 Addressing Distributional Bias
Distributional bias is a subtle yet pervasive form of bias where certain groups are more often associated with negative contexts or sentiments. This project aims to uncover such biases in LMs by analyzing how these models respond to various group-related prompts.
## 📈 Dataset and Analysis
The dataset comprises variations of prompts used to test and analyze the responses of LMs. By examining these responses, I aim to shed light on the biases present in current language models, contributing to the field of AI ethics.
## 🎖️ Goal
The ultimate goal of this exploration is to contribute towards more ethical and responsible AI development, ensuring that language models treat all groups with fairness and without bias.
| harpreetsahota/elicit-bias-prompts | [
"arxiv:2202.03286",
"region:us"
]
| 2023-11-25T19:42:28+00:00 | {"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3851, "num_examples": 64}], "download_size": 2447, "dataset_size": 3851}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-29T19:20:01+00:00 | [
"2202.03286"
]
| []
| TAGS
#arxiv-2202.03286 #region-us
|
# ️️ Language Model Bias Exploration
## Introduction
In this dataset, I've adopted the approach from "Red Teaming Language Models with Language Models" by Ethan Perez et al., focusing on exploring and understanding distributional bias in language models (LMs).
## Purpose of the Prompts
The prompts in this repository are riffs on the prompts presented in Table 12 and Table 13 of Perez et al.'s paper, and they serve a crucial role. They are designed to elicit responses from LMs that reveal how different groups are represented and discussed. These prompts help in identifying distributional biases - biases in the frequency and context in which LMs portray certain groups, which might be negative or stereotypical.
## Addressing Distributional Bias
Distributional bias is a subtle yet pervasive form of bias where certain groups are more often associated with negative contexts or sentiments. This project aims to uncover such biases in LMs by analyzing how these models respond to various group-related prompts.
## Dataset and Analysis
The dataset comprises variations of prompts used to test and analyze the responses of LMs. By examining these responses, I aim to shed light on the biases present in current language models, contributing to the field of AI ethics.
## ️ Goal
The ultimate goal of this exploration is to contribute towards more ethical and responsible AI development, ensuring that language models treat all groups with fairness and without bias.
| [
"# ️️ Language Model Bias Exploration",
"## Introduction \nIn this dataset, I've adopted the approach from \"Red Teaming Language Models with Language Models\" by Ethan Perez et al., focusing on exploring and understanding distributional bias in language models (LMs).",
"## Purpose of the Prompts \nThe prompts in this repository are riffs on the prompts presented in by Table 12 and Tabel 13 in Perez et al.'s paper, serve a crucial role. They are designed to elicit responses from LMs that reveal how different groups are represented and discussed. These prompts help in identifying distributional biases - biases in the frequency and context in which LMs portray certain groups, which might be negative or stereotypical.",
"## Addressing Distributional Bias \nDistributional bias is a subtle yet pervasive form of bias where certain groups are more often associated with negative contexts or sentiments. This project aims to uncover such biases in LMs by analyzing how these models respond to various group-related prompts.",
"## Dataset and Analysis \nThe dataset comprises variations of prompts used to test and analyze the responses of LMs. By examining these responses, I aim to shed light on the biases present in current language models, contributing to the field of AI ethics.",
"## ️ Goal \nThe ultimate goal of this exploration is to contribute towards more ethical and responsible AI development, ensuring that language models treat all groups with fairness and without bias."
]
| [
"TAGS\n#arxiv-2202.03286 #region-us \n",
"# ️️ Language Model Bias Exploration",
"## Introduction \nIn this dataset, I've adopted the approach from \"Red Teaming Language Models with Language Models\" by Ethan Perez et al., focusing on exploring and understanding distributional bias in language models (LMs).",
"## Purpose of the Prompts \nThe prompts in this repository are riffs on the prompts presented in by Table 12 and Tabel 13 in Perez et al.'s paper, serve a crucial role. They are designed to elicit responses from LMs that reveal how different groups are represented and discussed. These prompts help in identifying distributional biases - biases in the frequency and context in which LMs portray certain groups, which might be negative or stereotypical.",
"## Addressing Distributional Bias \nDistributional bias is a subtle yet pervasive form of bias where certain groups are more often associated with negative contexts or sentiments. This project aims to uncover such biases in LMs by analyzing how these models respond to various group-related prompts.",
"## Dataset and Analysis \nThe dataset comprises variations of prompts used to test and analyze the responses of LMs. By examining these responses, I aim to shed light on the biases present in current language models, contributing to the field of AI ethics.",
"## ️ Goal \nThe ultimate goal of this exploration is to contribute towards more ethical and responsible AI development, ensuring that language models treat all groups with fairness and without bias."
]
| [
15,
11,
55,
112,
68,
67,
43
]
| [
"passage: TAGS\n#arxiv-2202.03286 #region-us \n# ️️ Language Model Bias Exploration## Introduction \nIn this dataset, I've adopted the approach from \"Red Teaming Language Models with Language Models\" by Ethan Perez et al., focusing on exploring and understanding distributional bias in language models (LMs).## Purpose of the Prompts \nThe prompts in this repository are riffs on the prompts presented in by Table 12 and Tabel 13 in Perez et al.'s paper, serve a crucial role. They are designed to elicit responses from LMs that reveal how different groups are represented and discussed. These prompts help in identifying distributional biases - biases in the frequency and context in which LMs portray certain groups, which might be negative or stereotypical.## Addressing Distributional Bias \nDistributional bias is a subtle yet pervasive form of bias where certain groups are more often associated with negative contexts or sentiments. This project aims to uncover such biases in LMs by analyzing how these models respond to various group-related prompts.## Dataset and Analysis \nThe dataset comprises variations of prompts used to test and analyze the responses of LMs. By examining these responses, I aim to shed light on the biases present in current language models, contributing to the field of AI ethics.## ️ Goal \nThe ultimate goal of this exploration is to contribute towards more ethical and responsible AI development, ensuring that language models treat all groups with fairness and without bias."
]
|
d56d2558657f93cba561d2b358bea3e4244dc606 | dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | DiegoMVM/DefinicionesDerechoPeruano121Palabras | [
"region:us"
]
| 2023-11-25T20:01:16+00:00 | {} | 2023-11-26T00:13:13+00:00 | []
| []
| TAGS
#region-us
| dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
dd4281edea9c866c5433322140946096a4dee5e3 |
# Language Model Testing Dataset 📊🤖
## Introduction 🌐
This repository provides a dataset inspired by the paper ["Explore, Establish, Exploit: Red Teaming Language Models from Scratch"](https://arxiv.org/abs/2306.09442). It's designed for anyone interested in testing language models (LMs) for biases, toxicity, and misinformation.
## Dataset Origin 📝
The dataset is based on examples from Tables 7 and 8 of the paper, which illustrate how prompts can elicit not just biased but also toxic or nonsensical responses from LMs.
### Toxicity and Untruths 🤬
The prompts here, derived from red-teaming GPT-3-text-davinci-002 with classifiers trained on the CREAK dataset, are intended to elicit responses that can reveal tendencies towards toxicity or untruths.
### Nonsense Responses 🤪
Similarly, the prompts from Table 8 are structured to test LM responses for nonsensical or toxic content. These were initially used against GPT-3-text-davinci-002 with classifiers trained on ChatGPT-3.5-turbo labels.
## Purpose of the Dataset 🎯
This dataset is provided as a tool for researchers and developers to test their own LMs. It's particularly useful for evaluating how different models handle potentially problematic content, whether it's biased, toxic, or factually incorrect.
## Using the Dataset 🛠️
Feel free to use this dataset to assess the response patterns of any LM. It's a valuable resource for identifying areas where LMs might need improvement in handling sensitive or complex content.
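A rough sketch of that workflow, assuming only the single `Prompt` column in this repo; the GPT-2 generator and the `score_toxicity` helper below are stand-ins for your own target model and classifier:
```python
from datasets import load_dataset
from transformers import pipeline

# 37 adversarial prompts, all in the "train" split under a single "Prompt" column.
dataset = load_dataset("harpreetsahota/adversarial-prompts", split="train")

# Stand-in target model; replace it with the LM you are actually evaluating.
target_lm = pipeline("text-generation", model="gpt2")

def score_toxicity(text: str) -> float:
    """Hypothetical hook: plug in your own toxicity/untruth classifier here."""
    raise NotImplementedError

for row in dataset:
    output = target_lm(row["Prompt"], max_new_tokens=50)[0]["generated_text"]
    # score = score_toxicity(output)  # flag problematic completions for review
    print(row["Prompt"], "->", output[:120])
```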
## Goal 🎖️
The aim is to facilitate broader research into making LMs safer, more reliable, and ethically responsible by providing a ready-to-use dataset for testing and analysis.
| harpreetsahota/adversarial-prompts | [
"arxiv:2306.09442",
"region:us"
]
| 2023-11-25T20:14:19+00:00 | {"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2366, "num_examples": 37}], "download_size": 2228, "dataset_size": 2366}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-29T19:25:57+00:00 | [
"2306.09442"
]
| []
| TAGS
#arxiv-2306.09442 #region-us
|
# Language Model Testing Dataset
## Introduction
This repository provides a dataset inspired by the paper "Explore, Establish, Exploit: Red Teaming Language Models from Scratch". It's designed for anyone interested in testing language models (LMs) for biases, toxicity, and misinformation.
## Dataset Origin
The dataset is based on examples from Tables 7 and 8 of the paper, which illustrate how prompts can elicit not just biased but also toxic or nonsensical responses from LMs.
### Toxicity and Untruths
The prompts here, derived from red-teaming GPT-3-text-davinci-002 with classifiers trained on the CREAK dataset, are intended to elicit responses that can reveal tendencies towards toxicity or untruths.
### Nonsense Responses
Similarly, the prompts from Table 8 are structured to test LM responses for nonsensical or toxic content. These were initially used against GPT-3-text-davinci-002 with classifiers trained on ChatGPT-3.5-turbo labels.
## Purpose of the Dataset
This dataset is provided as a tool for researchers and developers to test their own LMs. It's particularly useful for evaluating how different models handle potentially problematic content, whether it's biased, toxic, or factually incorrect.
## Using the Dataset ️
Feel free to use this dataset to assess the response patterns of any LM. It's a valuable resource for identifying areas where LMs might need improvement in handling sensitive or complex content.
## Goal ️
The aim is to facilitate broader research into making LMs safer, more reliable, and ethically responsible by providing a ready-to-use dataset for testing and analysis.
| [
"# Language Model Testing Dataset",
"## Introduction \nThis repository provides a dataset inspired by the paper \"Explore, Establish, Exploit: Red Teaming Language Models from Scratch\" It's designed for anyone interested in testing language models (LMs) for biases, toxicity, and misinformation.",
"## Dataset Origin \nThe dataset is based on examples from Tables 7 and 8 of the paper, which illustrate how prompts can elicit not just biased but also toxic or nonsensical responses from LMs.",
"### Toxicity and Untruths \nThe prompts here, derived from red-teaming GPT-3-text-davinci-002 with classifiers trained on the CREAK dataset, are intended to elicit responses that can reveal tendencies towards toxicity or untruths.",
"### Nonsense Responses \nSimilarly, the prompts from Table 8 are structured to test LM responses for nonsensical or toxic content. These were initially used against GPT-3-text-davinci-002 with classifiers trained on ChatGPT-3.5-turbo labels.",
"## Purpose of the Dataset \nThis dataset is provided as a tool for researchers and developers to test their own LMs. It's particularly useful for evaluating how different models handle potentially problematic content, whether it's biased, toxic, or factually incorrect.",
"## Using the Dataset ️\nFeel free to use this dataset to assess the response patterns of any LM. It's a valuable resource for identifying areas where LMs might need improvement in handling sensitive or complex content.",
"## Goal ️\nThe aim is to facilitate broader research into making LMs safer, more reliable, and ethically responsible by providing a ready-to-use dataset for testing and analysis."
]
| [
"TAGS\n#arxiv-2306.09442 #region-us \n",
"# Language Model Testing Dataset",
"## Introduction \nThis repository provides a dataset inspired by the paper \"Explore, Establish, Exploit: Red Teaming Language Models from Scratch\" It's designed for anyone interested in testing language models (LMs) for biases, toxicity, and misinformation.",
"## Dataset Origin \nThe dataset is based on examples from Tables 7 and 8 of the paper, which illustrate how prompts can elicit not just biased but also toxic or nonsensical responses from LMs.",
"### Toxicity and Untruths \nThe prompts here, derived from red-teaming GPT-3-text-davinci-002 with classifiers trained on the CREAK dataset, are intended to elicit responses that can reveal tendencies towards toxicity or untruths.",
"### Nonsense Responses \nSimilarly, the prompts from Table 8 are structured to test LM responses for nonsensical or toxic content. These were initially used against GPT-3-text-davinci-002 with classifiers trained on ChatGPT-3.5-turbo labels.",
"## Purpose of the Dataset \nThis dataset is provided as a tool for researchers and developers to test their own LMs. It's particularly useful for evaluating how different models handle potentially problematic content, whether it's biased, toxic, or factually incorrect.",
"## Using the Dataset ️\nFeel free to use this dataset to assess the response patterns of any LM. It's a valuable resource for identifying areas where LMs might need improvement in handling sensitive or complex content.",
"## Goal ️\nThe aim is to facilitate broader research into making LMs safer, more reliable, and ethically responsible by providing a ready-to-use dataset for testing and analysis."
]
| [
14,
7,
64,
50,
67,
67,
62,
50,
46
]
| [
"passage: TAGS\n#arxiv-2306.09442 #region-us \n# Language Model Testing Dataset## Introduction \nThis repository provides a dataset inspired by the paper \"Explore, Establish, Exploit: Red Teaming Language Models from Scratch\" It's designed for anyone interested in testing language models (LMs) for biases, toxicity, and misinformation.## Dataset Origin \nThe dataset is based on examples from Tables 7 and 8 of the paper, which illustrate how prompts can elicit not just biased but also toxic or nonsensical responses from LMs.### Toxicity and Untruths \nThe prompts here, derived from red-teaming GPT-3-text-davinci-002 with classifiers trained on the CREAK dataset, are intended to elicit responses that can reveal tendencies towards toxicity or untruths.### Nonsense Responses \nSimilarly, the prompts from Table 8 are structured to test LM responses for nonsensical or toxic content. These were initially used against GPT-3-text-davinci-002 with classifiers trained on ChatGPT-3.5-turbo labels.## Purpose of the Dataset \nThis dataset is provided as a tool for researchers and developers to test their own LMs. It's particularly useful for evaluating how different models handle potentially problematic content, whether it's biased, toxic, or factually incorrect.## Using the Dataset ️\nFeel free to use this dataset to assess the response patterns of any LM. It's a valuable resource for identifying areas where LMs might need improvement in handling sensitive or complex content.## Goal ️\nThe aim is to facilitate broader research into making LMs safer, more reliable, and ethically responsible by providing a ready-to-use dataset for testing and analysis."
]
|
fcd3c8ad8e621a064d4f08286c46946066a100e6 | # Dataset Card for "latent-trees-agreement-GEN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | michaelginn/latent-trees-agreement-GEN | [
"region:us"
]
| 2023-11-25T20:47:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "depth", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 105894.0, "num_examples": 2400}, {"name": "eval", "num_bytes": 35298.0, "num_examples": 800}, {"name": "test", "num_bytes": 56229, "num_examples": 800}], "download_size": 63746, "dataset_size": 197421.0}} | 2023-12-14T03:45:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "latent-trees-agreement-GEN"
More Information needed | [
"# Dataset Card for \"latent-trees-agreement-GEN\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent-trees-agreement-GEN\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"latent-trees-agreement-GEN\"\n\nMore Information needed"
]
|
ef9835291e7e1a8402a870b83aa598b2a4398e94 |
# Dataset Card for Evaluation run of NurtureAI/Orca-2-7B-16k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NurtureAI/Orca-2-7B-16k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NurtureAI/Orca-2-7B-16k](https://huggingface.co/NurtureAI/Orca-2-7B-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k_public",
"harness_winogrande_5",
split="train")
```
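The per-task details follow the same pattern; for instance, to pull one of the MMLU sub-task configurations recorded for this run (using either the "latest" split or a timestamped one such as "2023_11_25T21_39_02.599324"):
```python
from datasets import load_dataset

# Any per-task configuration of this run can be loaded the same way.
details = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k_public",
    "harness_hendrycksTest_abstract_algebra_5",
    split="latest",
)
```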
## Latest results
These are the [latest results from run 2023-11-25T21:39:02.599324](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k_public/blob/main/results_2023-11-25T21-39-02.599324.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.36746546712957223,
"acc_stderr": 0.033751277531008754,
"acc_norm": 0.3738175555586316,
"acc_norm_stderr": 0.03459812342976094,
"mc1": 0.28886168910648713,
"mc1_stderr": 0.01586634640138431,
"mc2": 0.45373679597767685,
"mc2_stderr": 0.015753224924844992,
"em": 0.21046560402684564,
"em_stderr": 0.004174608410380015,
"f1": 0.267364723154363,
"f1_stderr": 0.004242093940617827
},
"harness|arc:challenge|25": {
"acc": 0.4735494880546075,
"acc_stderr": 0.014590931358120174,
"acc_norm": 0.5059726962457338,
"acc_norm_stderr": 0.014610348300255795
},
"harness|hellaswag|10": {
"acc": 0.47410874327823144,
"acc_stderr": 0.004983087049281742,
"acc_norm": 0.6389165504879506,
"acc_norm_stderr": 0.00479333052565621
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4342105263157895,
"acc_stderr": 0.0403356566784832,
"acc_norm": 0.4342105263157895,
"acc_norm_stderr": 0.0403356566784832
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.39622641509433965,
"acc_stderr": 0.030102793781791194,
"acc_norm": 0.39622641509433965,
"acc_norm_stderr": 0.030102793781791194
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.3819444444444444,
"acc_stderr": 0.040629907841466674,
"acc_norm": 0.3819444444444444,
"acc_norm_stderr": 0.040629907841466674
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.046482319871173156,
"acc_norm": 0.31,
"acc_norm_stderr": 0.046482319871173156
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3468208092485549,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.3468208092485549,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.045766654032077636,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.045766654032077636
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2851063829787234,
"acc_stderr": 0.02951319662553935,
"acc_norm": 0.2851063829787234,
"acc_norm_stderr": 0.02951319662553935
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.04227054451232199,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.04227054451232199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.38620689655172413,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.38620689655172413,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24603174603174602,
"acc_stderr": 0.022182037202948368,
"acc_norm": 0.24603174603174602,
"acc_norm_stderr": 0.022182037202948368
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.23015873015873015,
"acc_stderr": 0.037649508797906066,
"acc_norm": 0.23015873015873015,
"acc_norm_stderr": 0.037649508797906066
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4,
"acc_stderr": 0.02786932057166463,
"acc_norm": 0.4,
"acc_norm_stderr": 0.02786932057166463
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.33497536945812806,
"acc_stderr": 0.033208527423483104,
"acc_norm": 0.33497536945812806,
"acc_norm_stderr": 0.033208527423483104
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5454545454545454,
"acc_stderr": 0.038881769216741004,
"acc_norm": 0.5454545454545454,
"acc_norm_stderr": 0.038881769216741004
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.4292929292929293,
"acc_stderr": 0.03526552724601198,
"acc_norm": 0.4292929292929293,
"acc_norm_stderr": 0.03526552724601198
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5284974093264249,
"acc_stderr": 0.03602573571288441,
"acc_norm": 0.5284974093264249,
"acc_norm_stderr": 0.03602573571288441
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.34102564102564104,
"acc_stderr": 0.024035489676335065,
"acc_norm": 0.34102564102564104,
"acc_norm_stderr": 0.024035489676335065
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.026067159222275794,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.026067159222275794
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3067226890756303,
"acc_stderr": 0.02995382389188704,
"acc_norm": 0.3067226890756303,
"acc_norm_stderr": 0.02995382389188704
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.26490066225165565,
"acc_stderr": 0.036030385453603826,
"acc_norm": 0.26490066225165565,
"acc_norm_stderr": 0.036030385453603826
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5229357798165137,
"acc_stderr": 0.0214147570581755,
"acc_norm": 0.5229357798165137,
"acc_norm_stderr": 0.0214147570581755
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2916666666666667,
"acc_stderr": 0.03099866630456052,
"acc_norm": 0.2916666666666667,
"acc_norm_stderr": 0.03099866630456052
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5490196078431373,
"acc_stderr": 0.03492406104163613,
"acc_norm": 0.5490196078431373,
"acc_norm_stderr": 0.03492406104163613
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6033755274261603,
"acc_stderr": 0.03184399873811226,
"acc_norm": 0.6033755274261603,
"acc_norm_stderr": 0.03184399873811226
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.4304932735426009,
"acc_stderr": 0.033231973029429394,
"acc_norm": 0.4304932735426009,
"acc_norm_stderr": 0.033231973029429394
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.45038167938931295,
"acc_stderr": 0.04363643698524779,
"acc_norm": 0.45038167938931295,
"acc_norm_stderr": 0.04363643698524779
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.4049586776859504,
"acc_stderr": 0.04481137755942469,
"acc_norm": 0.4049586776859504,
"acc_norm_stderr": 0.04481137755942469
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.04750077341199986,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.04750077341199986
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3803680981595092,
"acc_stderr": 0.03814269893261837,
"acc_norm": 0.3803680981595092,
"acc_norm_stderr": 0.03814269893261837
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.25,
"acc_stderr": 0.04109974682633932,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04109974682633932
},
"harness|hendrycksTest-management|5": {
"acc": 0.36893203883495146,
"acc_stderr": 0.047776151811567386,
"acc_norm": 0.36893203883495146,
"acc_norm_stderr": 0.047776151811567386
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.43162393162393164,
"acc_stderr": 0.0324483553531149,
"acc_norm": 0.43162393162393164,
"acc_norm_stderr": 0.0324483553531149
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.40485312899106,
"acc_stderr": 0.017553246467720256,
"acc_norm": 0.40485312899106,
"acc_norm_stderr": 0.017553246467720256
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.3959537572254335,
"acc_stderr": 0.026329813341946243,
"acc_norm": 0.3959537572254335,
"acc_norm_stderr": 0.026329813341946243
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24134078212290502,
"acc_stderr": 0.014310999547961464,
"acc_norm": 0.24134078212290502,
"acc_norm_stderr": 0.014310999547961464
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.3954248366013072,
"acc_stderr": 0.027996723180631438,
"acc_norm": 0.3954248366013072,
"acc_norm_stderr": 0.027996723180631438
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.36012861736334406,
"acc_stderr": 0.027264297599804015,
"acc_norm": 0.36012861736334406,
"acc_norm_stderr": 0.027264297599804015
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.027513747284379424,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.027513747284379424
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.31560283687943264,
"acc_stderr": 0.02772498944950931,
"acc_norm": 0.31560283687943264,
"acc_norm_stderr": 0.02772498944950931
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.29921773142112124,
"acc_stderr": 0.01169537463069603,
"acc_norm": 0.29921773142112124,
"acc_norm_stderr": 0.01169537463069603
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.3897058823529412,
"acc_stderr": 0.029624663581159696,
"acc_norm": 0.3897058823529412,
"acc_norm_stderr": 0.029624663581159696
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.3349673202614379,
"acc_stderr": 0.019094228167000325,
"acc_norm": 0.3349673202614379,
"acc_norm_stderr": 0.019094228167000325
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.37272727272727274,
"acc_stderr": 0.04631381319425463,
"acc_norm": 0.37272727272727274,
"acc_norm_stderr": 0.04631381319425463
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.4204081632653061,
"acc_stderr": 0.03160106993449604,
"acc_norm": 0.4204081632653061,
"acc_norm_stderr": 0.03160106993449604
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.472636815920398,
"acc_stderr": 0.035302355173346824,
"acc_norm": 0.472636815920398,
"acc_norm_stderr": 0.035302355173346824
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-virology|5": {
"acc": 0.35542168674698793,
"acc_stderr": 0.03726214354322415,
"acc_norm": 0.35542168674698793,
"acc_norm_stderr": 0.03726214354322415
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.32748538011695905,
"acc_stderr": 0.035993357714560276,
"acc_norm": 0.32748538011695905,
"acc_norm_stderr": 0.035993357714560276
},
"harness|truthfulqa:mc|0": {
"mc1": 0.28886168910648713,
"mc1_stderr": 0.01586634640138431,
"mc2": 0.45373679597767685,
"mc2_stderr": 0.015753224924844992
},
"harness|winogrande|5": {
"acc": 0.5422257300710339,
"acc_stderr": 0.014002284504422435
},
"harness|drop|3": {
"em": 0.21046560402684564,
"em_stderr": 0.004174608410380015,
"f1": 0.267364723154363,
"f1_stderr": 0.004242093940617827
},
"harness|gsm8k|5": {
"acc": 0.015163002274450341,
"acc_stderr": 0.0033660229497263225
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k | [
"region:us"
]
| 2023-11-25T21:42:06+00:00 | {"pretty_name": "Evaluation run of NurtureAI/Orca-2-7B-16k", "dataset_summary": "Dataset automatically created during the evaluation run of model [NurtureAI/Orca-2-7B-16k](https://huggingface.co/NurtureAI/Orca-2-7B-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-25T21:39:02.599324](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k_public/blob/main/results_2023-11-25T21-39-02.599324.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.36746546712957223,\n \"acc_stderr\": 0.033751277531008754,\n \"acc_norm\": 0.3738175555586316,\n \"acc_norm_stderr\": 0.03459812342976094,\n \"mc1\": 0.28886168910648713,\n \"mc1_stderr\": 0.01586634640138431,\n \"mc2\": 0.45373679597767685,\n \"mc2_stderr\": 0.015753224924844992,\n \"em\": 0.21046560402684564,\n \"em_stderr\": 0.004174608410380015,\n \"f1\": 0.267364723154363,\n \"f1_stderr\": 0.004242093940617827\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4735494880546075,\n \"acc_stderr\": 0.014590931358120174,\n \"acc_norm\": 0.5059726962457338,\n \"acc_norm_stderr\": 0.014610348300255795\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.47410874327823144,\n \"acc_stderr\": 0.004983087049281742,\n \"acc_norm\": 0.6389165504879506,\n \"acc_norm_stderr\": 0.00479333052565621\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4342105263157895,\n \"acc_stderr\": 0.0403356566784832,\n \"acc_norm\": 0.4342105263157895,\n \"acc_norm_stderr\": 0.0403356566784832\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.39622641509433965,\n \"acc_stderr\": 0.030102793781791194,\n \"acc_norm\": 0.39622641509433965,\n \"acc_norm_stderr\": 0.030102793781791194\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3819444444444444,\n \"acc_stderr\": 0.040629907841466674,\n \"acc_norm\": 0.3819444444444444,\n 
\"acc_norm_stderr\": 0.040629907841466674\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.046482319871173156,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.046482319871173156\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3468208092485549,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.3468208092485549,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.045766654032077636,\n \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.045766654032077636\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2851063829787234,\n \"acc_stderr\": 0.02951319662553935,\n \"acc_norm\": 0.2851063829787234,\n \"acc_norm_stderr\": 0.02951319662553935\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n \"acc_stderr\": 0.04227054451232199,\n \"acc_norm\": 0.2807017543859649,\n \"acc_norm_stderr\": 0.04227054451232199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.38620689655172413,\n \"acc_stderr\": 0.04057324734419035,\n \"acc_norm\": 0.38620689655172413,\n \"acc_norm_stderr\": 0.04057324734419035\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.022182037202948368,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.022182037202948368\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23015873015873015,\n \"acc_stderr\": 0.037649508797906066,\n \"acc_norm\": 0.23015873015873015,\n \"acc_norm_stderr\": 0.037649508797906066\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.02786932057166463,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.02786932057166463\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.33497536945812806,\n \"acc_stderr\": 0.033208527423483104,\n \"acc_norm\": 0.33497536945812806,\n \"acc_norm_stderr\": 0.033208527423483104\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5454545454545454,\n \"acc_stderr\": 0.038881769216741004,\n \"acc_norm\": 0.5454545454545454,\n \"acc_norm_stderr\": 0.038881769216741004\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.4292929292929293,\n \"acc_stderr\": 0.03526552724601198,\n \"acc_norm\": 0.4292929292929293,\n \"acc_norm_stderr\": 0.03526552724601198\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.5284974093264249,\n \"acc_stderr\": 0.03602573571288441,\n \"acc_norm\": 
0.5284974093264249,\n \"acc_norm_stderr\": 0.03602573571288441\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.34102564102564104,\n \"acc_stderr\": 0.024035489676335065,\n \"acc_norm\": 0.34102564102564104,\n \"acc_norm_stderr\": 0.024035489676335065\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.026067159222275794,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.026067159222275794\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.3067226890756303,\n \"acc_stderr\": 0.02995382389188704,\n \"acc_norm\": 0.3067226890756303,\n \"acc_norm_stderr\": 0.02995382389188704\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.26490066225165565,\n \"acc_stderr\": 0.036030385453603826,\n \"acc_norm\": 0.26490066225165565,\n \"acc_norm_stderr\": 0.036030385453603826\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5229357798165137,\n \"acc_stderr\": 0.0214147570581755,\n \"acc_norm\": 0.5229357798165137,\n \"acc_norm_stderr\": 0.0214147570581755\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2916666666666667,\n \"acc_stderr\": 0.03099866630456052,\n \"acc_norm\": 0.2916666666666667,\n \"acc_norm_stderr\": 0.03099866630456052\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5490196078431373,\n \"acc_stderr\": 0.03492406104163613,\n \"acc_norm\": 0.5490196078431373,\n \"acc_norm_stderr\": 0.03492406104163613\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6033755274261603,\n \"acc_stderr\": 0.03184399873811226,\n \"acc_norm\": 0.6033755274261603,\n \"acc_norm_stderr\": 0.03184399873811226\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.4304932735426009,\n \"acc_stderr\": 0.033231973029429394,\n \"acc_norm\": 0.4304932735426009,\n \"acc_norm_stderr\": 0.033231973029429394\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.45038167938931295,\n \"acc_stderr\": 0.04363643698524779,\n \"acc_norm\": 0.45038167938931295,\n \"acc_norm_stderr\": 0.04363643698524779\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.4049586776859504,\n \"acc_stderr\": 0.04481137755942469,\n \"acc_norm\": 0.4049586776859504,\n \"acc_norm_stderr\": 0.04481137755942469\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.04750077341199986,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.04750077341199986\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3803680981595092,\n \"acc_stderr\": 0.03814269893261837,\n \"acc_norm\": 0.3803680981595092,\n \"acc_norm_stderr\": 0.03814269893261837\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04109974682633932,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04109974682633932\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.36893203883495146,\n \"acc_stderr\": 0.047776151811567386,\n \"acc_norm\": 0.36893203883495146,\n \"acc_norm_stderr\": 0.047776151811567386\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.43162393162393164,\n \"acc_stderr\": 0.0324483553531149,\n \"acc_norm\": 0.43162393162393164,\n \"acc_norm_stderr\": 0.0324483553531149\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.40485312899106,\n \"acc_stderr\": 0.017553246467720256,\n \"acc_norm\": 0.40485312899106,\n \"acc_norm_stderr\": 0.017553246467720256\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.3959537572254335,\n \"acc_stderr\": 0.026329813341946243,\n \"acc_norm\": 0.3959537572254335,\n \"acc_norm_stderr\": 0.026329813341946243\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24134078212290502,\n \"acc_stderr\": 0.014310999547961464,\n \"acc_norm\": 0.24134078212290502,\n \"acc_norm_stderr\": 0.014310999547961464\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.3954248366013072,\n \"acc_stderr\": 0.027996723180631438,\n \"acc_norm\": 0.3954248366013072,\n \"acc_norm_stderr\": 0.027996723180631438\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.36012861736334406,\n \"acc_stderr\": 0.027264297599804015,\n \"acc_norm\": 0.36012861736334406,\n \"acc_norm_stderr\": 0.027264297599804015\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.027513747284379424,\n \"acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.027513747284379424\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.31560283687943264,\n \"acc_stderr\": 0.02772498944950931,\n \"acc_norm\": 0.31560283687943264,\n \"acc_norm_stderr\": 0.02772498944950931\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.29921773142112124,\n \"acc_stderr\": 0.01169537463069603,\n \"acc_norm\": 0.29921773142112124,\n \"acc_norm_stderr\": 0.01169537463069603\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3897058823529412,\n \"acc_stderr\": 0.029624663581159696,\n \"acc_norm\": 0.3897058823529412,\n \"acc_norm_stderr\": 0.029624663581159696\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.3349673202614379,\n \"acc_stderr\": 0.019094228167000325,\n \"acc_norm\": 0.3349673202614379,\n \"acc_norm_stderr\": 0.019094228167000325\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.37272727272727274,\n \"acc_stderr\": 0.04631381319425463,\n \"acc_norm\": 0.37272727272727274,\n \"acc_norm_stderr\": 0.04631381319425463\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.4204081632653061,\n \"acc_stderr\": 0.03160106993449604,\n \"acc_norm\": 0.4204081632653061,\n \"acc_norm_stderr\": 0.03160106993449604\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.472636815920398,\n \"acc_stderr\": 0.035302355173346824,\n \"acc_norm\": 0.472636815920398,\n \"acc_norm_stderr\": 0.035302355173346824\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.35542168674698793,\n \"acc_stderr\": 0.03726214354322415,\n \"acc_norm\": 0.35542168674698793,\n \"acc_norm_stderr\": 0.03726214354322415\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.32748538011695905,\n \"acc_stderr\": 0.035993357714560276,\n \"acc_norm\": 0.32748538011695905,\n \"acc_norm_stderr\": 0.035993357714560276\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28886168910648713,\n \"mc1_stderr\": 0.01586634640138431,\n \"mc2\": 0.45373679597767685,\n \"mc2_stderr\": 0.015753224924844992\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5422257300710339,\n \"acc_stderr\": 0.014002284504422435\n },\n \"harness|drop|3\": 
{\n \"em\": 0.21046560402684564,\n \"em_stderr\": 0.004174608410380015,\n \"f1\": 0.267364723154363,\n \"f1_stderr\": 0.004242093940617827\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.015163002274450341,\n \"acc_stderr\": 0.0033660229497263225\n }\n}\n```", "repo_url": "https://huggingface.co/NurtureAI/Orca-2-7B-16k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|arc:challenge|25_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|drop|3_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|gsm8k|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hellaswag|10_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T21-39-02.599324.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T21-39-02.599324.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T21-39-02.599324.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T21-39-02.599324.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T21-39-02.599324.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["**/details_harness|winogrande|5_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-25T21-39-02.599324.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_25T21_39_02.599324", "path": ["results_2023-11-25T21-39-02.599324.parquet"]}, {"split": "latest", "path": ["results_2023-11-25T21-39-02.599324.parquet"]}]}]} | 2023-11-25T21:42:51+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of NurtureAI/Orca-2-7B-16k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NurtureAI/Orca-2-7B-16k on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
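A minimal sketch of the loading pattern is given below; the repository id is an assumption based on the leaderboard's usual naming convention for details datasets and is only illustrative.

```python
from datasets import load_dataset

# Hypothetical repository id, following the leaderboard's "details_<org>__<model>" naming scheme.
data = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__Orca-2-7B-16k",
    "harness_winogrande_5",
    split="train",
)
print(data[0])
```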
## Latest results
These are the latest results from run 2023-11-25T21:39:02.599324 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of NurtureAI/Orca-2-7B-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Orca-2-7B-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T21:39:02.599324(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NurtureAI/Orca-2-7B-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Orca-2-7B-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T21:39:02.599324(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NurtureAI/Orca-2-7B-16k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/Orca-2-7B-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-25T21:39:02.599324(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
c51cf847b99221d9a4397979a6c731b839b87c8f |
# Dataset Card for Evaluation run of NurtureAI/openchat_3.5-16k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NurtureAI/openchat_3.5-16k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NurtureAI/openchat_3.5-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k_public",
"harness_winogrande_5",
split="train")
```
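
The aggregated metrics described above can be fetched in the same way through the "results" configuration; this is a minimal sketch, assuming the "results" configuration exposes the same timestamped and "latest" splits as the per-task configurations listed in this card's metadata.

```python
from datasets import load_dataset

# Load the aggregated run-level metrics; "latest" points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k_public",
    "results",
    split="latest",
)
print(results[0])  # inspect the stored aggregated scores for this run
```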
## Latest results
These are the [latest results from run 2023-11-25T22:20:43.061836](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k_public/blob/main/results_2023-11-25T22-20-43.061836.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6150624189136383,
"acc_stderr": 0.0326145578895764,
"acc_norm": 0.6229469261918253,
"acc_norm_stderr": 0.0333127688298104,
"mc1": 0.29865361077111385,
"mc1_stderr": 0.01602157061376854,
"mc2": 0.43468174693453937,
"mc2_stderr": 0.014850723705548515,
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388745,
"f1": 0.06930893456375835,
"f1_stderr": 0.0014539755752351418
},
"harness|arc:challenge|25": {
"acc": 0.5853242320819113,
"acc_stderr": 0.014397070564409174,
"acc_norm": 0.6331058020477816,
"acc_norm_stderr": 0.014084133118104296
},
"harness|hellaswag|10": {
"acc": 0.6290579565823541,
"acc_stderr": 0.004820697457420415,
"acc_norm": 0.8357896833300139,
"acc_norm_stderr": 0.0036970918376320757
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.631578947368421,
"acc_stderr": 0.03925523381052932,
"acc_norm": 0.631578947368421,
"acc_norm_stderr": 0.03925523381052932
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6875,
"acc_stderr": 0.038760854559127644,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.038760854559127644
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.44,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.42,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5191489361702127,
"acc_stderr": 0.03266204299064678,
"acc_norm": 0.5191489361702127,
"acc_norm_stderr": 0.03266204299064678
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583703,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583703
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6,
"acc_stderr": 0.040824829046386284,
"acc_norm": 0.6,
"acc_norm_stderr": 0.040824829046386284
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.02519710107424648,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.02519710107424648
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.023287665127268552,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.023287665127268552
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.0347769116216366,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.0347769116216366
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7525252525252525,
"acc_stderr": 0.030746300742124495,
"acc_norm": 0.7525252525252525,
"acc_norm_stderr": 0.030746300742124495
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919443,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6461538461538462,
"acc_stderr": 0.024243783994062153,
"acc_norm": 0.6461538461538462,
"acc_norm_stderr": 0.024243783994062153
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.028037929969114993,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.028037929969114993
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.03120469122515002,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.03120469122515002
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.029102254389674082,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.029102254389674082
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7557251908396947,
"acc_stderr": 0.03768335959728744,
"acc_norm": 0.7557251908396947,
"acc_norm_stderr": 0.03768335959728744
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794089,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794089
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489122,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489122
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8071519795657727,
"acc_stderr": 0.014108533515757435,
"acc_norm": 0.8071519795657727,
"acc_norm_stderr": 0.014108533515757435
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7109826589595376,
"acc_stderr": 0.02440517393578323,
"acc_norm": 0.7109826589595376,
"acc_norm_stderr": 0.02440517393578323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3865921787709497,
"acc_stderr": 0.016286674879101026,
"acc_norm": 0.3865921787709497,
"acc_norm_stderr": 0.016286674879101026
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818737,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818737
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.02616058445014045,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.02616058445014045
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7160493827160493,
"acc_stderr": 0.02508947852376513,
"acc_norm": 0.7160493827160493,
"acc_norm_stderr": 0.02508947852376513
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.45390070921985815,
"acc_stderr": 0.02970045324729147,
"acc_norm": 0.45390070921985815,
"acc_norm_stderr": 0.02970045324729147
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4452411994784876,
"acc_stderr": 0.012693421303973294,
"acc_norm": 0.4452411994784876,
"acc_norm_stderr": 0.012693421303973294
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6437908496732027,
"acc_stderr": 0.0193733324207245,
"acc_norm": 0.6437908496732027,
"acc_norm_stderr": 0.0193733324207245
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8059701492537313,
"acc_stderr": 0.02796267760476892,
"acc_norm": 0.8059701492537313,
"acc_norm_stderr": 0.02796267760476892
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.029913127232368036,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.029913127232368036
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29865361077111385,
"mc1_stderr": 0.01602157061376854,
"mc2": 0.43468174693453937,
"mc2_stderr": 0.014850723705548515
},
"harness|winogrande|5": {
"acc": 0.8011049723756906,
"acc_stderr": 0.011218629972515316
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388745,
"f1": 0.06930893456375835,
"f1_stderr": 0.0014539755752351418
},
"harness|gsm8k|5": {
"acc": 0.21834723275208492,
"acc_stderr": 0.011379497266738047
}
}
```
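
As an illustration of how these entries can be consumed programmatically, the sketch below downloads the raw results file linked above with `huggingface_hub` and averages the per-subtask MMLU accuracies; the assumption that the per-task metrics sit under a top-level "results" key is a schema guess, so the code falls back to the whole document if that key is absent.

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the raw results file referenced in this section.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k_public",
    filename="results_2023-11-25T22-20-43.061836.json",
    repo_type="dataset",
)
with open(path) as f:
    raw = json.load(f)

# Schema assumption: per-task metrics live under a "results" key; fall back otherwise.
results = raw.get("results", raw)

# Average accuracy over the MMLU (hendrycksTest) subtasks reported above.
mmlu_accs = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(f"MMLU (hendrycksTest) average accuracy: {sum(mmlu_accs) / len(mmlu_accs):.4f}")
```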
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k | [
"region:us"
]
| 2023-11-25T22:23:43+00:00 | {"pretty_name": "Evaluation run of NurtureAI/openchat_3.5-16k", "dataset_summary": "Dataset automatically created during the evaluation run of model [NurtureAI/openchat_3.5-16k](https://huggingface.co/NurtureAI/openchat_3.5-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-25T22:20:43.061836](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k_public/blob/main/results_2023-11-25T22-20-43.061836.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6150624189136383,\n \"acc_stderr\": 0.0326145578895764,\n \"acc_norm\": 0.6229469261918253,\n \"acc_norm_stderr\": 0.0333127688298104,\n \"mc1\": 0.29865361077111385,\n \"mc1_stderr\": 0.01602157061376854,\n \"mc2\": 0.43468174693453937,\n \"mc2_stderr\": 0.014850723705548515,\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388745,\n \"f1\": 0.06930893456375835,\n \"f1_stderr\": 0.0014539755752351418\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5853242320819113,\n \"acc_stderr\": 0.014397070564409174,\n \"acc_norm\": 0.6331058020477816,\n \"acc_norm_stderr\": 0.014084133118104296\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6290579565823541,\n \"acc_stderr\": 0.004820697457420415,\n \"acc_norm\": 0.8357896833300139,\n \"acc_norm_stderr\": 0.0036970918376320757\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.631578947368421,\n \"acc_stderr\": 0.03925523381052932,\n \"acc_norm\": 0.631578947368421,\n \"acc_norm_stderr\": 0.03925523381052932\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 
0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5191489361702127,\n \"acc_stderr\": 0.03266204299064678,\n \"acc_norm\": 0.5191489361702127,\n \"acc_norm_stderr\": 0.03266204299064678\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.040824829046386284,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.040824829046386284\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.02519710107424648,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.02519710107424648\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7870967741935484,\n \"acc_stderr\": 0.023287665127268552,\n \"acc_norm\": 0.7870967741935484,\n \"acc_norm_stderr\": 0.023287665127268552\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7525252525252525,\n \"acc_stderr\": 0.030746300742124495,\n \"acc_norm\": 0.7525252525252525,\n \"acc_norm_stderr\": 0.030746300742124495\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n 
\"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6461538461538462,\n \"acc_stderr\": 0.024243783994062153,\n \"acc_norm\": 0.6461538461538462,\n \"acc_norm_stderr\": 0.024243783994062153\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.028037929969114993,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.028037929969114993\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.03120469122515002,\n \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.03120469122515002\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.029102254389674082,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.029102254389674082\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.03768335959728744,\n \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.03768335959728744\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794089,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794089\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n \"acc_stderr\": 0.04745033255489122,\n \"acc_norm\": 0.5089285714285714,\n \"acc_norm_stderr\": 0.04745033255489122\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8071519795657727,\n \"acc_stderr\": 0.014108533515757435,\n \"acc_norm\": 0.8071519795657727,\n \"acc_norm_stderr\": 0.014108533515757435\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3865921787709497,\n \"acc_stderr\": 0.016286674879101026,\n \"acc_norm\": 0.3865921787709497,\n \"acc_norm_stderr\": 0.016286674879101026\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818737,\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818737\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.02616058445014045,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 0.02616058445014045\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.02508947852376513,\n \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.02508947852376513\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.45390070921985815,\n \"acc_stderr\": 0.02970045324729147,\n \"acc_norm\": 0.45390070921985815,\n \"acc_norm_stderr\": 0.02970045324729147\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4452411994784876,\n \"acc_stderr\": 0.012693421303973294,\n \"acc_norm\": 0.4452411994784876,\n \"acc_norm_stderr\": 0.012693421303973294\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6437908496732027,\n \"acc_stderr\": 0.0193733324207245,\n \"acc_norm\": 0.6437908496732027,\n \"acc_norm_stderr\": 0.0193733324207245\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8059701492537313,\n \"acc_stderr\": 0.02796267760476892,\n \"acc_norm\": 0.8059701492537313,\n \"acc_norm_stderr\": 0.02796267760476892\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29865361077111385,\n \"mc1_stderr\": 0.01602157061376854,\n \"mc2\": 0.43468174693453937,\n \"mc2_stderr\": 0.014850723705548515\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8011049723756906,\n \"acc_stderr\": 0.011218629972515316\n },\n \"harness|drop|3\": {\n \"em\": 
0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388745,\n \"f1\": 0.06930893456375835,\n \"f1_stderr\": 0.0014539755752351418\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.21834723275208492,\n \"acc_stderr\": 0.011379497266738047\n }\n}\n```", "repo_url": "https://huggingface.co/NurtureAI/openchat_3.5-16k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|arc:challenge|25_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|drop|3_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|gsm8k|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hellaswag|10_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T22-20-43.061836.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T22-20-43.061836.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-25T22-20-43.061836.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-25T22-20-43.061836.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-25T22-20-43.061836.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["**/details_harness|winogrande|5_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-25T22-20-43.061836.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_25T22_20_43.061836", "path": ["results_2023-11-25T22-20-43.061836.parquet"]}, {"split": "latest", "path": ["results_2023-11-25T22-20-43.061836.parquet"]}]}]} | 2023-11-25T22:24:30+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of NurtureAI/openchat_3.5-16k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NurtureAI/openchat_3.5-16k on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
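A minimal sketch of that loading call, assuming the standard `datasets` API. The repository id below is a placeholder (this card does not spell out the details repo name); the config name and the "latest" split are taken from this card's metadata:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual details repository for this model.
data = load_dataset(
    "open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k",
    "harness_winogrande_5",  # one of the 64 configs listed in this card's metadata
    split="latest",          # "latest" always points to the most recent run
)
print(data[0])
```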
## Latest results
These are the latest results from run 2023-11-25T22:20:43.061836 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of NurtureAI/openchat_3.5-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/openchat_3.5-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T22:20:43.061836(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NurtureAI/openchat_3.5-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/openchat_3.5-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-25T22:20:43.061836(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
19,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NurtureAI/openchat_3.5-16k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NurtureAI/openchat_3.5-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-25T22:20:43.061836(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
63d956cb4c362095219e4d015d82b7ed2b5c4d5f | The original dataset is from Amod/mental_health_counseling_conversations and has been modified to be used for training Mistral 7B. | sbgs/mental-health-dataset-mistral-7b | [
"region:us"
]
| 2023-11-25T22:28:22+00:00 | {} | 2023-11-25T22:31:25+00:00 | []
| []
| TAGS
#region-us
| The original dataset is from Amod/mental_health_counseling_conversations and has been modified to be used for training Mistral 7B. | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
2737927e966cbf05108ca5a373faac080bb3d864 | # Amawal Warayni
Bitext scraped from the online [AmaWar](https://amawalwarayni.com/) dictionary of the Tamazight dialect of Ait Warain spoken in northeastern Morocco.
Contains sentences, stories, and poems in Tamazight along with their translations into Modern Standard Arabic.
Big thanks to Dr. Noureddine Amhaoui for his amazing work.
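For readers who want to work with the data directly, a minimal loading sketch follows, assuming the standard `datasets` API. The config names come from this card's metadata, where "examples" is marked as the default, and the split name follows the usual `datasets` default of "train":

```python
from datasets import load_dataset

# "examples" is the default config; other configs listed in the metadata are
# expressions, proverbs, riddles, stories and poems, each backed by TSV files.
ds = load_dataset("Tamazight-NLP/AmaWar", "examples", split="train")
print(ds[0])  # one Tamazight / Modern Standard Arabic pair
```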
# Citation
```
نور الدين أمهاوي. (2021). معجم محوسب لمعاني الأسماء والأفعال الأمازيغية الوارينية أمازيغي-عربي.
تاريخ الاسترداد 15 11، 2023، من https://amawalwarayni.com/
```
| Tamazight-NLP/AmaWar | [
"task_categories:translation",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:ber",
"language:tzm",
"language:ar",
"region:us"
]
| 2023-11-25T23:19:50+00:00 | {"language": ["ber", "tzm", "ar"], "size_categories": ["1K<n<10K"], "task_categories": ["translation", "text2text-generation"], "pretty_name": "Amawal Warayni", "configs": [{"config_name": "examples", "data_files": "examples.tsv", "sep": "\t", "default": true}, {"config_name": "expressions", "data_files": "expressions.tsv", "sep": "\t"}, {"config_name": "proverbs", "data_files": "proverbs.tsv", "sep": "\t"}, {"config_name": "riddles", "data_files": "riddles.tsv", "sep": "\t"}, {"config_name": "stories", "data_files": "stories/*.tsv", "sep": "\t"}, {"config_name": "poems", "data_files": "poems/*.tsv", "sep": "\t"}]} | 2024-01-07T18:08:33+00:00 | []
| [
"ber",
"tzm",
"ar"
]
| TAGS
#task_categories-translation #task_categories-text2text-generation #size_categories-1K<n<10K #language-ber #language-Central Atlas Tamazight #language-Arabic #region-us
| # Amawal Warayni
Bitext scraped from the online AmaWar dictionary of the Tamazight dialect of Ait Warain spoken in northeastern Morocco.
Contains sentences, stories, and poems in Tamazight along with their translations into Modern Standard Arabic.
Big thanks to Dr. Noureddine Amhaoui for his amazing work.
| [
"# Amawal Warayni\n\nBitext scraped from the online AmaWar dictionary of the Tamazight dialect of Ait Warain spoken in northeastern Morocco.\n\nContains sentences, stories, and poems in Tamazight along with their translations into Modern Standard Arabic.\n\nBig thanks to Dr. Noureddine Amhaoui for his amazing work."
]
| [
"TAGS\n#task_categories-translation #task_categories-text2text-generation #size_categories-1K<n<10K #language-ber #language-Central Atlas Tamazight #language-Arabic #region-us \n",
"# Amawal Warayni\n\nBitext scraped from the online AmaWar dictionary of the Tamazight dialect of Ait Warain spoken in northeastern Morocco.\n\nContains sentences, stories, and poems in Tamazight along with their translations into Modern Standard Arabic.\n\nBig thanks to Dr. Noureddine Amhaoui for his amazing work."
]
| [
58,
80
]
| [
"passage: TAGS\n#task_categories-translation #task_categories-text2text-generation #size_categories-1K<n<10K #language-ber #language-Central Atlas Tamazight #language-Arabic #region-us \n# Amawal Warayni\n\nBitext scraped from the online AmaWar dictionary of the Tamazight dialect of Ait Warain spoken in northeastern Morocco.\n\nContains sentences, stories, and poems in Tamazight along with their translations into Modern Standard Arabic.\n\nBig thanks to Dr. Noureddine Amhaoui for his amazing work."
]
|
70d247d921d3b4589dd93908db422b525200b295 | # Dataset Card for "phi-winogrande_inverted_option-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/phi-winogrande_inverted_option-results | [
"region:us"
]
| 2023-11-26T00:46:14+00:00 | {"dataset_info": [{"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": 
"likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": 
"{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": 
"null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 
'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": 
"train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": 
"prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, 
"dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}", "features": [{"name": "id", "dtype": "null"}, {"name": "prediction", "dtype": "null"}, {"name": "likelihood", "dtype": "null"}, {"name": "perplexity", "dtype": "null"}, {"name": "accuracy", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 1342, "dataset_size": 0}], "configs": [{"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 
'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 
'beams'=1, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=1, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=10, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 
'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.9, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": 
"{'do_sample'=True, 'beams'=5, 'temperature'=0.95, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=100, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=1000, 'top_p'=0.9}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.8}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.95}/train-*"}]}, {"config_name": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}", "data_files": [{"split": "train", "path": "{'do_sample'=True, 'beams'=5, 'temperature'=1.0, 'top_k'=10000, 'top_p'=0.9}/train-*"}]}]} | 2023-11-26T02:11:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "phi-winogrande_inverted_option-results"
More Information needed | [
"# Dataset Card for \"phi-winogrande_inverted_option-results\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"phi-winogrande_inverted_option-results\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"phi-winogrande_inverted_option-results\"\n\nMore Information needed"
]
|
6bd80430a0ca939f12419e5ec7a6202cb0c97e69 | # [SA-Med2D-20M](https://arxiv.org/abs/2311.11969)

The largest benchmark dataset for segmentation in the field of medical imaging.
As is well known, the emergence of ImageNet has greatly propelled the development of AI, especially deep learning. It has provided massive data and powerful baseline models for the computer vision community, enabling researchers to achieve breakthroughs in tasks such as natural image classification, segmentation, and detection. However, the medical imaging realm lacks a comparably large dataset for developing powerful medical models.
To address this gap in the medical field, we are introducing the largest benchmark dataset for medical image segmentation. This initiative aims to drive the rapid development of AI in healthcare and accelerate the transformation of computational medicine in a more inclusive direction.
Please visit the [GitHub](https://github.com/OpenGVLab/SAM-Med2D) page and explore the dataset further!
Due to data privacy and ethical requirements, we currently only provide access to a 16M dataset. We will keep updating and maintaining this database. Please stay tuned for further updates from us.
## 👉 Filesystem Hierarchy
```bash
~/SAM-Med2D-20M
├── images
| ├── mr_00--ACDC--patient001_frame01--x_0006.png
| ├── mr_t1--BraTS2021--BraTS2021_00218--z_0141.png
| ├── ...
| ├── ct_00--CAD_PE--001--x_0125.png
| ├── x_ray--covid_19_ct_cxr--16660_5_1--2d_none.png
|
├── masks
| ├── mr_00--ACDC--patient001_frame01--x_0006--0000_000.png
| ├── mr_t1--BraTS2021--BraTS2021_00218--z_0141--0011_000.png
| ├── ...
| ├── ct_00--CAD_PE--001--x_0125--0000_002.png
| ├── x_ray--covid_19_ct_cxr--16660_5_1--2d_none--0000_001.png
|
├── SAMed2D_v1_class_mapping_id.json
|
├── SAMed2D_v1.json
```
The SA-Med2D-20M dataset is named following the convention below:
```bash
-images
    -{modality_sub-modality}--{dataset name}--{ori name}--{dimension_slice}.png
-masks
    -{modality_sub-modality}--{dataset name}--{ori name}--{dimension_slice}--{class instance_id}.png
```
Note: "sub-modality" applies only to 3D data, and when "sub-modality" is "00," it indicates either the absence of a sub-modality or an unknown sub-modality type. "dataset name" refers to the specific dataset name that the case is from. "ori name" is the original case name in its dataset. "dimension slice", e.g., "x_100", indicates the dimension along which we split a 3D case as well as the slice ID in this dimension. If we split a 3D case with axis x and the current slice is 100, then the term can be "x_0100". For 2D datasets, the "dimension_slice id" is uniformly set to "2d_none". "class instance_id", unique to masks, encapsulates both category information and instance id, and the detailed information is stored in the "SAMed2D_v1_class_mapping_id.json" file. For instance, if the category "liver" is assigned the ID "0003" and there is only one instance of this category in the case, the "class instance_id" can be denoted as "0003_000". Besides, the category "liver" in the "SAMed2D_v1_class_mapping_id.json" file is formulated as key-value pair with _python-dict_ format: \{"liver": "0003"\}.
The file "SAMed2D_v1_class_mapping_id.json" stores the information for converting class instances. The file "SAMed2D_v1.json" contains the path information for all image and mask pairs.
## 👉 Unzipping split zip files
Windows:

Decompress `SA-Med2D-16M.zip`; the remaining split volumes are extracted together automatically.

Linux:

1. `zip SA-Med2D-16M.zip SA-Med2D-16M.z0* SA-Med2D-16M.z10 -s=0 --out {full}.zip`
2. `unzip {full}.zip`
## 🤝 免责声明
- SA-Med2D-20M是由多个公开的数据集组成,旨在取之于社区,回馈于社区,为研究人员和开发者提供一个用于学术和技术研究的资源。使用本数据集的任何个人或组织(以下统称为“使用者”)需遵守以下免责声明:
1. 数据集来源:本数据集由多个公开的数据集组成,这些数据集的来源已在预印版论文中明确标明。使用者应当遵守原始数据集的相关许可和使用条款。
2. 数据准确性:尽管我们已经努力确保数据集的准确性和完整性,但无法对数据集的准确性作出保证。使用者应自行承担使用数据集可能带来的风险和责任。
3. 责任限制:在任何情况下,数据集的提供者及相关贡献者均不对使用者的任何行为或结果承担责任。
4. 使用约束:使用者在使用本数据集时,应遵守适用的法律法规和伦理规范。使用者不得将本数据集用于非法、侵犯隐私、诽谤、歧视或其他违法或不道德的目的。
5. 知识产权:本数据集的知识产权归原始数据集的相关权利人所有,使用者不得以任何方式侵犯数据集的知识产权。
- 作为非盈利机构,团队倡导和谐友好的开源交流环境,若在开源数据集内发现有侵犯您合法权益的内容,可发送邮件至([email protected], [email protected]),邮件中请写明侵权相关事实的详细描述并向我们提供相关的权属证明资料。我们将于3个工作日内启动调查处理机制,并采取必要的措施进行处置(如下架相关数据)。但应确保您投诉的真实性,否则采取措施后所产生的不利后果应由您独立承担。
- 通过下载、复制、访问或使用本数据集,即表示使用者已阅读、理解并同意遵守本免责声明中的所有条款和条件。如果使用者无法接受本免责声明的任何部分,请勿使用本数据集。
## 🤝 Disclaimer
- SA-Med2D-20M is composed of multiple publicly available datasets and aims to provide a resource for academic and technical research to researchers and developers. Any individual or organization (hereinafter referred to as "User") using this dataset must comply with the following disclaimer:
1. Dataset Source: SA-Med2D-20M is composed of multiple publicly available datasets, and the sources of these datasets have been clearly indicated in the preprint paper. Users should adhere to the relevant licenses and terms of use of the original datasets.
2. Data Accuracy: While efforts have been made to ensure the accuracy and completeness of the dataset, no guarantee can be given regarding its accuracy. Users assume all risks and liabilities associated with the use of the dataset.
3. Limitation of Liability: Under no circumstances shall the dataset providers or contributors be held liable for any actions or outcomes of the Users.
4. Usage Constraints: Users must comply with applicable laws, regulations, and ethical norms when using this dataset. The dataset must not be used for illegal, privacy-infringing, defamatory, discriminatory, or other unlawful or unethical purposes.
5. Intellectual Property: The intellectual property rights of this dataset belong to the relevant rights holders of the original datasets. Users must not infringe upon the intellectual property rights of the dataset in any way.
- As a non-profit organization, we advocate for a harmonious and friendly open-source communication environment. If any content in the open dataset is found to infringe upon your legitimate rights and interests, you can send an email to ([email protected], [email protected]) with a detailed description of the infringement and provide relevant ownership proof materials. We will initiate an investigation and handling mechanism within three working days and take necessary measures (such as removing relevant data) if warranted. However, the authenticity of your complaint must be ensured, as any adverse consequences resulting from the measures taken shall be borne solely by you.
- By downloading, copying, accessing, or using this dataset, the User indicates that they have read, understood, and agreed to comply with all the terms and conditions of this disclaimer. If the User cannot accept any part of this disclaimer, please refrain from using this dataset.
## 🤝 Acknowledgement
- We thank all medical workers and dataset owners for making public datasets available to the community. If you find that your dataset is included in our SA-Med2D-20M but do not want it to be included, please contact us and we will remove it.
## 👋 Hiring & Global Collaboration
- **Hiring:** We are hiring researchers, engineers, and interns in General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.
- **Global Collaboration:** We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.
- **Contact:** Junjun He([email protected]), Jin Ye([email protected]), and Tianbin Li ([email protected]).
## 👉 Typos of paper
1. Formula (1) in the paper is incorrect; the corrected version is: <img src="https://i.postimg.cc/sXRK4MKh/20231123001020.png" alt="corrected formula (1)" width="202" height="50">
## Reference
```
@misc{ye2023samed2d20m,
title={SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks},
author={Jin Ye and Junlong Cheng and Jianpin Chen and Zhongying Deng and Tianbin Li and Haoyu Wang and Yanzhou Su and Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Min Zhu and Shaoting Zhang and Junjun He and Yu Qiao},
year={2023},
eprint={2311.11969},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
@misc{cheng2023sammed2d,
title={SAM-Med2D},
author={Junlong Cheng and Jin Ye and Zhongying Deng and Jianpin Chen and Tianbin Li and Haoyu Wang and Yanzhou Su and
Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Junjun He and Shaoting Zhang and Min Zhu and Yu Qiao},
year={2023},
eprint={2308.16184},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| OpenGVLab/SA-Med2D-20M | [
"license:cc-by-nc-sa-4.0",
"arxiv:2311.11969",
"arxiv:2308.16184",
"region:us"
]
| 2023-11-26T01:24:54+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2023-12-04T00:50:56+00:00 | [
"2311.11969",
"2308.16184"
]
| []
| TAGS
#license-cc-by-nc-sa-4.0 #arxiv-2311.11969 #arxiv-2308.16184 #region-us
| # SA-Med2D-20M
!Image
The largest benchmark dataset for segmentation in the field of medical imaging.
As is well known, the emergence of ImageNet has greatly propelled the development of AI, especially deep learning. It has provided massive data and powerful baseline models for the computer vision community, enabling researchers to achieve breakthroughs in tasks such as natural image classification, segmentation, and detection. However, in the medical image realm, there lack of such a large dataset for developing powerful medical models.
To address the gap in the medical field, we are introducing the largest benchmark dataset for medical image segmentation. This initiative aims to drive the rapid development of AI in healthcare and accelerate the transformation of computational medicine towards a more inclusive direction.
Please visit the GitHub page and further exploit the dataset!
Due to data privacy and ethical requirements, we currently only provide access to a 16M dataset. We will keep updating and maintaining this database. Please stay tuned for further updates from us.
## Filesystem Hierarchy
The SA-Med2D-20M dataset is named following the convention below:
Note: "sub-modality" applies only to 3D data, and when "sub-modality" is "00," it indicates either the absence of a sub-modality or an unknown sub-modality type. "dataset name" refers to the specific dataset name that the case is from. "ori name" is the original case name in its dataset. "dimension slice", e.g., "x_100", indicates the dimension along which we split a 3D case as well as the slice ID in this dimension. If we split a 3D case with axis x and the current slice is 100, then the term can be "x_0100". For 2D datasets, the "dimension_slice id" is uniformly set to "2d_none". "class instance_id", unique to masks, encapsulates both category information and instance id, and the detailed information is stored in the "SAMed2D_v1_class_mapping_id.json" file. For instance, if the category "liver" is assigned the ID "0003" and there is only one instance of this category in the case, the "class instance_id" can be denoted as "0003_000". Besides, the category "liver" in the "SAMed2D_v1_class_mapping_id.json" file is formulated as key-value pair with _python-dict_ format: \{"liver": "0003"\}.
The file "SAMed2D_v1_class_mapping_id.json" stores the information for converting class instances. The file "SAMed2D_v1.json" contains the path information for all image and mask pairs.
## Unzipping split zip files
Windows:
decompress URL to automatically extract the other volumes together.
Linux:
1. zip URL SA-Med2D-16M.z0* SA-Med2D-16M.z10 -s=0 --out {full}.zip
2. unzip {full}.zip
## 免责声明
- SA-Med2D-20M是由多个公开的数据集组成,旨在取之于社区,回馈于社区,为研究人员和开发者提供一个用于学术和技术研究的资源。使用本数据集的任何个人或组织(以下统称为“使用者”)需遵守以下免责声明:
1. 数据集来源:本数据集由多个公开的数据集组成,这些数据集的来源已在预印版论文中明确标明。使用者应当遵守原始数据集的相关许可和使用条款。
2. 数据准确性:尽管我们已经努力确保数据集的准确性和完整性,但无法对数据集的准确性作出保证。使用者应自行承担使用数据集可能带来的风险和责任。
3. 责任限制:在任何情况下,数据集的提供者及相关贡献者均不对使用者的任何行为或结果承担责任。
4. 使用约束:使用者在使用本数据集时,应遵守适用的法律法规和伦理规范。使用者不得将本数据集用于非法、侵犯隐私、诽谤、歧视或其他违法或不道德的目的。
5. 知识产权:本数据集的知识产权归原始数据集的相关权利人所有,使用者不得以任何方式侵犯数据集的知识产权。
- 作为非盈利机构,团队倡导和谐友好的开源交流环境,若在开源数据集内发现有侵犯您合法权益的内容,可发送邮件至(yejin@URL, chengjunlong@URL),邮件中请写明侵权相关事实的详细描述并向我们提供相关的权属证明资料。我们将于3个工作日内启动调查处理机制,并采取必要的措施进行处置(如下架相关数据)。但应确保您投诉的真实性,否则采取措施后所产生的不利后果应由您独立承担。
- 通过下载、复制、访问或使用本数据集,即表示使用者已阅读、理解并同意遵守本免责声明中的所有条款和条件。如果使用者无法接受本免责声明的任何部分,请勿使用本数据集。
## Disclaimer
- SA-Med2D-20M is composed of multiple publicly available datasets and aims to provide a resource for academic and technical research to researchers and developers. Any individual or organization (hereinafter referred to as "User") using this dataset must comply with the following disclaimer:
1. Dataset Source: SA-Med2D-20M is composed of multiple publicly available datasets, and the sources of these datasets have been clearly indicated in the preprint paper. Users should adhere to the relevant licenses and terms of use of the original datasets.
2. Data Accuracy: While efforts have been made to ensure the accuracy and completeness of the dataset, no guarantee can be given regarding its accuracy. Users assume all risks and liabilities associated with the use of the dataset.
3. Limitation of Liability: Under no circumstances shall the dataset providers or contributors be held liable for any actions or outcomes of the Users.
4. Usage Constraints: Users must comply with applicable laws, regulations, and ethical norms when using this dataset. The dataset must not be used for illegal, privacy-infringing, defamatory, discriminatory, or other unlawful or unethical purposes.
5. Intellectual Property: The intellectual property rights of this dataset belong to the relevant rights holders of the original datasets. Users must not infringe upon the intellectual property rights of the dataset in any way.
- As a non-profit organization, we advocate for a harmonious and friendly open-source communication environment. If any content in the open dataset is found to infringe upon your legitimate rights and interests, you can send an email to (yejin@URL, chengjunlong@URL) with a detailed description of the infringement and provide relevant ownership proof materials. We will initiate an investigation and handling mechanism within three working days and take necessary measures (such as removing relevant data) if warranted. However, the authenticity of your complaint must be ensured, as any adverse consequences resulting from the measures taken shall be borne solely by you.
- By downloading, copying, accessing, or using this dataset, the User indicates that they have read, understood, and agreed to comply with all the terms and conditions of this disclaimer. If the User cannot accept any part of this disclaimer, please refrain from using this dataset.
## Acknowledgement
- We thank all medical workers and dataset owners for making public datasets available to the community. If you find that your dataset is included in our SA-Med2D-20M but you do not want us to do so, please contact us to remove it.
## Hiring & Global Collaboration
- Hiring: We are hiring researchers, engineers, and interns in General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.
- Global Collaboration: We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.
- Contact: Junjun He(hejunjun@URL), Jin Ye(yejin@URL), and Tianbin Li (litianbin@URL).
## Typos of paper
1. Formula (1) is incorrect, after correction: <img src="https://i.URL alt="alt text" width="202" height="50">
## Reference
| [
"# SA-Med2D-20M\n\n!Image\n\nThe largest benchmark dataset for segmentation in the field of medical imaging.\n\nAs is well known, the emergence of ImageNet has greatly propelled the development of AI, especially deep learning. It has provided massive data and powerful baseline models for the computer vision community, enabling researchers to achieve breakthroughs in tasks such as natural image classification, segmentation, and detection. However, in the medical image realm, there lack of such a large dataset for developing powerful medical models.\n\nTo address the gap in the medical field, we are introducing the largest benchmark dataset for medical image segmentation. This initiative aims to drive the rapid development of AI in healthcare and accelerate the transformation of computational medicine towards a more inclusive direction.\n\nPlease visit the GitHub page and further exploit the dataset!\n\nDue to data privacy and ethical requirements, we currently only provide access to a 16M dataset. We will keep updating and maintaining this database. Please stay tuned for further updates from us.",
"## Filesystem Hierarchy\n\nThe SA-Med2D-20M dataset is named following the convention below:\n\nNote: \"sub-modality\" applies only to 3D data, and when \"sub-modality\" is \"00,\" it indicates either the absence of a sub-modality or an unknown sub-modality type. \"dataset name\" refers to the specific dataset name that the case is from. \"ori name\" is the original case name in its dataset. \"dimension slice\", e.g., \"x_100\", indicates the dimension along which we split a 3D case as well as the slice ID in this dimension. If we split a 3D case with axis x and the current slice is 100, then the term can be \"x_0100\". For 2D datasets, the \"dimension_slice id\" is uniformly set to \"2d_none\". \"class instance_id\", unique to masks, encapsulates both category information and instance id, and the detailed information is stored in the \"SAMed2D_v1_class_mapping_id.json\" file. For instance, if the category \"liver\" is assigned the ID \"0003\" and there is only one instance of this category in the case, the \"class instance_id\" can be denoted as \"0003_000\". Besides, the category \"liver\" in the \"SAMed2D_v1_class_mapping_id.json\" file is formulated as key-value pair with _python-dict_ format: \\{\"liver\": \"0003\"\\}.\n\nThe file \"SAMed2D_v1_class_mapping_id.json\" stores the information for converting class instances. The file \"SAMed2D_v1.json\" contains the path information for all image and mask pairs.",
"## Unzipping split zip files\nWindows:\n\n decompress URL to automatically extract the other volumes together.\n\nLinux: \n\n 1. zip URL SA-Med2D-16M.z0* SA-Med2D-16M.z10 -s=0 --out {full}.zip\n \n 2. unzip {full}.zip",
"## 免责声明\n- SA-Med2D-20M是由多个公开的数据集组成,旨在取之于社区,回馈于社区,为研究人员和开发者提供一个用于学术和技术研究的资源。使用本数据集的任何个人或组织(以下统称为“使用者”)需遵守以下免责声明:\n1. 数据集来源:本数据集由多个公开的数据集组成,这些数据集的来源已在预印版论文中明确标明。使用者应当遵守原始数据集的相关许可和使用条款。\n2. 数据准确性:尽管我们已经努力确保数据集的准确性和完整性,但无法对数据集的准确性作出保证。使用者应自行承担使用数据集可能带来的风险和责任。\n3. 责任限制:在任何情况下,数据集的提供者及相关贡献者均不对使用者的任何行为或结果承担责任。\n4. 使用约束:使用者在使用本数据集时,应遵守适用的法律法规和伦理规范。使用者不得将本数据集用于非法、侵犯隐私、诽谤、歧视或其他违法或不道德的目的。\n5. 知识产权:本数据集的知识产权归原始数据集的相关权利人所有,使用者不得以任何方式侵犯数据集的知识产权。\n\n- 作为非盈利机构,团队倡导和谐友好的开源交流环境,若在开源数据集内发现有侵犯您合法权益的内容,可发送邮件至(yejin@URL, chengjunlong@URL),邮件中请写明侵权相关事实的详细描述并向我们提供相关的权属证明资料。我们将于3个工作日内启动调查处理机制,并采取必要的措施进行处置(如下架相关数据)。但应确保您投诉的真实性,否则采取措施后所产生的不利后果应由您独立承担。\n\n- 通过下载、复制、访问或使用本数据集,即表示使用者已阅读、理解并同意遵守本免责声明中的所有条款和条件。如果使用者无法接受本免责声明的任何部分,请勿使用本数据集。",
"## Disclaimer\n- SA-Med2D-20M is composed of multiple publicly available datasets and aims to provide a resource for academic and technical research to researchers and developers. Any individual or organization (hereinafter referred to as \"User\") using this dataset must comply with the following disclaimer:\n1. Dataset Source: SA-Med2D-20M is composed of multiple publicly available datasets, and the sources of these datasets have been clearly indicated in the preprint paper. Users should adhere to the relevant licenses and terms of use of the original datasets.\n2. Data Accuracy: While efforts have been made to ensure the accuracy and completeness of the dataset, no guarantee can be given regarding its accuracy. Users assume all risks and liabilities associated with the use of the dataset.\n3. Limitation of Liability: Under no circumstances shall the dataset providers or contributors be held liable for any actions or outcomes of the Users.\n4. Usage Constraints: Users must comply with applicable laws, regulations, and ethical norms when using this dataset. The dataset must not be used for illegal, privacy-infringing, defamatory, discriminatory, or other unlawful or unethical purposes.\n5. Intellectual Property: The intellectual property rights of this dataset belong to the relevant rights holders of the original datasets. Users must not infringe upon the intellectual property rights of the dataset in any way.\n\n- As a non-profit organization, we advocate for a harmonious and friendly open-source communication environment. If any content in the open dataset is found to infringe upon your legitimate rights and interests, you can send an email to (yejin@URL, chengjunlong@URL) with a detailed description of the infringement and provide relevant ownership proof materials. We will initiate an investigation and handling mechanism within three working days and take necessary measures (such as removing relevant data) if warranted. However, the authenticity of your complaint must be ensured, as any adverse consequences resulting from the measures taken shall be borne solely by you.\n\n- By downloading, copying, accessing, or using this dataset, the User indicates that they have read, understood, and agreed to comply with all the terms and conditions of this disclaimer. If the User cannot accept any part of this disclaimer, please refrain from using this dataset.",
"## Acknowledgement\n- We thank all medical workers and dataset owners for making public datasets available to the community. If you find that your dataset is included in our SA-Med2D-20M but you do not want us to do so, please contact us to remove it.",
"## Hiring & Global Collaboration\n- Hiring: We are hiring researchers, engineers, and interns in General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.\n- Global Collaboration: We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.\n- Contact: Junjun He(hejunjun@URL), Jin Ye(yejin@URL), and Tianbin Li (litianbin@URL).",
"## Typos of paper\n1. Formula (1) is incorrect, after correction: <img src=\"https://i.URL alt=\"alt text\" width=\"202\" height=\"50\">",
"## Reference"
]
| [
"TAGS\n#license-cc-by-nc-sa-4.0 #arxiv-2311.11969 #arxiv-2308.16184 #region-us \n",
"# SA-Med2D-20M\n\n!Image\n\nThe largest benchmark dataset for segmentation in the field of medical imaging.\n\nAs is well known, the emergence of ImageNet has greatly propelled the development of AI, especially deep learning. It has provided massive data and powerful baseline models for the computer vision community, enabling researchers to achieve breakthroughs in tasks such as natural image classification, segmentation, and detection. However, in the medical image realm, there lack of such a large dataset for developing powerful medical models.\n\nTo address the gap in the medical field, we are introducing the largest benchmark dataset for medical image segmentation. This initiative aims to drive the rapid development of AI in healthcare and accelerate the transformation of computational medicine towards a more inclusive direction.\n\nPlease visit the GitHub page and further exploit the dataset!\n\nDue to data privacy and ethical requirements, we currently only provide access to a 16M dataset. We will keep updating and maintaining this database. Please stay tuned for further updates from us.",
"## Filesystem Hierarchy\n\nThe SA-Med2D-20M dataset is named following the convention below:\n\nNote: \"sub-modality\" applies only to 3D data, and when \"sub-modality\" is \"00,\" it indicates either the absence of a sub-modality or an unknown sub-modality type. \"dataset name\" refers to the specific dataset name that the case is from. \"ori name\" is the original case name in its dataset. \"dimension slice\", e.g., \"x_100\", indicates the dimension along which we split a 3D case as well as the slice ID in this dimension. If we split a 3D case with axis x and the current slice is 100, then the term can be \"x_0100\". For 2D datasets, the \"dimension_slice id\" is uniformly set to \"2d_none\". \"class instance_id\", unique to masks, encapsulates both category information and instance id, and the detailed information is stored in the \"SAMed2D_v1_class_mapping_id.json\" file. For instance, if the category \"liver\" is assigned the ID \"0003\" and there is only one instance of this category in the case, the \"class instance_id\" can be denoted as \"0003_000\". Besides, the category \"liver\" in the \"SAMed2D_v1_class_mapping_id.json\" file is formulated as key-value pair with _python-dict_ format: \\{\"liver\": \"0003\"\\}.\n\nThe file \"SAMed2D_v1_class_mapping_id.json\" stores the information for converting class instances. The file \"SAMed2D_v1.json\" contains the path information for all image and mask pairs.",
"## Unzipping split zip files\nWindows:\n\n decompress URL to automatically extract the other volumes together.\n\nLinux: \n\n 1. zip URL SA-Med2D-16M.z0* SA-Med2D-16M.z10 -s=0 --out {full}.zip\n \n 2. unzip {full}.zip",
"## 免责声明\n- SA-Med2D-20M是由多个公开的数据集组成,旨在取之于社区,回馈于社区,为研究人员和开发者提供一个用于学术和技术研究的资源。使用本数据集的任何个人或组织(以下统称为“使用者”)需遵守以下免责声明:\n1. 数据集来源:本数据集由多个公开的数据集组成,这些数据集的来源已在预印版论文中明确标明。使用者应当遵守原始数据集的相关许可和使用条款。\n2. 数据准确性:尽管我们已经努力确保数据集的准确性和完整性,但无法对数据集的准确性作出保证。使用者应自行承担使用数据集可能带来的风险和责任。\n3. 责任限制:在任何情况下,数据集的提供者及相关贡献者均不对使用者的任何行为或结果承担责任。\n4. 使用约束:使用者在使用本数据集时,应遵守适用的法律法规和伦理规范。使用者不得将本数据集用于非法、侵犯隐私、诽谤、歧视或其他违法或不道德的目的。\n5. 知识产权:本数据集的知识产权归原始数据集的相关权利人所有,使用者不得以任何方式侵犯数据集的知识产权。\n\n- 作为非盈利机构,团队倡导和谐友好的开源交流环境,若在开源数据集内发现有侵犯您合法权益的内容,可发送邮件至(yejin@URL, chengjunlong@URL),邮件中请写明侵权相关事实的详细描述并向我们提供相关的权属证明资料。我们将于3个工作日内启动调查处理机制,并采取必要的措施进行处置(如下架相关数据)。但应确保您投诉的真实性,否则采取措施后所产生的不利后果应由您独立承担。\n\n- 通过下载、复制、访问或使用本数据集,即表示使用者已阅读、理解并同意遵守本免责声明中的所有条款和条件。如果使用者无法接受本免责声明的任何部分,请勿使用本数据集。",
"## Disclaimer\n- SA-Med2D-20M is composed of multiple publicly available datasets and aims to provide a resource for academic and technical research to researchers and developers. Any individual or organization (hereinafter referred to as \"User\") using this dataset must comply with the following disclaimer:\n1. Dataset Source: SA-Med2D-20M is composed of multiple publicly available datasets, and the sources of these datasets have been clearly indicated in the preprint paper. Users should adhere to the relevant licenses and terms of use of the original datasets.\n2. Data Accuracy: While efforts have been made to ensure the accuracy and completeness of the dataset, no guarantee can be given regarding its accuracy. Users assume all risks and liabilities associated with the use of the dataset.\n3. Limitation of Liability: Under no circumstances shall the dataset providers or contributors be held liable for any actions or outcomes of the Users.\n4. Usage Constraints: Users must comply with applicable laws, regulations, and ethical norms when using this dataset. The dataset must not be used for illegal, privacy-infringing, defamatory, discriminatory, or other unlawful or unethical purposes.\n5. Intellectual Property: The intellectual property rights of this dataset belong to the relevant rights holders of the original datasets. Users must not infringe upon the intellectual property rights of the dataset in any way.\n\n- As a non-profit organization, we advocate for a harmonious and friendly open-source communication environment. If any content in the open dataset is found to infringe upon your legitimate rights and interests, you can send an email to (yejin@URL, chengjunlong@URL) with a detailed description of the infringement and provide relevant ownership proof materials. We will initiate an investigation and handling mechanism within three working days and take necessary measures (such as removing relevant data) if warranted. However, the authenticity of your complaint must be ensured, as any adverse consequences resulting from the measures taken shall be borne solely by you.\n\n- By downloading, copying, accessing, or using this dataset, the User indicates that they have read, understood, and agreed to comply with all the terms and conditions of this disclaimer. If the User cannot accept any part of this disclaimer, please refrain from using this dataset.",
"## Acknowledgement\n- We thank all medical workers and dataset owners for making public datasets available to the community. If you find that your dataset is included in our SA-Med2D-20M but you do not want us to do so, please contact us to remove it.",
"## Hiring & Global Collaboration\n- Hiring: We are hiring researchers, engineers, and interns in General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.\n- Global Collaboration: We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.\n- Contact: Junjun He(hejunjun@URL), Jin Ye(yejin@URL), and Tianbin Li (litianbin@URL).",
"## Typos of paper\n1. Formula (1) is incorrect, after correction: <img src=\"https://i.URL alt=\"alt text\" width=\"202\" height=\"50\">",
"## Reference"
]
| [
35,
228,
430,
66,
424,
550,
61,
172,
40,
2
]
| [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #arxiv-2311.11969 #arxiv-2308.16184 #region-us \n# SA-Med2D-20M\n\n!Image\n\nThe largest benchmark dataset for segmentation in the field of medical imaging.\n\nAs is well known, the emergence of ImageNet has greatly propelled the development of AI, especially deep learning. It has provided massive data and powerful baseline models for the computer vision community, enabling researchers to achieve breakthroughs in tasks such as natural image classification, segmentation, and detection. However, in the medical image realm, there lack of such a large dataset for developing powerful medical models.\n\nTo address the gap in the medical field, we are introducing the largest benchmark dataset for medical image segmentation. This initiative aims to drive the rapid development of AI in healthcare and accelerate the transformation of computational medicine towards a more inclusive direction.\n\nPlease visit the GitHub page and further exploit the dataset!\n\nDue to data privacy and ethical requirements, we currently only provide access to a 16M dataset. We will keep updating and maintaining this database. Please stay tuned for further updates from us.",
"passage: ## Filesystem Hierarchy\n\nThe SA-Med2D-20M dataset is named following the convention below:\n\nNote: \"sub-modality\" applies only to 3D data, and when \"sub-modality\" is \"00,\" it indicates either the absence of a sub-modality or an unknown sub-modality type. \"dataset name\" refers to the specific dataset name that the case is from. \"ori name\" is the original case name in its dataset. \"dimension slice\", e.g., \"x_100\", indicates the dimension along which we split a 3D case as well as the slice ID in this dimension. If we split a 3D case with axis x and the current slice is 100, then the term can be \"x_0100\". For 2D datasets, the \"dimension_slice id\" is uniformly set to \"2d_none\". \"class instance_id\", unique to masks, encapsulates both category information and instance id, and the detailed information is stored in the \"SAMed2D_v1_class_mapping_id.json\" file. For instance, if the category \"liver\" is assigned the ID \"0003\" and there is only one instance of this category in the case, the \"class instance_id\" can be denoted as \"0003_000\". Besides, the category \"liver\" in the \"SAMed2D_v1_class_mapping_id.json\" file is formulated as key-value pair with _python-dict_ format: \\{\"liver\": \"0003\"\\}.\n\nThe file \"SAMed2D_v1_class_mapping_id.json\" stores the information for converting class instances. The file \"SAMed2D_v1.json\" contains the path information for all image and mask pairs.## Unzipping split zip files\nWindows:\n\n decompress URL to automatically extract the other volumes together.\n\nLinux: \n\n 1. zip URL SA-Med2D-16M.z0* SA-Med2D-16M.z10 -s=0 --out {full}.zip\n \n 2. unzip {full}.zip## 免责声明\n- SA-Med2D-20M是由多个公开的数据集组成,旨在取之于社区,回馈于社区,为研究人员和开发者提供一个用于学术和技术研究的资源。使用本数据集的任何个人或组织(以下统称为“使用者”)需遵守以下免责声明:\n1. 数据集来源:本数据集由多个公开的数据集组成,这些数据集的来源已在预印版论文中明确标明。使用者应当遵守原始数据集的相关许可和使用条款。\n2. 数据准确性:尽管我们已经努力确保数据集的准确性和完整性,但无法对数据集的准确性作出保证。使用者应自行承担使用数据集可能带来的风险和责任。\n3. 责任限制:在任何情况下,数据集的提供者及相关贡献者均不对使用者的任何行为或结果承担责任。\n4. 使用约束:使用者在使用本数据集时,应遵守适用的法律法规和伦理规范。使用者不得将本数据集用于非法、侵犯隐私、诽谤、歧视或其他违法或不道德的目的。\n5. 知识产权:本数据集的知识产权归原始数据集的相关权利人所有,使用者不得以任何方式侵犯数据集的知识产权。\n\n- 作为非盈利机构,团队倡导和谐友好的开源交流环境,若在开源数据集内发现有侵犯您合法权益的内容,可发送邮件至(yejin@URL, chengjunlong@URL),邮件中请写明侵权相关事实的详细描述并向我们提供相关的权属证明资料。我们将于3个工作日内启动调查处理机制,并采取必要的措施进行处置(如下架相关数据)。但应确保您投诉的真实性,否则采取措施后所产生的不利后果应由您独立承担。\n\n- 通过下载、复制、访问或使用本数据集,即表示使用者已阅读、理解并同意遵守本免责声明中的所有条款和条件。如果使用者无法接受本免责声明的任何部分,请勿使用本数据集。"
]
|
c741c5b106bbeb7a151710e584e685d802fee1aa | # Dataset Card for "fake_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | andersonbcdefg/fake_dataset | [
"region:us"
]
| 2023-11-26T01:28:36+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 6240, "num_examples": 8}], "download_size": 5472, "dataset_size": 6240}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-26T02:59:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fake_dataset"
More Information needed | [
"# Dataset Card for \"fake_dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fake_dataset\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fake_dataset\"\n\nMore Information needed"
]
|
982d5682ba414ee13cf92cb93ec18fc8e78e2b81 | # PIE Dataset Card for "sciarg"
This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the SciArg dataset ([paper](https://aclanthology.org/W18-5206/) and [data repository](https://github.com/anlausch/sciarg_resource_analysis)). Since the SciArg dataset is published in the [BRAT standoff format](https://brat.nlplab.org/standoff.html), this dataset builder is based on the [PyTorch-IE brat dataset loading script](https://huggingface.co/datasets/pie/brat).
Therefore, the `sciarg` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).
### Dataset Summary
The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., [2015](https://aclanthology.org/W15-1605.pdf), [2016](https://aclanthology.org/L16-1492.pdf)) with an annotation layer containing
fine-grained argumentative components and relations, reflecting the view that argumentation needs to
be studied in combination with other rhetorical aspects. "It is the first publicly-available argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other
rhetorical dimensions of scientific writing" ([Lauscher et al., 2018](https://aclanthology.org/W18-5206/), pp. 40-41).
### Supported Tasks and Leaderboards
- **Tasks**: Argumentation Mining, Component Identification, Relation Identification
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English (scientific academic publications on computer graphics).
### Dataset Variants
The `sciarg` dataset comes in a single version (`default`) with `BratDocumentWithMergedSpans` as document type. Note,
that this in contrast to the base `brat` dataset, where the document type for the `default` variant is `BratDocument`.
The reason is that the SciArg dataset was published with spans that are just fragmented by whitespace which seems
to be because of the annotation tool used. In the `sciarg` dataset, we merge these fragments, so that the document type
can be `BratDocumentWithMergedSpans` (this is easier to handle for most of the task modules). However, fragmented
spans are conceptually also available in SciArg, but they are marked with the `parts_of_same` relation which are kept
as they are in the `sciarg` (`default`) dataset.
### Data Schema
See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).
### Usage
```python
from pie_datasets import load_dataset, builders
# load the dataset; the single default version uses BratDocumentWithMergedSpans
dataset = load_dataset("pie/sciarg")
doc = dataset["train"][0]
assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
```
### Document Converters
The dataset provides document converters for the following target document types:
- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
- `LabeledSpans`, converted from `BratDocument`'s `spans`
- labels: `background_claim`, `own_claim`, `data`
    - if `spans` contain whitespace at the beginning and/or the end, the whitespace is trimmed.
  - `BinaryRelations`, converted from `BratDocument`'s `relations`
- labels: `supports`, `contradicts`, `semantically_same`, `parts_of_same`
- if the `relations` label is `semantically_same` or `parts_of_same`, they are merged if they are the same arguments after sorting.
- `pytorch_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`
- `LabeledSpans`, as above
- `BinaryRelations`, as above
- `LabeledPartitions`, partitioned `BratDocument`'s `text`, according to the paragraph, using regex.
- labels: `title`, `abstract`, `H1`
See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
definitions.
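
As a rough sketch of how a converter can be applied (the `to_document_type` call and the layer names `labeled_spans` / `binary_relations` follow the usual PIE dataset pattern and are assumptions here, not taken from this card):

```python
from pie_datasets import load_dataset
from pytorch_ie.documents import TextDocumentWithLabeledSpansAndBinaryRelations

# load SciArg and convert it to the generic span/relation document type (sketch)
dataset = load_dataset("pie/sciarg")
converted = dataset.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)

doc = converted["train"][0]
# spans carry background_claim / own_claim / data labels;
# relations carry supports / contradicts / semantically_same / parts_of_same
print(doc.labeled_spans[0].label, doc.binary_relations[0].label)
```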
### Data Splits
The dataset consists of a single `train` split that has 40 documents.
For detailed statistics on the corpus, see Lauscher et al. ([2018](https://aclanthology.org/W18-5206/), p. 43), and the author's [resource analysis](https://github.com/anlausch/sciarg_resource_analysis).
### Label Descriptions
#### Components
| Components | Count | Percentage |
| ------------------ | ----: | ---------: |
| `background_claim` | 3291 | 24.2 % |
| `own_claim` | 6004 | 44.2 % |
| `data` | 4297 | 31.6 % |
- `own_claim` is an argumentative statement that closely relates to the authors’ own work.
- `background_claim` is an argumentative statement relating to the background of authors' work, e.g., about related work or common practices.
- `data` component represents a fact that serves as evidence for or against a claim. Note that references or (factual) examples can also serve as data.
(Lauscher et al. 2018, p.41; following and simplified [Toulmin, 2003](https://www.cambridge.org/core/books/uses-of-argument/26CF801BC12004587B66778297D5567C))
#### Relations
| Relations | Count | Percentage |
| -------------------------- | ----: | ---------: |
| support: `support` | 5791 | 74.0 % |
| attack: `contradict` | 697 | 8.9 % |
| other: `semantically_same` | 44 | 0.6 % |
| other: `parts_of_same` | 1298 | 16.6 % |
##### Argumentative relations
- `support`:
- if the assumed veracity of *b* increases with the veracity of *a*
- "Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible." - (*Annotation Guidelines*, p. 3)
- `contradict`:
- if the assumed veracity of *b* decreases with the veracity of *a*
- It is a **bi-directional**, i.e., symmetric relationship.
##### Non-argumentative relations
- `semantically_same`: between two mentions of effectively the same claim or data component. Can be seen as *argument coreference*, analogous to entity, and *event coreference*. This relation is considered symmetric (i.e., **bidirectional**) and non-argumentative.
(Lauscher et al. 2018, p.41; following [Dung, 1995](https://www.sciencedirect.com/science/article/pii/000437029400041X?via%3Dihub))
- `parts_of_same`: when a single component is split up in several parts. It is **non-argumentative**, **bidirectional**, but also **intra-component**
(*Annotation Guidelines*, pp. 4-6)
**Important note on label counts**:
There are currently discrepancies in label counts between
- previous report in [Lauscher et al., 2018](https://aclanthology.org/W18-5206/), p. 43),
- current report above here (labels counted in `BratDocument`'s);
possibly since [Lauscher et al., 2018](https://aclanthology.org/W18-5206/) presents the numbers of the real argumentative components, whereas here discontinuous components are still split (marked with the `parts_of_same` helper relation) and, thus, count per fragment.
## Dataset Creation
### Curation Rationale
"\[C\]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...\[A\]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the
discourse structure."
(Lauscher et al. 2018, p. 40)
### Source Data
#### Initial Data Collection and Normalization
"\[W\]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject." (Fisas et al. 2015, p. 44)
"The Corpus includes 10,789 sentences, with an average of 269.7 sentences per document." (p. 45)
#### Who are the source language producers?
It can be implied from the data source that the language producers were academics in computer graphics and related fields, possibly assisted by other human editors.
### Annotations
#### Annotation process
"We trained the four annotators in a calibration phase, consisting of five iterations, in each of which all annotators annotated one publication. After each iteration we computed the inter-annotator agreement (IAA), discussed the disagreements, and, if needed, adjourned the [annotation guidelines](https://data.dws.informatik.uni-mannheim.de/sci-arg/annotation_guidelines.pdf)."
The detailed evolution of IAA over the five calibration iterations is depicted in Lauscher et al. (2018), p. 42, Figure 1.
The annotation were done using BRAT Rapid Annotation Tool ([Stenetorp et al., 2012](https://aclanthology.org/E12-2021/)).
#### Who are the annotators?
"We hired one expert (a researcher in computational linguistics) and three non-expert annotators (humanities and social sciences scholars)." (Lauscher et al. 2018, p. 42)
### Personal and Sensitive Information
\[More Information Needed\]
## Considerations for Using the Data
### Social Impact of Dataset
"To support learning-based models for automated analysis of scientific publications, potentially leading to better understanding
of the different rhetorical aspects of scientific language (which we dub *scitorics*)." (Lauscher et al. 2018, p. 40)
"The resulting corpus... is, to the best of our knowledge, the first argument-annotated corpus of scientific publications in English, enables (1) computational analysis of argumentation in scientific writing and (2) integrated analysis of argumentation and other rhetorical aspects of scientific text." (Lauscher et al. 2018, p. 44)
### Discussion of Biases
"...not all claims are supported and secondly, claims can be supported by other claims. There are many more supports than contradicts relations."
"While the background claims and own claims are on average of similar length (85 and 87 characters, respectively), they are much longer than data components (average of 25 characters)."
"\[A\]nnotators identified an average of 141 connected component per publication...This indicates that either authors write very short argumentative chains or that our annotators had difficulties noticing long-range argumentative dependencies."
(Lauscher et al. 2018, p.43)
### Other Known Limitations
"Expectedly, we observe higher agreements with more calibration. The agreement on argumentative relations is 23% lower than on the components, which we think is due to the high ambiguity of argumentation structures."
"Additionally, disagreements in component identification are propagated to relations as well, since the agreement on a relation implies the agreement on annotated components at both ends of the relation."
(Lauscher et al. 2018, p. 43)
## Additional Information
### Dataset Curators
- **Repository:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci)
### Licensing Information
[MIT License](https://github.com/anlausch/ArguminSci/blob/master/LICENSE)
This research was partly funded by the German Research Foundation (DFG), grant number EC 477/5-1 (LOC-DB).
### Citation Information
```
@inproceedings{lauscher2018b,
title = {An argument-annotated corpus of scientific publications},
  booktitle = {Proceedings of the 5th Workshop on Argument Mining},
publisher = {Association for Computational Linguistics},
author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo},
address = {Brussels, Belgium},
year = {2018},
pages = {40–46}
}
```
```
@inproceedings{lauscher2018a,
title = {ArguminSci: A Tool for Analyzing Argumentation and Rhetorical Aspects in Scientific Writing},
  booktitle = {Proceedings of the 5th Workshop on Argument Mining},
publisher = {Association for Computational Linguistics},
author = {Lauscher, Anne and Glava\v{s}, Goran and Eckert, Kai},
address = {Brussels, Belgium},
year = {2018},
pages = {22–28}
}
```
### Contributions
Thanks to [@ArneBinder](https://github.com/ArneBinder) and [@idalr](https://github.com/idalr) for adding this dataset.
| pie/sciarg | [
"region:us"
]
| 2023-11-26T02:41:54+00:00 | {} | 2023-12-21T14:07:27+00:00 | []
| []
| TAGS
#region-us
| PIE Dataset Card for "sciarg"
=============================
This is a PyTorch-IE wrapper for the SciArg dataset (paper and data repository). Since the SciArg dataset is published in the BRAT standoff format, this dataset builder is based on the PyTorch-IE brat dataset loading script.
Therefore, the 'sciarg' dataset as described here follows the data structure from the PIE brat dataset card.
### Dataset Summary
The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing
fine-grained argumentative components and relations, believing that argumentation needs to
be studied in combination with other rhetorical aspects. It is the first publicly-available argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other
rhetorical dimensions of scientific writing." (Lauscher et al., 2018>), pp. 40-41)
### Supported Tasks and Leaderboards
* Tasks: Argumentation Mining, Component Identification, Relation Identification
* Leaderboard:
### Languages
The language in the dataset is English (scientific academic publications on computer graphics).
### Dataset Variants
The 'sciarg' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,
that this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.
The reason is that the SciArg dataset was published with spans that are just fragmented by whitespace which seems
to be because of the annotation tool used. In the 'sciarg' dataset, we merge these fragments, so that the document type
can be 'BratDocumentWithMergedSpans' (this is easier to handle for most of the task modules). However, fragmented
spans are conceptually also available in SciArg, but they are marked with the 'parts\_of\_same' relation which are kept
as they are in the 'sciarg' ('default') dataset.
### Data Schema
See PIE-Brat Data Schema.
### Usage
### Document Converters
The dataset provides document converters for the following target document types:
* 'pytorch\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'
+ 'LabeledSpans', converted from 'BratDocument''s 'spans'
- labels: 'background\_claim', 'own\_claim', 'data'
- if 'spans' contain whitespace at the beginning and/or the end, the whitespace are trimmed out.
+ 'BinraryRelations', converted from 'BratDocument''s 'relations'
- labels: 'supports', 'contradicts', 'semantically\_same', 'parts\_of\_same'
- if the 'relations' label is 'semantically\_same' or 'parts\_of\_same', they are merged if they are the same arguments after sorting.
* 'pytorch\_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions'
+ 'LabeledSpans', as above
+ 'BinaryRelations', as above
+ 'LabeledPartitions', partitioned 'BratDocument''s 'text', according to the paragraph, using regex.
- labels: 'title', 'abstract', 'H1'
See here for the document type
definitions.
### Data Splits
The dataset consists of a single 'train' split that has 40 documents.
For detailed statistics on the corpus, see Lauscher et al. (2018>), p. 43), and the author's resource analysis.
### Label Descriptions
#### Components
* 'own\_claim' is an argumentative statement that closely relates to the authors’ own work.
* 'background\_claim' an argumentative statement relating to the background of authors’ work, e.g., about related work or common practices.
* 'data' component represents a fact that serves as evidence for or against a claim. Note that references or (factual) examples can also serve as data.
(Lauscher et al. 2018, p.41; following and simplified Toulmin, 2003)
#### Relations
##### Argumentative relations
* 'support':
+ if the assumed veracity of *b* increases with the veracity of *a*
+ "Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible." - (*Annotation Guidelines*, p. 3)
* 'contradict':
+ if the assumed veracity of *b* decreases with the veracity of *a*
+ It is a bi-directional, i.e., symmetric relationship.
##### Non-argumentative relations
* 'semantically\_same': between two mentions of effectively the same claim or data component. Can be seen as *argument coreference*, analogous to entity, and *event coreference*. This relation is considered symmetric (i.e., bidirectional) and non-argumentative.
(Lauscher et al. 2018, p.41; following Dung, 1995)
* 'parts\_of\_same': when a single component is split up in several parts. It is non-argumentative, bidirectional, but also intra-component
(*Annotation Guidelines*, pp. 4-6)
Important note on label counts:
There are currently discrepancies in label counts between
* previous report in Lauscher et al., 2018, p. 43),
* current report above here (labels counted in 'BratDocument''s);
possibly since Lauscher et al., 2018 presents the numbers of the real argumentative components, whereas here discontinuous components are still split (marked with the 'parts\_of\_same' helper relation) and, thus, count per fragment.
Dataset Creation
----------------
### Curation Rationale
"[C]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...[A]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the
discourse structure.
(Lauscher et al. 2018, p. 40)
### Source Data
#### Initial Data Collection and Normalization
"[W]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject." (Fisas et al. 2015, p. 44)
"The Corpus includes 10,789 sentences, with an average of 269.7 sentences per document." (p. 45)
#### Who are the source language producers?
It can be implied from the data source that the language producers were academics in computer graphics and related fields, possibly assisted by other human editors.
### Annotations
#### Annotation process
"We trained the four annotators in a calibration phase, consisting of five iterations, in each of which all annotators annotated one publication. After each iteration we computed the inter-annotator agreement (IAA), discussed the disagreements, and, if needed, adjourned the annotation guidelines."
The detailed evolution of IAA over the five calibration iterations is depicted in Lauscher et al. (2018), p. 42, Figure 1.
The annotation were done using BRAT Rapid Annotation Tool (Stenetorp et al., 2012).
#### Who are the annotators?
"We hired one expert (a researcher in computational linguistics) and three non-expert annotators (humanities and social sciences scholars)." (Lauscher et al. 2018, p. 42)
### Personal and Sensitive Information
\]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
"To support learning-based models for automated analysis of scientific publications, potentially leading to better understanding
of the different rhetorical aspects of scientific language (which we dub *scitorics*)." (Lauscher et al. 2018, p. 40)
"The resulting corpus... is, to the best of our knowledge, the first argument-annotated corpus of scientific publications in English, enables (1) computational analysis of argumentation in scientific writing and (2) integrated analysis of argumentation and other rhetorical aspects of scientific text." (Lauscher et al. 2018, p. 44)
### Discussion of Biases
"...not all claims are supported and secondly, claims can be supported by other claims. There are many more supports than contradicts relations."
"While the background claims and own claims are on average of similar length (85 and 87 characters, respectively), they are much longer than data components (average of 25 characters)."
"[A]nnotators identified an average of 141 connected component per publication...This indicates that either authors write very short argumentative chains or that our annotators had difficulties noticing long-range argumentative dependencies."
(Lauscher et al. 2018, p.43)
### Other Known Limitations
"Expectedly, we observe higher agreements with more calibration. The agreement on argumentative relations is 23% lower than on the components, which we think is due to the high ambiguity of argumentation structures."
"Additionally, disagreements in component identification are propagated to relations as well, since the agreement on a relation implies the agreement on annotated components at both ends of the relation."
(Lauscher et al. 2018, p. 43)
Additional Information
----------------------
### Dataset Curators
* Repository: URL
### Licensing Information
MIT License
This research was partly funded by the German Research Foundation (DFG), grant number EC 477/5-1 (LOC-DB).
### Contributions
Thanks to @ArneBinder and @idalr for adding this dataset.
| [
"### Dataset Summary\n\n\nThe SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing\nfine-grained argumentative components and relations, believing that argumentation needs to\nbe studied in combination with other rhetorical aspects. It is the first publicly-available argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other\nrhetorical dimensions of scientific writing.\" (Lauscher et al., 2018>), pp. 40-41)",
"### Supported Tasks and Leaderboards\n\n\n* Tasks: Argumentation Mining, Component Identification, Relation Identification\n* Leaderboard:",
"### Languages\n\n\nThe language in the dataset is English (scientific academic publications on computer graphics).",
"### Dataset Variants\n\n\nThe 'sciarg' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,\nthat this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.\nThe reason is that the SciArg dataset was published with spans that are just fragmented by whitespace which seems\nto be because of the annotation tool used. In the 'sciarg' dataset, we merge these fragments, so that the document type\ncan be 'BratDocumentWithMergedSpans' (this is easier to handle for most of the task modules). However, fragmented\nspans are conceptually also available in SciArg, but they are marked with the 'parts\\_of\\_same' relation which are kept\nas they are in the 'sciarg' ('default') dataset.",
"### Data Schema\n\n\nSee PIE-Brat Data Schema.",
"### Usage",
"### Document Converters\n\n\nThe dataset provides document converters for the following target document types:\n\n\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'\n\t+ 'LabeledSpans', converted from 'BratDocument''s 'spans'\n\t\t- labels: 'background\\_claim', 'own\\_claim', 'data'\n\t\t- if 'spans' contain whitespace at the beginning and/or the end, the whitespace are trimmed out.\n\t+ 'BinraryRelations', converted from 'BratDocument''s 'relations'\n\t\t- labels: 'supports', 'contradicts', 'semantically\\_same', 'parts\\_of\\_same'\n\t\t- if the 'relations' label is 'semantically\\_same' or 'parts\\_of\\_same', they are merged if they are the same arguments after sorting.\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions'\n\t+ 'LabeledSpans', as above\n\t+ 'BinaryRelations', as above\n\t+ 'LabeledPartitions', partitioned 'BratDocument''s 'text', according to the paragraph, using regex.\n\t\t- labels: 'title', 'abstract', 'H1'\n\n\nSee here for the document type\ndefinitions.",
"### Data Splits\n\n\nThe dataset consists of a single 'train' split that has 40 documents.\n\n\nFor detailed statistics on the corpus, see Lauscher et al. (2018>), p. 43), and the author's resource analysis.",
"### Label Descriptions",
"#### Components\n\n\n\n* 'own\\_claim' is an argumentative statement that closely relates to the authors’ own work.\n* 'background\\_claim' an argumentative statement relating to the background of authors’ work, e.g., about related work or common practices.\n* 'data' component represents a fact that serves as evidence for or against a claim. Note that references or (factual) examples can also serve as data.\n(Lauscher et al. 2018, p.41; following and simplified Toulmin, 2003)",
"#### Relations",
"##### Argumentative relations\n\n\n* 'support':\n\t+ if the assumed veracity of *b* increases with the veracity of *a*\n\t+ \"Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible.\" - (*Annotation Guidelines*, p. 3)\n* 'contradict':\n\t+ if the assumed veracity of *b* decreases with the veracity of *a*\n\t+ It is a bi-directional, i.e., symmetric relationship.",
"##### Non-argumentative relations\n\n\n* 'semantically\\_same': between two mentions of effectively the same claim or data component. Can be seen as *argument coreference*, analogous to entity, and *event coreference*. This relation is considered symmetric (i.e., bidirectional) and non-argumentative.\n(Lauscher et al. 2018, p.41; following Dung, 1995)\n* 'parts\\_of\\_same': when a single component is split up in several parts. It is non-argumentative, bidirectional, but also intra-component\n\n\n(*Annotation Guidelines*, pp. 4-6)\n\n\nImportant note on label counts:\n\n\nThere are currently discrepancies in label counts between\n\n\n* previous report in Lauscher et al., 2018, p. 43),\n* current report above here (labels counted in 'BratDocument''s);\n\n\npossibly since Lauscher et al., 2018 presents the numbers of the real argumentative components, whereas here discontinuous components are still split (marked with the 'parts\\_of\\_same' helper relation) and, thus, count per fragment.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n\"[C]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...[A]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the\ndiscourse structure.\n(Lauscher et al. 2018, p. 40)",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n\"[W]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject.\" (Fisas et al. 2015, p. 44)\n\n\n\"The Corpus includes 10,789 sentences, with an average of 269.7 sentences per document.\" (p. 45)",
"#### Who are the source language producers?\n\n\nIt can be implied from the data source that the language producers were academics in computer graphics and related fields, possibly assisted by other human editors.",
"### Annotations",
"#### Annotation process\n\n\n\"We trained the four annotators in a calibration phase, consisting of five iterations, in each of which all annotators annotated one publication. After each iteration we computed the inter-annotator agreement (IAA), discussed the disagreements, and, if needed, adjourned the annotation guidelines.\"\n\n\nThe detailed evolution of IAA over the five calibration iterations is depicted in Lauscher et al. (2018), p. 42, Figure 1.\n\n\nThe annotation were done using BRAT Rapid Annotation Tool (Stenetorp et al., 2012).",
"#### Who are the annotators?\n\n\n\"We hired one expert (a researcher in computational linguistics) and three non-expert annotators (humanities and social sciences scholars).\" (Lauscher et al. 2018, p. 42)",
"### Personal and Sensitive Information\n\n\n\\]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n\"To support learning-based models for automated analysis of scientific publications, potentially leading to better understanding\nof the different rhetorical aspects of scientific language (which we dub *scitorics*).\" (Lauscher et al. 2018, p. 40)\n\n\n\"The resulting corpus... is, to the best of our knowledge, the first argument-annotated corpus of scientific publications in English, enables (1) computational analysis of argumentation in scientific writing and (2) integrated analysis of argumentation and other rhetorical aspects of scientific text.\" (Lauscher et al. 2018, p. 44)",
"### Discussion of Biases\n\n\n\"...not all claims are supported and secondly, claims can be supported by other claims. There are many more supports than contradicts relations.\"\n\n\n\"While the background claims and own claims are on average of similar length (85 and 87 characters, respectively), they are much longer than data components (average of 25 characters).\"\n\n\n\"[A]nnotators identified an average of 141 connected component per publication...This indicates that either authors write very short argumentative chains or that our annotators had difficulties noticing long-range argumentative dependencies.\"\n\n\n(Lauscher et al. 2018, p.43)",
"### Other Known Limitations\n\n\n\"Expectedly, we observe higher agreements with more calibration. The agreement on argumentative relations is 23% lower than on the components, which we think is due to the high ambiguity of argumentation structures.\"\n\n\n\"Additionally, disagreements in component identification are propagated to relations as well, since the agreement on a relation implies the agreement on annotated components at both ends of the relation.\"\n\n\n(Lauscher et al. 2018, p. 43)\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Repository: URL",
"### Licensing Information\n\n\nMIT License\n\n\nThis research was partly funded by the German Research Foundation (DFG), grant number EC 477/5-1 (LOC-DB).",
"### Contributions\n\n\nThanks to @ArneBinder and @idalr for adding this dataset."
]
| [
"TAGS\n#region-us \n",
"### Dataset Summary\n\n\nThe SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing\nfine-grained argumentative components and relations, believing that argumentation needs to\nbe studied in combination with other rhetorical aspects. It is the first publicly-available argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other\nrhetorical dimensions of scientific writing.\" (Lauscher et al., 2018>), pp. 40-41)",
"### Supported Tasks and Leaderboards\n\n\n* Tasks: Argumentation Mining, Component Identification, Relation Identification\n* Leaderboard:",
"### Languages\n\n\nThe language in the dataset is English (scientific academic publications on computer graphics).",
"### Dataset Variants\n\n\nThe 'sciarg' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,\nthat this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.\nThe reason is that the SciArg dataset was published with spans that are just fragmented by whitespace which seems\nto be because of the annotation tool used. In the 'sciarg' dataset, we merge these fragments, so that the document type\ncan be 'BratDocumentWithMergedSpans' (this is easier to handle for most of the task modules). However, fragmented\nspans are conceptually also available in SciArg, but they are marked with the 'parts\\_of\\_same' relation which are kept\nas they are in the 'sciarg' ('default') dataset.",
"### Data Schema\n\n\nSee PIE-Brat Data Schema.",
"### Usage",
"### Document Converters\n\n\nThe dataset provides document converters for the following target document types:\n\n\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'\n\t+ 'LabeledSpans', converted from 'BratDocument''s 'spans'\n\t\t- labels: 'background\\_claim', 'own\\_claim', 'data'\n\t\t- if 'spans' contain whitespace at the beginning and/or the end, the whitespace are trimmed out.\n\t+ 'BinraryRelations', converted from 'BratDocument''s 'relations'\n\t\t- labels: 'supports', 'contradicts', 'semantically\\_same', 'parts\\_of\\_same'\n\t\t- if the 'relations' label is 'semantically\\_same' or 'parts\\_of\\_same', they are merged if they are the same arguments after sorting.\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions'\n\t+ 'LabeledSpans', as above\n\t+ 'BinaryRelations', as above\n\t+ 'LabeledPartitions', partitioned 'BratDocument''s 'text', according to the paragraph, using regex.\n\t\t- labels: 'title', 'abstract', 'H1'\n\n\nSee here for the document type\ndefinitions.",
"### Data Splits\n\n\nThe dataset consists of a single 'train' split that has 40 documents.\n\n\nFor detailed statistics on the corpus, see Lauscher et al. (2018>), p. 43), and the author's resource analysis.",
"### Label Descriptions",
"#### Components\n\n\n\n* 'own\\_claim' is an argumentative statement that closely relates to the authors’ own work.\n* 'background\\_claim' an argumentative statement relating to the background of authors’ work, e.g., about related work or common practices.\n* 'data' component represents a fact that serves as evidence for or against a claim. Note that references or (factual) examples can also serve as data.\n(Lauscher et al. 2018, p.41; following and simplified Toulmin, 2003)",
"#### Relations",
"##### Argumentative relations\n\n\n* 'support':\n\t+ if the assumed veracity of *b* increases with the veracity of *a*\n\t+ \"Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible.\" - (*Annotation Guidelines*, p. 3)\n* 'contradict':\n\t+ if the assumed veracity of *b* decreases with the veracity of *a*\n\t+ It is a bi-directional, i.e., symmetric relationship.",
"##### Non-argumentative relations\n\n\n* 'semantically\\_same': between two mentions of effectively the same claim or data component. Can be seen as *argument coreference*, analogous to entity, and *event coreference*. This relation is considered symmetric (i.e., bidirectional) and non-argumentative.\n(Lauscher et al. 2018, p.41; following Dung, 1995)\n* 'parts\\_of\\_same': when a single component is split up in several parts. It is non-argumentative, bidirectional, but also intra-component\n\n\n(*Annotation Guidelines*, pp. 4-6)\n\n\nImportant note on label counts:\n\n\nThere are currently discrepancies in label counts between\n\n\n* previous report in Lauscher et al., 2018, p. 43),\n* current report above here (labels counted in 'BratDocument''s);\n\n\npossibly since Lauscher et al., 2018 presents the numbers of the real argumentative components, whereas here discontinuous components are still split (marked with the 'parts\\_of\\_same' helper relation) and, thus, count per fragment.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n\"[C]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...[A]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the\ndiscourse structure.\n(Lauscher et al. 2018, p. 40)",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n\"[W]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject.\" (Fisas et al. 2015, p. 44)\n\n\n\"The Corpus includes 10,789 sentences, with an average of 269.7 sentences per document.\" (p. 45)",
"#### Who are the source language producers?\n\n\nIt can be implied from the data source that the language producers were academics in computer graphics and related fields, possibly assisted by other human editors.",
"### Annotations",
"#### Annotation process\n\n\n\"We trained the four annotators in a calibration phase, consisting of five iterations, in each of which all annotators annotated one publication. After each iteration we computed the inter-annotator agreement (IAA), discussed the disagreements, and, if needed, adjourned the annotation guidelines.\"\n\n\nThe detailed evolution of IAA over the five calibration iterations is depicted in Lauscher et al. (2018), p. 42, Figure 1.\n\n\nThe annotation were done using BRAT Rapid Annotation Tool (Stenetorp et al., 2012).",
"#### Who are the annotators?\n\n\n\"We hired one expert (a researcher in computational linguistics) and three non-expert annotators (humanities and social sciences scholars).\" (Lauscher et al. 2018, p. 42)",
"### Personal and Sensitive Information\n\n\n\\]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n\"To support learning-based models for automated analysis of scientific publications, potentially leading to better understanding\nof the different rhetorical aspects of scientific language (which we dub *scitorics*).\" (Lauscher et al. 2018, p. 40)\n\n\n\"The resulting corpus... is, to the best of our knowledge, the first argument-annotated corpus of scientific publications in English, enables (1) computational analysis of argumentation in scientific writing and (2) integrated analysis of argumentation and other rhetorical aspects of scientific text.\" (Lauscher et al. 2018, p. 44)",
"### Discussion of Biases\n\n\n\"...not all claims are supported and secondly, claims can be supported by other claims. There are many more supports than contradicts relations.\"\n\n\n\"While the background claims and own claims are on average of similar length (85 and 87 characters, respectively), they are much longer than data components (average of 25 characters).\"\n\n\n\"[A]nnotators identified an average of 141 connected component per publication...This indicates that either authors write very short argumentative chains or that our annotators had difficulties noticing long-range argumentative dependencies.\"\n\n\n(Lauscher et al. 2018, p.43)",
"### Other Known Limitations\n\n\n\"Expectedly, we observe higher agreements with more calibration. The agreement on argumentative relations is 23% lower than on the components, which we think is due to the high ambiguity of argumentation structures.\"\n\n\n\"Additionally, disagreements in component identification are propagated to relations as well, since the agreement on a relation implies the agreement on annotated components at both ends of the relation.\"\n\n\n(Lauscher et al. 2018, p. 43)\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Repository: URL",
"### Licensing Information\n\n\nMIT License\n\n\nThis research was partly funded by the German Research Foundation (DFG), grant number EC 477/5-1 (LOC-DB).",
"### Contributions\n\n\nThanks to @ArneBinder and @idalr for adding this dataset."
]
| [
6,
128,
31,
25,
204,
15,
4,
336,
53,
5,
128,
4,
122,
264,
108,
4,
138,
46,
5,
139,
57,
21,
135,
145,
120,
12,
36,
23
]
| [
"passage: TAGS\n#region-us \n### Dataset Summary\n\n\nThe SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing\nfine-grained argumentative components and relations, believing that argumentation needs to\nbe studied in combination with other rhetorical aspects. It is the first publicly-available argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other\nrhetorical dimensions of scientific writing.\" (Lauscher et al., 2018>), pp. 40-41)### Supported Tasks and Leaderboards\n\n\n* Tasks: Argumentation Mining, Component Identification, Relation Identification\n* Leaderboard:### Languages\n\n\nThe language in the dataset is English (scientific academic publications on computer graphics).### Dataset Variants\n\n\nThe 'sciarg' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,\nthat this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.\nThe reason is that the SciArg dataset was published with spans that are just fragmented by whitespace which seems\nto be because of the annotation tool used. In the 'sciarg' dataset, we merge these fragments, so that the document type\ncan be 'BratDocumentWithMergedSpans' (this is easier to handle for most of the task modules). However, fragmented\nspans are conceptually also available in SciArg, but they are marked with the 'parts\\_of\\_same' relation which are kept\nas they are in the 'sciarg' ('default') dataset.### Data Schema\n\n\nSee PIE-Brat Data Schema.### Usage",
"passage: ### Document Converters\n\n\nThe dataset provides document converters for the following target document types:\n\n\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'\n\t+ 'LabeledSpans', converted from 'BratDocument''s 'spans'\n\t\t- labels: 'background\\_claim', 'own\\_claim', 'data'\n\t\t- if 'spans' contain whitespace at the beginning and/or the end, the whitespace are trimmed out.\n\t+ 'BinraryRelations', converted from 'BratDocument''s 'relations'\n\t\t- labels: 'supports', 'contradicts', 'semantically\\_same', 'parts\\_of\\_same'\n\t\t- if the 'relations' label is 'semantically\\_same' or 'parts\\_of\\_same', they are merged if they are the same arguments after sorting.\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions'\n\t+ 'LabeledSpans', as above\n\t+ 'BinaryRelations', as above\n\t+ 'LabeledPartitions', partitioned 'BratDocument''s 'text', according to the paragraph, using regex.\n\t\t- labels: 'title', 'abstract', 'H1'\n\n\nSee here for the document type\ndefinitions.### Data Splits\n\n\nThe dataset consists of a single 'train' split that has 40 documents.\n\n\nFor detailed statistics on the corpus, see Lauscher et al. (2018>), p. 43), and the author's resource analysis.### Label Descriptions#### Components\n\n\n\n* 'own\\_claim' is an argumentative statement that closely relates to the authors’ own work.\n* 'background\\_claim' an argumentative statement relating to the background of authors’ work, e.g., about related work or common practices.\n* 'data' component represents a fact that serves as evidence for or against a claim. Note that references or (factual) examples can also serve as data.\n(Lauscher et al. 2018, p.41; following and simplified Toulmin, 2003)#### Relations##### Argumentative relations\n\n\n* 'support':\n\t+ if the assumed veracity of *b* increases with the veracity of *a*\n\t+ \"Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible.\" - (*Annotation Guidelines*, p. 3)\n* 'contradict':\n\t+ if the assumed veracity of *b* decreases with the veracity of *a*\n\t+ It is a bi-directional, i.e., symmetric relationship.",
"passage: ##### Non-argumentative relations\n\n\n* 'semantically\\_same': between two mentions of effectively the same claim or data component. Can be seen as *argument coreference*, analogous to entity, and *event coreference*. This relation is considered symmetric (i.e., bidirectional) and non-argumentative.\n(Lauscher et al. 2018, p.41; following Dung, 1995)\n* 'parts\\_of\\_same': when a single component is split up in several parts. It is non-argumentative, bidirectional, but also intra-component\n\n\n(*Annotation Guidelines*, pp. 4-6)\n\n\nImportant note on label counts:\n\n\nThere are currently discrepancies in label counts between\n\n\n* previous report in Lauscher et al., 2018, p. 43),\n* current report above here (labels counted in 'BratDocument''s);\n\n\npossibly since Lauscher et al., 2018 presents the numbers of the real argumentative components, whereas here discontinuous components are still split (marked with the 'parts\\_of\\_same' helper relation) and, thus, count per fragment.\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\n\"[C]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...[A]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the\ndiscourse structure.\n(Lauscher et al. 2018, p. 40)### Source Data#### Initial Data Collection and Normalization\n\n\n\"[W]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject.\" (Fisas et al. 2015, p. 44)\n\n\n\"The Corpus includes 10,789 sentences, with an average of 269.7 sentences per document.\" (p. 45)#### Who are the source language producers?\n\n\nIt can be implied from the data source that the language producers were academics in computer graphics and related fields, possibly assisted by other human editors.### Annotations#### Annotation process\n\n\n\"We trained the four annotators in a calibration phase, consisting of five iterations, in each of which all annotators annotated one publication. After each iteration we computed the inter-annotator agreement (IAA), discussed the disagreements, and, if needed, adjourned the annotation guidelines.\"\n\n\nThe detailed evolution of IAA over the five calibration iterations is depicted in Lauscher et al. (2018), p. 42, Figure 1.\n\n\nThe annotation were done using BRAT Rapid Annotation Tool (Stenetorp et al., 2012).#### Who are the annotators?\n\n\n\"We hired one expert (a researcher in computational linguistics) and three non-expert annotators (humanities and social sciences scholars).\" (Lauscher et al. 2018, p. 42)"
]
|
68bbaa6e5d710df361c9192cd0f1e04e933c932a | # Dataset Card for DistillChat V1 Mixture
*Note the [ODC-BY license](https://opendatacommons.org/licenses/by/1-0/), indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
The dataset consists of a mix of:
## General Ability
* [sharegpt_gpt4](https://huggingface.co/datasets/Wanfq/sharegpt_gpt4): All 6.21k examples.
* [pure_dove](https://huggingface.co/datasets/Wanfq/pure_dove): All 3.86k examples.
* [verified_camel](https://huggingface.co/datasets/Wanfq/verified_camel): All 0.127k examples.
* [lesswrong_amplify_instruct](https://huggingface.co/datasets/Wanfq/lesswrong_amplify_instruct): All 0.663k examples.
* [orca_best](https://huggingface.co/datasets/Wanfq/orca_best): Sampled 10k examples from 329k examples.
* [oasst_top1](https://huggingface.co/datasets/Wanfq/oasst_top1): Sampled 5k examples from 12.9k examples.
* [airoboros](https://huggingface.co/datasets/Wanfq/airoboros): Sampled 10k examples from 42.7k examples.
* [wizardlm](https://huggingface.co/datasets/Wanfq/wizardlm): Sampled 10k examples from 154k examples.
* [no_robots](https://huggingface.co/datasets/Wanfq/no_robots): All 9.5k examples.
* [ultrachat_200k](https://huggingface.co/datasets/Wanfq/ultrachat_200k): Sampled 10k examples from 208k examples.
## Coding Ability
* [glaive_code_assistant](https://huggingface.co/datasets/Wanfq/glaive_code_assistant): Sampled 5k examples from 215k examples.
* [python_code](https://huggingface.co/datasets/Wanfq/python_code): Sampled 5k examples from 22.6k examples.
* [wizardcoder](https://huggingface.co/datasets/Wanfq/wizardcoder): Sampled 5k examples from 111k examples.
## Mathematics Ability
* [metamathqa](https://huggingface.co/datasets/Wanfq/metamathqa): Sampled 5k examples from 395k examples.
* [mathinstruct](https://huggingface.co/datasets/Wanfq/mathinstruct): Sampled 5k examples from 142k examples.
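
A minimal sketch of how such a mixture could be reassembled with the `datasets` library is shown below; the split name, the shuffling seed, and the assumption that the sources share one schema are illustrative only — in practice each source would first be normalized to a common chat format.

```python
from datasets import load_dataset, concatenate_datasets

# sketch: keep all of sharegpt_gpt4, sample 10k examples each from orca_best and wizardlm
sharegpt = load_dataset("Wanfq/sharegpt_gpt4", split="train")
orca = load_dataset("Wanfq/orca_best", split="train").shuffle(seed=42).select(range(10_000))
wizardlm = load_dataset("Wanfq/wizardlm", split="train").shuffle(seed=42).select(range(10_000))

# concatenate_datasets requires identical features; normalize columns first if the sources differ
mixture = concatenate_datasets([sharegpt, orca, wizardlm]).shuffle(seed=42)
print(len(mixture))
```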
**Model Family:** All the models and the dataset are found in the [DistillChat collection](https://huggingface.co/collections/Wanfq/distillchat-6562c1fe4e74b2075a0e617e).
The length distribution of the dataset can be seen below:
* distillchat_v1_clean_split_2048_filter_wrong
| Statistics | Value |
|:---|:---:|
| #sequence | 85.53 K |
| #tokens | 54.01 M |
| avg. turns | 1.49 |
| avg. prompt length | 109.28 |
| avg. response length | 315.89 |
| L0 - 1024 | 68388 |
| L1024 - 2048 | 16560 |
| L2048 - 4096 | 535 |
| L4096 - 8192 | 42 |
| L8192 - 16384 | 2 |
| L16384 - 32768 | 0 |
* distillchat_v1_clean_split_8192_filter_wrong
| Statistics | Value |
|:---|:---:|
| #sequence | 82.10 K |
| #tokens | 56.13 M |
| avg. turns | 1.54 |
| avg. prompt length | 123.87 |
| avg. response length | 318.64 |
| L0 - 1024 | 67583 |
| L1024 - 2048 | 10469 |
| L2048 - 4096 | 3165 |
| L4096 - 8192 | 878 |
| L8192 - 16384 | 2 |
| L16384 - 32768 | 0 |
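
For reference, length buckets like the ones in these tables can be recomputed with a short script; the tokenizer, split name, and conversation field names below are assumptions, since the card does not state how the statistics were produced.

```python
from collections import Counter
from datasets import load_dataset
from transformers import AutoTokenizer

# all names here are assumptions: tokenizer choice, split name, and the "conversations"/"value" fields
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
ds = load_dataset("Wanfq/distillchat_v1_mixture", split="train")

def bucket(n_tokens: int) -> str:
    lo, hi = 0, 1024  # same power-of-two buckets as the tables above
    while n_tokens >= hi:
        lo, hi = hi, hi * 2
    return f"L{lo} - {hi}"

counts = Counter()
for ex in ds:
    text = " ".join(turn["value"] for turn in ex["conversations"])
    counts[bucket(len(tok(text)["input_ids"]))] += 1
print(counts)
```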
### License
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset. | Wanfq/distillchat_v1_mixture | [
"size_categories:10K<n<100K",
"language:en",
"license:odc-by",
"distillchat",
"SFT",
"region:us"
]
| 2023-11-26T03:03:24+00:00 | {"language": ["en"], "license": "odc-by", "size_categories": ["10K<n<100K"], "pretty_name": "DistillChat V1 Mixture", "tags": ["distillchat", "SFT"]} | 2023-11-26T12:37:55+00:00 | []
| [
"en"
]
| TAGS
#size_categories-10K<n<100K #language-English #license-odc-by #distillchat #SFT #region-us
| Dataset Card for DistillChat V1 Mixture
=======================================
*Note the ODC-BY license, indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
The dataset consists of a mix of :
General Ability
---------------
* sharegpt\_gpt4: All 6.21k examples.
* pure\_dove: All 3.86k examples.
* verified\_camel: All 0.127k examples.
* lesswrong\_amplify\_instruct: All 0.663k examples.
* orca\_best: Sampled 10k examples from 329k examples.
* oasst\_top1: Sampled 5k examples from 12.9k examples.
* airoboros: Sampled 10k examples from 42.7k examples.
* wizardlm: Sampled 10k examples from 154k examples.
* no\_robots: All 9.5k examples.
* ultrachat\_200k: Sampled 10k examples from 208k examples.
Coding Ability
--------------
* glaive\_code\_assistant: Sampled 5k examples from 215k examples.
* python\_code: Sampled 5k examples from 22.6k examples.
* wizardcoder: Sampled 5k examples from 111k examples.
Mathematics Ability
-------------------
* metamathqa: Sampled 5k examples from 395k examples.
* mathinstruct: Sampled 5k examples from 142k examples.
Model Family: All the models and the dataset are found in the DistillChat collection.
The length distribution of the dataset can be seen below:
* distillchat\_v1\_clean\_split\_2048\_filter\_wrong
* distillchat\_v1\_clean\_split\_8192\_filter\_wrong
### License
We are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
| [
"### License\n\n\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset."
]
| [
"TAGS\n#size_categories-10K<n<100K #language-English #license-odc-by #distillchat #SFT #region-us \n",
"### License\n\n\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset."
]
| [
37,
49
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-odc-by #distillchat #SFT #region-us \n### License\n\n\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset."
]
|
579505f606aa77c5f98331cc58619611f113bf6e | Dataset Structure
Data Instances
An instance in the dataset represents a user's data with the following fields: msno (user id), is_churn (whether the user left the service), playtime_per_day (average daily playtime), city, bd (age), gender, registered_via (registration method), registration_init_time, payment_method_id, payment_plan_days, plan_list_price, actual_amount_paid, is_auto_renew, transaction_date, membership_expire_date, is_cancel (whether the user cancelled their membership).
Data Fields
- msno: a string identifier for the user.
- is_churn: a binary value indicating whether the user left the service (1) or not (0).
- playtime_per_day: a float value representing the average daily playtime of the user.
- city: an integer representing the city code.
- bd: an integer representing the age of the user.
- gender: a binary value indicating the gender of the user (0 for male, 1 for female, -1 for unknown).
- registered_via: an integer representing the registration method.
- registration_init_time: an integer representing the year of registration.
- payment_method_id: an integer representing the payment method.
- payment_plan_days: an integer representing the length of the payment plan in days.
- plan_list_price: an integer representing the listed price of the payment plan.
- actual_amount_paid: an integer representing the actual amount paid by the user.
- is_auto_renew: a binary value indicating whether the user's plan auto-renews (1) or not (0).
- transaction_date: an integer representing the year of the transaction.
- membership_expire_date: an integer representing the year when the membership expires.
- is_cancel: a binary value indicating whether the user cancelled their membership (1) or not (0). | kodylow/kaggle_churn | [
"region:us"
]
| 2023-11-26T03:06:58+00:00 | {} | 2023-11-26T03:12:57+00:00 | []
| []
| TAGS
#region-us
| Dataset Structure
Data Instances
An instance in the dataset represents a user's data with the following fields: msno (user id), is_churn (whether the user left the service), playtime_per_day (average daily playtime), city, bd (age), gender, registered_via (registration method), registration_init_time, payment_method_id, payment_plan_days, plan_list_price, actual_amount_paid, is_auto_renew, transaction_date, membership_expire_date, is_cancel (whether the user cancelled their membership).
Data Fields
- msno: a string identifier for the user.
- is_churn: a binary value indicating whether the user left the service (1) or not (0).
- playtime_per_day: a float value representing the average daily playtime of the user.
- city: an integer representing the city code.
- bd: an integer representing the age of the user.
- gender: a binary value indicating the gender of the user (0 for male, 1 for female, -1 for unknown).
- registered_via: an integer representing the registration method.
- registration_init_time: an integer representing the year of registration.
- payment_method_id: an integer representing the payment method.
- payment_plan_days: an integer representing the length of the payment plan in days.
- plan_list_price: an integer representing the listed price of the payment plan.
- actual_amount_paid: an integer representing the actual amount paid by the user.
- is_auto_renew: a binary value indicating whether the user's plan auto-renews (1) or not (0).
- transaction_date: an integer representing the year of the transaction.
- membership_expire_date: an integer representing the year when the membership expires.
- is_cancel: a binary value indicating whether the user cancelled their membership (1) or not (0). | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
73db2234779466cf4e4d014448c9b9db9c1435b4 | https://huggingface.co/datasets/jondurbin/airoboros-2.2.1
features: general, single-turn, chat
length: 42.7k | Wanfq/airoboros | [
"region:us"
]
| 2023-11-26T03:11:32+00:00 | {} | 2023-11-26T04:10:15+00:00 | []
| []
| TAGS
#region-us
| URL
features: general, single-turn, chat
length: 42.7k | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
6e86a366a4c3023d6eeb3ea7924437240afcbc11 | https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2
features: coding, single-turn, task
length: 215k | Wanfq/glaive_code_assistant | [
"region:us"
]
| 2023-11-26T03:12:52+00:00 | {} | 2023-11-26T04:11:01+00:00 | []
| []
| TAGS
#region-us
| URL
features: coding, single-turn, task
length: 215k | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
198555af9e953d9665f0688a9e74ecd27add312e | https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct
features: general, multi-turn, chat
length: 0.663k | Wanfq/lesswrong_amplify_instruct | [
"region:us"
]
| 2023-11-26T03:16:26+00:00 | {} | 2023-11-26T04:15:47+00:00 | []
| []
| TAGS
#region-us
| URL
features: general, multi-turn, chat
length: 0.663k | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
1ef98442bcf314c0b8f921c4a1a5e6aa64c9b073 | https://huggingface.co/datasets/TIGER-Lab/MathInstruct
features: mathematics, single-turn, task
preserve keys: 'data/CoT/math50k_camel.json', 'data/CoT/college_math.json', 'data/CoT/TheoremQA.json', 'data/CoT/number_comparison.json', 'data/CoT/aqua_rat.json'
length: 142k | Wanfq/mathinstruct | [
"region:us"
]
| 2023-11-26T03:17:25+00:00 | {} | 2023-11-26T04:17:27+00:00 | []
| []
| TAGS
#region-us
| URL
features: mathematics, single-turn, task
preserve keys: 'data/CoT/math50k_camel.json', 'data/CoT/college_math.json', 'data/CoT/URL', 'data/CoT/number_comparison.json', 'data/CoT/aqua_rat.json'
length: 142k | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|